feat: A lot of refactoring and CSI test cases

This commit is contained in:
TheiLLeniumStudios
2026-01-10 13:42:10 +01:00
parent 46e7d74bd1
commit f0e6d3af58
79 changed files with 6434 additions and 3987 deletions


# Reloader E2E Tests
End-to-end tests that verify Reloader works correctly in a real Kubernetes cluster. Tests create workloads, modify their referenced ConfigMaps/Secrets/SecretProviderClasses, and verify that Reloader triggers the appropriate rolling updates.
## Table of Contents
- [Quick Start](#quick-start)
- [Prerequisites](#prerequisites)
- [Running Tests](#running-tests)
- [Test Coverage](#test-coverage)
- [Workload Types](#workload-types)
- [Resource Types](#resource-types)
- [Reload Strategies](#reload-strategies)
- [Reference Methods](#reference-methods)
- [Annotations](#annotations)
- [CLI Flags](#cli-flags)
- [Test Organization](#test-organization)
- [Debugging](#debugging)
- [Writing Tests](#writing-tests)
---
## Quick Start
```bash
# One-time setup: create Kind cluster and install dependencies
make e2e-setup
# Run all e2e tests
make e2e
# Cleanup when done
make e2e-cleanup
```
---
## Prerequisites
| Requirement | Version | Purpose |
|------------|---------|---------|
| Go | 1.25+ | Test execution |
| Docker/Podman | Latest | Image building |
| [Kind](https://kind.sigs.k8s.io/) | 0.20+ | Local Kubernetes cluster |
| kubectl | Latest | Cluster interaction |
| Helm | 3.x | Reloader deployment |
### Optional Dependencies
| Component | Purpose | Auto-installed by |
|-----------|---------|-------------------|
| [Argo Rollouts](https://argoproj.github.io/rollouts/) | Argo Rollout tests | `make e2e-setup` |
| [CSI Secrets Store Driver](https://secrets-store-csi-driver.sigs.k8s.io/) | SecretProviderClass tests | `make e2e-setup` |
| [Vault](https://www.vaultproject.io/) | CSI provider backend | `make e2e-setup` |
| OpenShift | DeploymentConfig tests | Requires OpenShift cluster |
---
## Running Tests
### Make Targets
| Target | Description |
|--------|-------------|
| `make e2e-setup` | Create Kind cluster and install all dependencies (Argo, CSI, Vault) |
| `make e2e` | Build image, load to Kind, run all tests |
| `make e2e-cleanup` | Remove test resources and delete Kind cluster |
| `make e2e-ci` | Full CI pipeline: setup → test → cleanup |
### Common Workflows
```bash
# Development workflow
make e2e-setup # Once at the start
make e2e # Run tests (repeat as needed)
make e2e # ...iterate...
make e2e-cleanup # When done
# CI workflow
make e2e-ci # Does everything
# Test specific image
SKIP_BUILD=true RELOADER_IMAGE=ghcr.io/stakater/reloader:v1.2.0 make e2e
```
### Running Specific Tests
```bash
# Run a specific test suite
go tool ginkgo -v ./test/e2e/core/...
go tool ginkgo -v ./test/e2e/annotations/...
go tool ginkgo -v ./test/e2e/csi/...
# Run tests matching a pattern
go tool ginkgo -v --focus="should reload when ConfigMap" ./test/e2e/...
# Run tests with specific labels
go tool ginkgo -v --label-filter="csi" ./test/e2e/...
go tool ginkgo -v --label-filter="!argo && !openshift" ./test/e2e/...
# Run all tests, continue on failure
go tool ginkgo --keep-going -v ./test/e2e/...
```
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `RELOADER_IMAGE` | Image to test | `ghcr.io/stakater/reloader:test` |
| `SKIP_BUILD` | Skip image build | `false` |
| `KIND_CLUSTER` | Kind cluster name | `reloader-e2e` |
| `KUBECONFIG` | Kubernetes config path | `~/.kube/config` |
| `E2E_TIMEOUT` | Test timeout | `45m` |
---
## Test Coverage
### Workload Types
| Workload | Annotations | EnvVars | CSI | Special Handling |
|----------|-------------|---------|-----|------------------|
| Deployment | ✅ | ✅ | ✅ | Standard rolling update |
| DaemonSet | ✅ | ✅ | ✅ | Standard rolling update |
| StatefulSet | ✅ | ✅ | ✅ | Standard rolling update |
| CronJob | ✅ | ❌ | ❌ | Updates job template |
| Job | ✅ | ❌ | ❌ | Recreates job |
| Argo Rollout | ✅ | ✅ | ❌ | Supports restart strategy |
| DeploymentConfig | ✅ | ✅ | ❌ | OpenShift only |
### Resource Types
#### ConfigMaps & Secrets
Standard Kubernetes resources that trigger reloads when their data changes.
**Tested Scenarios:**
- Data changes trigger reload
- Label-only changes do NOT trigger reload
- Annotation-only changes do NOT trigger reload
- Multiple resources in single annotation (comma-separated)
- Regex patterns for resource names
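For example, a single reload annotation can name several resources, and regex patterns are accepted too (a sketch; the resource names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Comma-separated list; regex such as "app-config-.*" also works
    configmap.reloader.stakater.com/reload: "app-config,feature-flags"
    secret.reloader.stakater.com/reload: "db-creds"
```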
#### SecretProviderClass (CSI)
CSI Secrets Store Driver integration for external secret providers (Vault, Azure, AWS, etc.).
**Tested Scenarios:**
- SecretProviderClassPodStatus changes trigger reload
- Label-only changes on SPCPS do NOT trigger reload
- Auto-detection with `secretproviderclass.reloader.stakater.com/auto: "true"`
- Exclude specific SPCs from auto-reload
- Init containers with CSI volumes
- Multiple CSI volumes per workload
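A pod spec fragment for these scenarios mounts a SecretProviderClass through the CSI driver and opts into auto-detection (a sketch; the SPC name and volume layout are illustrative):

```yaml
metadata:
  annotations:
    secretproviderclass.reloader.stakater.com/auto: "true"
spec:
  template:
    spec:
      containers:
        - name: app
          volumeMounts:
            - name: secrets
              mountPath: /mnt/secrets
              readOnly: true
      volumes:
        - name: secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: my-spc
```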
### Reload Strategies
#### Annotations Strategy (Default)
Adds/updates `reloader.stakater.com/last-reloaded-from` annotation on pod template.
```yaml
spec:
template:
metadata:
annotations:
reloader.stakater.com/last-reloaded-from: "my-configmap"
```
#### EnvVars Strategy
Adds `STAKATER_<RESOURCE>_<TYPE>` environment variable to containers.
```yaml
spec:
template:
spec:
containers:
- env:
- name: STAKATER_MY_CONFIGMAP_CONFIGMAP
value: "<sha256-hash>"
```
### Reference Methods
All methods are tested for Deployment, DaemonSet, and StatefulSet:
| Method | Description | ConfigMap | Secret | CSI |
|--------|-------------|-----------|--------|-----|
| `envFrom` | All keys as env vars | ✅ | ✅ | - |
| `valueFrom.configMapKeyRef` | Single key as env var | ✅ | - | - |
| `valueFrom.secretKeyRef` | Single key as env var | - | ✅ | - |
| Volume mount | Mount as files | ✅ | ✅ | ✅ |
| Projected volume | Combined sources | ✅ | ✅ | - |
| Init container (envFrom) | Init container env | ✅ | ✅ | - |
| Init container (volume) | Init container mount | ✅ | ✅ | ✅ |
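Two of the most common methods, `envFrom` and a volume mount, look like this on a pod spec (fragment; names are illustrative):

```yaml
spec:
  template:
    spec:
      containers:
        - name: app
          envFrom:
            - configMapRef:
                name: app-config   # all keys become env vars
          volumeMounts:
            - name: config
              mountPath: /etc/config
      volumes:
        - name: config
          configMap:
            name: app-config       # mounted as files
```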
### Annotations
#### Reload Triggers
| Annotation | Description |
|------------|-------------|
| `configmap.reloader.stakater.com/reload` | Reload on specific ConfigMap(s) change |
| `secret.reloader.stakater.com/reload` | Reload on specific Secret(s) change |
| `secretproviderclass.reloader.stakater.com/reload` | Reload on specific SPC(s) change |
#### Auto-Detection
| Annotation | Description |
|------------|-------------|
| `reloader.stakater.com/auto: "true"` | Auto-detect all mounted resources |
| `configmap.reloader.stakater.com/auto: "true"` | Auto-detect ConfigMaps only |
| `secret.reloader.stakater.com/auto: "true"` | Auto-detect Secrets only |
| `secretproviderclass.reloader.stakater.com/auto: "true"` | Auto-detect SPCs only |
#### Exclusions
| Annotation | Description |
|------------|-------------|
| `configmaps.exclude.reloader.stakater.com/reload` | Exclude ConfigMaps from auto |
| `secrets.exclude.reloader.stakater.com/reload` | Exclude Secrets from auto |
| `secretproviderclasses.exclude.reloader.stakater.com/reload` | Exclude SPCs from auto |
| `reloader.stakater.com/ignore: "true"` | On resource: prevents any reload |
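For instance, a workload can auto-detect everything it mounts while carving out one ConfigMap (fragment; names are illustrative):

```yaml
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
    configmaps.exclude.reloader.stakater.com/reload: "config-to-skip"
```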
#### Search & Match
| Annotation | Target | Description |
|------------|--------|-------------|
| `reloader.stakater.com/search: "true"` | Workload | Watch for matching resources |
| `reloader.stakater.com/match: "true"` | Resource | Trigger watchers on change |
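The pair works like this: the workload opts into searching and the resource opts into matching, so neither needs to name the other (fragment):

```yaml
# On the workload's metadata
metadata:
  annotations:
    reloader.stakater.com/search: "true"
---
# On the ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  annotations:
    reloader.stakater.com/match: "true"
```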
#### Other
| Annotation | Description |
|------------|-------------|
| `reloader.stakater.com/pause-period` | Pause deployment after reload |
### CLI Flags
Tests verify these Reloader command-line flags:
| Flag | Description |
|------|-------------|
| `--namespaces-to-ignore` | Skip specified namespaces |
| `--namespace-selector` | Only watch namespaces with matching labels |
| `--watch-globally` | Watch all namespaces vs own namespace only |
| `--resource-label-selector` | Only watch resources with matching labels |
| `--ignore-secrets` | Ignore all Secret changes |
| `--ignore-configmaps` | Ignore all ConfigMap changes |
| `--ignore-cronjobs` | Skip CronJob workloads |
| `--ignore-jobs` | Skip Job workloads |
| `--reload-on-create` | Trigger reload on resource creation |
| `--reload-on-delete` | Trigger reload on resource deletion |
| `--auto-reload-all` | Auto-reload all workloads without annotations |
| `--enable-csi-integration` | Enable SecretProviderClass support |
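In the suites these flags are set through Helm values passed to `DeployAndWait`; a sketch using the keys that appear in the suite setup (the mapping of each value key to its CLI flag is an assumption):

```yaml
reloader:
  reloadStrategy: annotations
  watchGlobally: "false"        # only watch Reloader's own namespace
  enableCSIIntegration: "true"  # presumably maps to --enable-csi-integration
```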
---
## Test Organization
```
test/e2e/
├── core/ # Core workload tests
│ ├── core_suite_test.go
│ └── workloads_test.go # All workload types, both strategies
├── annotations/ # Annotation behavior tests
│ ├── annotations_suite_test.go
│ ├── auto_reload_test.go # Auto-detection variations
│ ├── combination_test.go # Multiple annotations together
│ ├── exclude_test.go # Exclude annotations
│ ├── pause_period_test.go # Pause after reload
│ ├── resource_ignore_test.go # Ignore annotation on resources
│ └── search_match_test.go # Search/match pattern
├── flags/ # CLI flag tests
│ ├── flags_suite_test.go
│ ├── auto_reload_all_test.go
│ ├── ignore_resources_test.go
│ ├── ignored_workloads_test.go
│ ├── namespace_ignore_test.go
│ ├── namespace_selector_test.go
│ ├── reload_on_create_test.go
│ ├── reload_on_delete_test.go
│ ├── resource_selector_test.go
│ └── watch_globally_test.go
├── advanced/ # Advanced scenarios
│ ├── advanced_suite_test.go
│ ├── job_reload_test.go # Job recreation
│ ├── multi_container_test.go # Multiple containers
│ ├── pod_annotations_test.go # Pod template annotations
│ └── regex_test.go # Regex patterns
├── csi/ # CSI SecretProviderClass tests
│ ├── csi_suite_test.go
│ └── csi_test.go # SPC-specific scenarios
├── argo/ # Argo Rollouts (requires installation)
│ ├── argo_suite_test.go
│ └── rollout_test.go
├── openshift/ # OpenShift (requires cluster)
│ └── deploymentconfig_test.go
└── utils/ # Shared test utilities
├── annotations.go # Annotation builders
├── constants.go # Test constants
├── csi.go # CSI client and helpers
├── resources.go # Resource creation helpers
├── testenv.go # Test environment setup
├── wait.go # Wait/polling utilities
├── workload_adapter.go # Workload abstraction interface
├── workload_deployment.go # Deployment adapter
├── workload_daemonset.go # DaemonSet adapter
├── workload_statefulset.go # StatefulSet adapter
├── workload_cronjob.go # CronJob adapter
├── workload_job.go # Job adapter
├── workload_argo.go # Argo Rollout adapter
└── workload_openshift.go # DeploymentConfig adapter
```
---
## Debugging
### View Test Output
```bash
# Verbose output
go tool ginkgo -v ./test/e2e/core/...
# Focus on specific test
go tool ginkgo -v --focus="should reload when ConfigMap" ./test/e2e/...
# Show all spec names
go tool ginkgo -v --dry-run ./test/e2e/...
```
### Check Reloader Logs
```bash
# Find Reloader pod
kubectl get pods -A | grep reloader
# View logs
kubectl logs -n <namespace> -l app.kubernetes.io/name=reloader --tail=100 -f
# Check events
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```
### Inspect Test Resources
```bash
# List test namespaces
kubectl get ns | grep reloader
# Check workloads in test namespace
kubectl get deploy,ds,sts,cronjob,job -n <test-namespace>
# Check ConfigMaps/Secrets
kubectl get cm,secret -n <test-namespace>
# Check CSI resources
kubectl get secretproviderclass,secretproviderclasspodstatus -n <test-namespace>
```
### Common Issues
| Issue | Cause | Solution |
|-------|-------|----------|
| Tests timeout | Reloader not running | Check pod status and logs |
| CSI tests skipped | CSI driver not installed | Run `make e2e-setup` |
| Argo tests skipped | Argo Rollouts not installed | Run `make e2e-setup` |
| OpenShift tests skipped | Not an OpenShift cluster | Expected on Kind |
| "resource not found" | Missing CRDs | Install required components |
| Duplicate volume names | Test bug | Check CSI volume naming |
---
## Writing Tests
### Using the Workload Adapter Pattern
Test the same behavior across multiple workload types:
```go
DescribeTable("should reload when ConfigMap changes",
    func(workloadType utils.WorkloadType) {
        adapter := registry.Get(workloadType)
        if adapter == nil {
            Skip(fmt.Sprintf("%s not available", workloadType))
        }

        // Create ConfigMap
        _, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
            map[string]string{"key": "initial"}, nil)
        Expect(err).NotTo(HaveOccurred())

        // Create workload via adapter
        err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
            ConfigMapName:       configMapName,
            UseConfigMapEnvFrom: true,
            Annotations:         utils.BuildConfigMapReloadAnnotation(configMapName),
        })
        Expect(err).NotTo(HaveOccurred())

        // Wait for ready
        err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.DeploymentReady)
        Expect(err).NotTo(HaveOccurred())

        // Update ConfigMap
        err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName,
            map[string]string{"key": "updated"})
        Expect(err).NotTo(HaveOccurred())

        // Verify reload
        reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
            utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
        Expect(err).NotTo(HaveOccurred())
        Expect(reloaded).To(BeTrue())
    },
    Entry("Deployment", utils.WorkloadDeployment),
    Entry("DaemonSet", utils.WorkloadDaemonSet),
)
```
### Direct Resource Creation
For Deployment-specific tests:
```go
It("should reload with custom setup", func() {
    _, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
        map[string]string{"key": "value"}, nil)
    Expect(err).NotTo(HaveOccurred())

    _, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
        utils.WithConfigMapEnvFrom(configMapName),
        utils.WithAnnotations(utils.BuildAutoTrueAnnotation()),
    )
    Expect(err).NotTo(HaveOccurred())

    // ... test logic ...
})
```
### CSI Tests
```go
It("should reload when SecretProviderClassPodStatus changes", func() {
    if !utils.IsCSIDriverInstalled(ctx, csiClient) {
        Skip("CSI driver not installed")
    }

    // Create SPC
    _, err := utils.CreateSecretProviderClass(ctx, csiClient, testNamespace, spcName, nil)
    Expect(err).NotTo(HaveOccurred())

    // Create SPCPS
    _, err = utils.CreateSecretProviderClassPodStatus(ctx, csiClient, testNamespace, spcpsName, spcName,
        utils.NewSPCPSObjects("secret1", "v1"))
    Expect(err).NotTo(HaveOccurred())

    // Create Deployment with CSI volume
    _, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
        utils.WithCSIVolume(spcName),
        utils.WithAnnotations(utils.BuildSecretProviderClassReloadAnnotation(spcName)),
    )
    Expect(err).NotTo(HaveOccurred())

    // Update SPCPS
    err = utils.UpdateSecretProviderClassPodStatus(ctx, csiClient, testNamespace, spcpsName,
        utils.NewSPCPSObjects("secret1", "v2"))
    Expect(err).NotTo(HaveOccurred())

    // Verify reload
    reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
        utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
    Expect(err).NotTo(HaveOccurred())
    Expect(reloaded).To(BeTrue())
})
```
### Negative Tests
Verify that something does NOT trigger a reload:
```go
It("should NOT reload when only labels change", func() {
    // ... create ConfigMap and Deployment ...

    // Make a change that shouldn't trigger reload
    err = utils.UpdateConfigMapLabels(ctx, kubeClient, testNamespace, configMapName,
        map[string]string{"new-label": "value"})
    Expect(err).NotTo(HaveOccurred())

    // Wait briefly, then verify NO reload
    time.Sleep(utils.NegativeTestWait)
    reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
        utils.AnnotationLastReloadedFrom, utils.ShortTimeout)
    Expect(err).NotTo(HaveOccurred())
    Expect(reloaded).To(BeFalse(), "Should NOT have reloaded")
})
```
### Test Labels
Use labels to categorize tests:
```go
Entry("Deployment", Label("csi"), utils.WorkloadDeployment),
Entry("with OpenShift", Label("openshift"), utils.WorkloadDeploymentConfig),
Entry("with Argo", Label("argo"), utils.WorkloadArgoRollout),
```
Run by label:
```bash
go tool ginkgo --label-filter="csi" ./test/e2e/...
go tool ginkgo --label-filter="!openshift && !argo" ./test/e2e/...
```


@@ -6,12 +6,17 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
csiclient "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned"
"github.com/stakater/Reloader/test/e2e/utils"
)
var (
kubeClient kubernetes.Interface
csiClient csiclient.Interface
restConfig *rest.Config
testNamespace string
ctx context.Context
testEnv *utils.TestEnvironment
@@ -26,18 +31,25 @@ var _ = BeforeSuite(func() {
var err error
ctx = context.Background()
// Setup test environment
testEnv, err = utils.SetupTestEnvironment(ctx, "reloader-advanced")
Expect(err).NotTo(HaveOccurred(), "Failed to setup test environment")
// Export for use in tests
kubeClient = testEnv.KubeClient
csiClient = testEnv.CSIClient
restConfig = testEnv.RestConfig
testNamespace = testEnv.Namespace
// Deploy Reloader with annotations strategy
deployValues := map[string]string{
"reloader.reloadStrategy": "annotations",
"reloader.watchGlobally": "false", // Only watch own namespace to prevent cross-talk between test suites
}
if utils.IsCSIDriverInstalled(ctx, csiClient) {
deployValues["reloader.enableCSIIntegration"] = "true"
GinkgoWriter.Println("Deploying Reloader with CSI integration support")
}
err = testEnv.DeployAndWait(deployValues)
Expect(err).NotTo(HaveOccurred(), "Failed to deploy Reloader")
})


@@ -3,6 +3,7 @@ package advanced
import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)
@@ -35,8 +36,7 @@ var _ = Describe("Job Workload Recreation Tests", func() {
By("Creating a Job with ConfigMap envFrom")
job, err := utils.CreateJob(ctx, kubeClient, testNamespace, jobName,
utils.WithJobConfigMapEnvFrom(configMapName),
utils.WithJobAnnotations(utils.BuildConfigMapReloadAnnotation(configMapName)))
Expect(err).NotTo(HaveOccurred())
originalUID := string(job.UID)
@@ -50,8 +50,8 @@ var _ = Describe("Job Workload Recreation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Job to be recreated (new UID)")
_, recreated, err := utils.WaitForJobRecreated(ctx, kubeClient, testNamespace, jobName, originalUID,
utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(recreated).To(BeTrue(), "Job should be recreated with new UID when ConfigMap changes")
})
@@ -65,10 +65,8 @@ var _ = Describe("Job Workload Recreation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Creating a Job with Secret envFrom")
job, err := utils.CreateJob(ctx, kubeClient, testNamespace, jobName, utils.WithJobSecretEnvFrom(secretName),
utils.WithJobAnnotations(utils.BuildSecretReloadAnnotation(secretName)))
Expect(err).NotTo(HaveOccurred())
originalUID := string(job.UID)
@@ -82,8 +80,8 @@ var _ = Describe("Job Workload Recreation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Job to be recreated (new UID)")
_, recreated, err := utils.WaitForJobRecreated(ctx, kubeClient, testNamespace, jobName, originalUID,
utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(recreated).To(BeTrue(), "Job should be recreated with new UID when Secret changes")
})
@@ -99,8 +97,7 @@ var _ = Describe("Job Workload Recreation Tests", func() {
By("Creating a Job with auto annotation")
job, err := utils.CreateJob(ctx, kubeClient, testNamespace, jobName,
utils.WithJobConfigMapEnvFrom(configMapName),
utils.WithJobAnnotations(utils.BuildAutoTrueAnnotation()))
Expect(err).NotTo(HaveOccurred())
originalUID := string(job.UID)
@@ -114,8 +111,8 @@ var _ = Describe("Job Workload Recreation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Job to be recreated (new UID)")
_, recreated, err := utils.WaitForJobRecreated(ctx, kubeClient, testNamespace, jobName, originalUID,
utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(recreated).To(BeTrue(), "Job with auto=true should be recreated when ConfigMap changes")
})
@@ -131,8 +128,7 @@ var _ = Describe("Job Workload Recreation Tests", func() {
By("Creating a Job with valueFrom.configMapKeyRef")
job, err := utils.CreateJob(ctx, kubeClient, testNamespace, jobName,
utils.WithJobConfigMapKeyRef(configMapName, "config_key", "MY_CONFIG"),
utils.WithJobAnnotations(utils.BuildConfigMapReloadAnnotation(configMapName)))
Expect(err).NotTo(HaveOccurred())
originalUID := string(job.UID)
@@ -146,10 +142,11 @@ var _ = Describe("Job Workload Recreation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Job to be recreated (new UID)")
_, recreated, err := utils.WaitForJobRecreated(ctx, kubeClient, testNamespace, jobName, originalUID,
utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(recreated).To(BeTrue(),
"Job with valueFrom.configMapKeyRef should be recreated when ConfigMap changes")
})
})
@@ -163,8 +160,7 @@ var _ = Describe("Job Workload Recreation Tests", func() {
By("Creating a Job with valueFrom.secretKeyRef")
job, err := utils.CreateJob(ctx, kubeClient, testNamespace, jobName,
utils.WithJobSecretKeyRef(secretName, "secret_key", "MY_SECRET"),
utils.WithJobAnnotations(utils.BuildSecretReloadAnnotation(secretName)))
Expect(err).NotTo(HaveOccurred())
originalUID := string(job.UID)
@@ -178,8 +174,8 @@ var _ = Describe("Job Workload Recreation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Job to be recreated (new UID)")
_, recreated, err := utils.WaitForJobRecreated(ctx, kubeClient, testNamespace, jobName, originalUID,
utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(recreated).To(BeTrue(), "Job with valueFrom.secretKeyRef should be recreated when Secret changes")
})

View File

@@ -1,8 +1,12 @@
package advanced
import (
"fmt"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)
@@ -91,4 +95,125 @@ var _ = Describe("Multi-Container Tests", func() {
Expect(reloaded).To(BeTrue(), "Deployment should be reloaded when first container's ConfigMap changes")
})
})
Context("Init container with CSI volume", Label("csi"), func() {
var (
spcName string
vaultSecretPath string
)
BeforeEach(func() {
if !utils.IsCSIDriverInstalled(ctx, csiClient) {
Skip("CSI secrets store driver not installed")
}
if !utils.IsVaultProviderInstalled(ctx, kubeClient) {
Skip("Vault CSI provider not installed")
}
spcName = utils.RandName("spc")
vaultSecretPath = fmt.Sprintf("secret/%s", utils.RandName("test"))
})
AfterEach(func() {
if spcName != "" {
_ = utils.DeleteSecretProviderClass(ctx, csiClient, testNamespace, spcName)
}
if vaultSecretPath != "" {
_ = utils.DeleteVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath)
}
})
It("should reload when SecretProviderClassPodStatus used by init container changes", func() {
By("Creating a Vault secret")
err := utils.CreateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath, map[string]string{
"api_key": "initial-init-value",
})
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault")
_, err = utils.CreateSecretProviderClassWithSecret(ctx, csiClient, testNamespace, spcName, vaultSecretPath, "api_key")
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with init container using CSI volume")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithInitContainerCSIVolume(spcName),
utils.WithAnnotations(utils.BuildSecretProviderClassReloadAnnotation(spcName)),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS created by CSI driver")
spcpsName, err := utils.FindSPCPSForDeployment(ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Getting initial SPCPS version")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
Expect(err).NotTo(HaveOccurred())
By("Updating the Vault secret")
err = utils.UpdateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath, map[string]string{
"api_key": "updated-init-value",
})
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync (SPCPS version change)")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded")
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment with init container using CSI volume should be reloaded")
})
It("should reload with auto annotation when init container CSI volume changes", func() {
By("Creating a Vault secret")
err := utils.CreateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath, map[string]string{
"api_key": "initial-init-auto-value",
})
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault")
_, err = utils.CreateSecretProviderClassWithSecret(ctx, csiClient, testNamespace, spcName, vaultSecretPath, "api_key")
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with init container using CSI volume and auto annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithInitContainerCSIVolume(spcName),
utils.WithAnnotations(utils.BuildAutoTrueAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS created by CSI driver")
spcpsName, err := utils.FindSPCPSForDeployment(ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Getting initial SPCPS version")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
Expect(err).NotTo(HaveOccurred())
By("Updating the Vault secret")
err = utils.UpdateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath, map[string]string{
"api_key": "updated-init-auto-value",
})
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync (SPCPS version change)")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded")
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment with init container CSI volume and auto=true should be reloaded")
})
})
})
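The CSI cases above hinge on locating the SecretProviderClassPodStatus (SPCPS) that the secrets-store CSI driver creates per pod. The driver names these objects `<pod>-<namespace>-<spc>`, so a finder like `FindSPCPSForDeployment` can plausibly reduce to suffix matching against the deployment's pod names; the sketch below assumes that convention and is not Reloader's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// findSPCPS matches SecretProviderClassPodStatus names of the assumed form
// "<pod>-<namespace>-<spc>" against a deployment's pod names.
func findSPCPS(spcpsNames, podNames []string, namespace, spcName string) (string, bool) {
	suffix := fmt.Sprintf("-%s-%s", namespace, spcName)
	for _, name := range spcpsNames {
		if !strings.HasSuffix(name, suffix) {
			continue
		}
		pod := strings.TrimSuffix(name, suffix)
		for _, p := range podNames {
			if p == pod {
				return name, true
			}
		}
	}
	return "", false
}

func main() {
	spcps := []string{"web-7f9d-abc12-default-vault-spc"}
	pods := []string{"web-7f9d-abc12"}
	fmt.Println(findSPCPS(spcps, pods, "default", "vault-spc"))
}
```

The tests then watch that object's version (via `GetSPCPSVersion`/`WaitForSPCPSVersionChange`), which the driver is expected to update when it re-syncs the mounted secret during rotation.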

View File

@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)

View File

@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)

View File

@@ -6,14 +6,17 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
csiclient "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned"
)
var (
kubeClient kubernetes.Interface
dynamicClient dynamic.Interface
csiClient csiclient.Interface
restConfig *rest.Config
testNamespace string
ctx context.Context
cancel context.CancelFunc
@@ -25,35 +28,43 @@ func TestAnnotations(t *testing.T) {
RunSpecs(t, "Annotations Strategy E2E Suite")
}
var _ = BeforeSuite(
func() {
var err error
ctx, cancel = context.WithCancel(context.Background())
// Setup test environment
testEnv, err = utils.SetupTestEnvironment(ctx, "reloader-annotations-test")
Expect(err).NotTo(HaveOccurred(), "Failed to setup test environment")
// Export for use in tests
kubeClient = testEnv.KubeClient
dynamicClient = testEnv.DynamicClient
csiClient = testEnv.CSIClient
restConfig = testEnv.RestConfig
testNamespace = testEnv.Namespace
// Deploy Reloader with annotations strategy
deployValues := map[string]string{
"reloader.reloadStrategy": "annotations",
"reloader.watchGlobally":  "false", // Only watch own namespace to prevent cross-talk between test suites
}
if utils.IsCSIDriverInstalled(ctx, csiClient) {
deployValues["reloader.enableCSIIntegration"] = "true"
GinkgoWriter.Println("Deploying Reloader with CSI integration support")
}
err = testEnv.DeployAndWait(deployValues)
Expect(err).NotTo(HaveOccurred(), "Failed to deploy Reloader")
})
var _ = AfterSuite(
func() {
if testEnv != nil {
err := testEnv.Cleanup()
Expect(err).NotTo(HaveOccurred(), "Failed to cleanup test environment")
}
if cancel != nil {
cancel()
}
GinkgoWriter.Println("Annotations E2E Suite cleanup complete")
})

View File

@@ -1,30 +1,40 @@
package annotations
import (
"fmt"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)
var _ = Describe("Auto Reload Annotation Tests", func() {
var (
deploymentName  string
configMapName   string
secretName      string
spcName         string
vaultSecretPath string
)
BeforeEach(func() {
deploymentName = utils.RandName("deploy")
configMapName = utils.RandName("cm")
secretName = utils.RandName("secret")
spcName = utils.RandName("spc")
vaultSecretPath = fmt.Sprintf("secret/%s", utils.RandName("test"))
})
AfterEach(func() {
_ = utils.DeleteDeployment(ctx, kubeClient, testNamespace, deploymentName)
_ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName)
_ = utils.DeleteSecret(ctx, kubeClient, testNamespace, secretName)
if csiClient != nil {
_ = utils.DeleteSecretProviderClass(ctx, csiClient, testNamespace, spcName)
}
_ = utils.DeleteVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath)
})
Context("with reloader.stakater.com/auto=true annotation", func() {
@@ -225,6 +235,176 @@ var _ = Describe("Auto Reload Annotation Tests", func() {
})
})
Context("with secretproviderclass.reloader.stakater.com/auto=true annotation", Label("csi"), func() {
BeforeEach(func() {
if !utils.IsCSIDriverInstalled(ctx, csiClient) {
Skip("CSI secrets store driver not installed")
}
if !utils.IsVaultProviderInstalled(ctx, kubeClient) {
Skip("Vault CSI provider not installed")
}
})
It("should reload Deployment when SecretProviderClassPodStatus changes", func() {
By("Creating a secret in Vault")
err := utils.CreateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"api_key": "initial-value-v1"})
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault secret")
_, err = utils.CreateSecretProviderClassWithSecret(ctx, csiClient, testNamespace, spcName,
vaultSecretPath, "api_key")
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with secretproviderclass auto=true annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithCSIVolume(spcName),
utils.WithAnnotations(utils.BuildSecretProviderClassAutoAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS created by CSI driver")
spcpsName, err := utils.FindSPCPSForDeployment(ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Printf("Found SPCPS: %s\n", spcpsName)
By("Getting initial SPCPS version")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Printf("Initial SPCPS version: %s\n", initialVersion)
By("Updating the Vault secret")
err = utils.UpdateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"api_key": "updated-value-v2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync the new secret version")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Println("CSI driver synced new secret version")
By("Waiting for Deployment to be reloaded")
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment should have been reloaded for Vault secret change")
})
It("should NOT reload Deployment when ConfigMap changes (only SPC auto enabled)", func() {
By("Creating a secret in Vault")
err := utils.CreateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"api_key": "initial-value-v1"})
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault secret")
_, err = utils.CreateSecretProviderClassWithSecret(ctx, csiClient, testNamespace, spcName,
vaultSecretPath, "api_key")
Expect(err).NotTo(HaveOccurred())
By("Creating a ConfigMap")
_, err = utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with CSI volume AND ConfigMap, but only SPC auto annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithCSIVolume(spcName),
utils.WithConfigMapEnvFrom(configMapName),
utils.WithAnnotations(utils.BuildSecretProviderClassAutoAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS created by CSI driver")
spcpsName, err := utils.FindSPCPSForDeployment(ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the ConfigMap (should NOT trigger reload with SPC auto only)")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "updated"})
Expect(err).NotTo(HaveOccurred())
By("Verifying Deployment was NOT reloaded for ConfigMap change")
time.Sleep(utils.NegativeTestWait)
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeFalse(), "Deployment with SPC auto only should NOT have been reloaded for ConfigMap change")
By("Getting initial SPCPS version")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
Expect(err).NotTo(HaveOccurred())
By("Updating the Vault secret (should trigger reload)")
err = utils.UpdateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"api_key": "updated-value-v2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync the new secret version")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded for SPC change")
reloaded, err = utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment should have been reloaded for Vault secret change")
})
It("should reload when using combined auto=true annotation for SPC", func() {
By("Creating a secret in Vault")
err := utils.CreateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"api_key": "initial-value-v1"})
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault secret")
_, err = utils.CreateSecretProviderClassWithSecret(ctx, csiClient, testNamespace, spcName,
vaultSecretPath, "api_key")
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with CSI volume and general auto=true annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithCSIVolume(spcName),
utils.WithAnnotations(utils.BuildAutoTrueAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS created by CSI driver")
spcpsName, err := utils.FindSPCPSForDeployment(ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Getting initial SPCPS version")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
Expect(err).NotTo(HaveOccurred())
By("Updating the Vault secret")
err = utils.UpdateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"api_key": "updated-value-v2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync the new secret version")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded")
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment with auto=true should have been reloaded for Vault secret change")
})
})
Context("with auto annotation and explicit reload annotation together", func() {
It("should reload when auto-detected resource changes", func() {
configMapName2 := utils.RandName("cm2")

View File

@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)

View File

@@ -1,10 +1,12 @@
package annotations
import (
"fmt"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)
@@ -15,7 +17,6 @@ var _ = Describe("Exclude Annotation Tests", func() {
configMapName2 string
secretName string
secretName2 string
)
BeforeEach(func() {
@@ -24,35 +25,29 @@ var _ = Describe("Exclude Annotation Tests", func() {
configMapName2 = utils.RandName("cm2")
secretName = utils.RandName("secret")
secretName2 = utils.RandName("secret2")
})
AfterEach(func() {
_ = utils.DeleteDeployment(ctx, kubeClient, testNamespace, deploymentName)
_ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName)
_ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName2)
_ = utils.DeleteSecret(ctx, kubeClient, testNamespace, secretName)
_ = utils.DeleteSecret(ctx, kubeClient, testNamespace, secretName2)
})
Context("ConfigMap exclude annotation", func() {
It("should NOT reload when excluded ConfigMap changes", func() {
By("Creating two ConfigMaps")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName2,
map[string]string{"key2": "initial2"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with auto=true and configmaps.exclude annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName),
utils.WithConfigMapEnvFrom(configMapName2),
utils.WithAnnotations(utils.MergeAnnotations(
@@ -63,17 +58,17 @@ var _ = Describe("Exclude Annotation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the excluded ConfigMap")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "updated"})
Expect(err).NotTo(HaveOccurred())
By("Verifying Deployment was NOT reloaded (excluded ConfigMap)")
time.Sleep(utils.NegativeTestWait)
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeFalse(), "Deployment should NOT reload when excluded ConfigMap changes")
@@ -81,16 +76,16 @@ var _ = Describe("Exclude Annotation Tests", func() {
It("should reload when non-excluded ConfigMap changes", func() {
By("Creating two ConfigMaps")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName2,
map[string]string{"key2": "initial2"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with auto=true and configmaps.exclude annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName),
utils.WithConfigMapEnvFrom(configMapName2),
utils.WithAnnotations(utils.MergeAnnotations(
@@ -101,16 +96,16 @@ var _ = Describe("Exclude Annotation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the non-excluded ConfigMap")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName2,
map[string]string{"key2": "updated2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded")
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment should reload when non-excluded ConfigMap changes")
@@ -120,16 +115,16 @@ var _ = Describe("Exclude Annotation Tests", func() {
Context("Secret exclude annotation", func() {
It("should NOT reload when excluded Secret changes", func() {
By("Creating two Secrets")
_, err := utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
map[string]string{"password": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName2,
map[string]string{"password2": "initial2"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with auto=true and secrets.exclude annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithSecretEnvFrom(secretName),
utils.WithSecretEnvFrom(secretName2),
utils.WithAnnotations(utils.MergeAnnotations(
@@ -140,17 +135,17 @@ var _ = Describe("Exclude Annotation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the excluded Secret")
err = utils.UpdateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
map[string]string{"password": "updated"})
Expect(err).NotTo(HaveOccurred())
By("Verifying Deployment was NOT reloaded (excluded Secret)")
time.Sleep(utils.NegativeTestWait)
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeFalse(), "Deployment should NOT reload when excluded Secret changes")
@@ -158,16 +153,16 @@ var _ = Describe("Exclude Annotation Tests", func() {
It("should reload when non-excluded Secret changes", func() {
By("Creating two Secrets")
_, err := utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
map[string]string{"password": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName2,
map[string]string{"password2": "initial2"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with auto=true and secrets.exclude annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithSecretEnvFrom(secretName),
utils.WithSecretEnvFrom(secretName2),
utils.WithAnnotations(utils.MergeAnnotations(
@@ -178,19 +173,159 @@ var _ = Describe("Exclude Annotation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the non-excluded Secret")
err = utils.UpdateSecretFromStrings(ctx, kubeClient, testNamespace, secretName2,
map[string]string{"password2": "updated2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded")
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment should reload when non-excluded Secret changes")
})
})
Context("SecretProviderClass exclude annotation", Label("csi"), func() {
var (
spcName string
spcName2 string
vaultSecretPath string
vaultSecretPath2 string
)
BeforeEach(func() {
if !utils.IsCSIDriverInstalled(ctx, csiClient) {
Skip("CSI secrets store driver not installed")
}
if !utils.IsVaultProviderInstalled(ctx, kubeClient) {
Skip("Vault CSI provider not installed")
}
spcName = utils.RandName("spc")
spcName2 = utils.RandName("spc2")
vaultSecretPath = fmt.Sprintf("secret/%s", utils.RandName("test"))
vaultSecretPath2 = fmt.Sprintf("secret/%s", utils.RandName("test2"))
})
AfterEach(func() {
_ = utils.DeleteSecretProviderClass(ctx, csiClient, testNamespace, spcName)
_ = utils.DeleteSecretProviderClass(ctx, csiClient, testNamespace, spcName2)
_ = utils.DeleteVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath)
_ = utils.DeleteVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath2)
})
It("should NOT reload when excluded SecretProviderClassPodStatus changes", func() {
By("Creating Vault secret for the excluded SPC")
err := utils.CreateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath, map[string]string{
"api_key": "initial-excluded-value",
})
Expect(err).NotTo(HaveOccurred())
By("Creating SecretProviderClass pointing to Vault secret")
_, err = utils.CreateSecretProviderClassWithSecret(ctx, csiClient, testNamespace, spcName, vaultSecretPath, "api_key")
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with auto=true and secretproviderclasses.exclude annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithCSIVolume(spcName),
utils.WithAnnotations(utils.MergeAnnotations(
utils.BuildAutoTrueAnnotation(),
utils.BuildSecretProviderClassExcludeAnnotation(spcName),
)),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS created by CSI driver")
spcpsName, err := utils.FindSPCPSForDeployment(ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Getting initial SPCPS version")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
Expect(err).NotTo(HaveOccurred())
By("Updating the Vault secret for excluded SPC")
err = utils.UpdateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath, map[string]string{
"api_key": "updated-excluded-value",
})
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync (SPCPS version change)")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Verifying Deployment was NOT reloaded (excluded SPC)")
time.Sleep(utils.NegativeTestWait)
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeFalse(), "Deployment should NOT reload when excluded SecretProviderClassPodStatus changes")
})
It("should reload when non-excluded SecretProviderClassPodStatus changes", func() {
By("Creating two Vault secrets")
err := utils.CreateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath, map[string]string{
"api_key": "initial-excluded-value",
})
Expect(err).NotTo(HaveOccurred())
err = utils.CreateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath2, map[string]string{
"api_key": "initial-nonexcluded-value",
})
Expect(err).NotTo(HaveOccurred())
By("Creating two SecretProviderClasses")
_, err = utils.CreateSecretProviderClassWithSecret(ctx, csiClient, testNamespace, spcName, vaultSecretPath, "api_key")
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateSecretProviderClassWithSecret(ctx, csiClient, testNamespace, spcName2, vaultSecretPath2, "api_key")
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with auto=true and secretproviderclasses.exclude for first SPC only")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithCSIVolume(spcName),
utils.WithCSIVolume(spcName2),
utils.WithAnnotations(utils.MergeAnnotations(
utils.BuildAutoTrueAnnotation(),
utils.BuildSecretProviderClassExcludeAnnotation(spcName),
)),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS for non-excluded SPC")
// We need to find SPCPS for the non-excluded SPC (spcName2)
spcpsName2, err := utils.FindSPCPSForSPC(ctx, csiClient, testNamespace, spcName2, 30*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Getting initial SPCPS version for non-excluded SPC")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName2)
Expect(err).NotTo(HaveOccurred())
By("Updating the Vault secret for non-excluded SPC")
err = utils.UpdateVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath2, map[string]string{
"api_key": "updated-nonexcluded-value",
})
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync (SPCPS version change)")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName2, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded")
reloaded, err := utils.WaitForDeploymentReloaded(ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment should reload when non-excluded SecretProviderClassPodStatus changes")
})
})
})


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)
@@ -58,7 +59,7 @@ var _ = Describe("Pause Period Tests", func() {
By("Verifying Deployment has paused-at annotation")
paused, err := utils.WaitForDeploymentPaused(ctx, kubeClient, testNamespace, deploymentName,
"utils.AnnotationDeploymentPausedAt", utils.ShortTimeout)
utils.AnnotationDeploymentPausedAt, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(paused).To(BeTrue(), "Deployment should have paused-at annotation after reload")
})
@@ -94,7 +95,7 @@ var _ = Describe("Pause Period Tests", func() {
By("Verifying Deployment does NOT have paused-at annotation")
time.Sleep(utils.NegativeTestWait)
paused, err := utils.WaitForDeploymentPaused(ctx, kubeClient, testNamespace, deploymentName,
"utils.AnnotationDeploymentPausedAt", utils.ShortTimeout)
utils.AnnotationDeploymentPausedAt, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(paused).To(BeFalse(), "Deployment should NOT have paused-at annotation without pause-period")
})


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -4,19 +4,20 @@ import (
"context"
"testing"
rolloutsclient "github.com/argoproj/argo-rollouts/pkg/client/clientset/versioned"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"github.com/stakater/Reloader/test/e2e/utils"
)
var (
kubeClient kubernetes.Interface
dynamicClient dynamic.Interface
testNamespace string
ctx context.Context
testEnv *utils.TestEnvironment
kubeClient kubernetes.Interface
rolloutsClient rolloutsclient.Interface
testNamespace string
ctx context.Context
testEnv *utils.TestEnvironment
)
func TestArgo(t *testing.T) {
@@ -28,24 +29,18 @@ var _ = BeforeSuite(func() {
var err error
ctx = context.Background()
// Setup test environment
testEnv, err = utils.SetupTestEnvironment(ctx, "reloader-argo")
Expect(err).NotTo(HaveOccurred(), "Failed to setup test environment")
// Export for use in tests
kubeClient = testEnv.KubeClient
dynamicClient = testEnv.DynamicClient
rolloutsClient = testEnv.RolloutsClient
testNamespace = testEnv.Namespace
// Check if Argo Rollouts is installed
// NOTE: Argo Rollouts should be pre-installed using: ./scripts/e2e-cluster-setup.sh
// This suite does NOT install Argo Rollouts to ensure consistent behavior across all test suites.
if !utils.IsArgoRolloutsInstalled(ctx, dynamicClient) {
if !utils.IsArgoRolloutsInstalled(ctx, rolloutsClient) {
Skip("Argo Rollouts is not installed. Run ./scripts/e2e-cluster-setup.sh first")
}
GinkgoWriter.Println("Argo Rollouts is installed")
// Deploy Reloader with Argo Rollouts support
err = testEnv.DeployAndWait(map[string]string{
"reloader.reloadStrategy": "annotations",
"reloader.isArgoRollouts": "true",
@@ -54,13 +49,10 @@ var _ = BeforeSuite(func() {
})
var _ = AfterSuite(func() {
// Cleanup test environment (Reloader + namespace)
if testEnv != nil {
err := testEnv.Cleanup()
Expect(err).NotTo(HaveOccurred(), "Failed to cleanup test environment")
}
// NOTE: Argo Rollouts is NOT uninstalled here to allow other test suites (core/)
// to run Argo tests. Cleanup is handled by: ./scripts/e2e-cluster-cleanup.sh
GinkgoWriter.Println("Argo Rollouts E2E Suite cleanup complete (Argo Rollouts preserved for other suites)")
})


@@ -3,6 +3,7 @@ package argo
import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)
@@ -22,7 +23,7 @@ var _ = Describe("Argo Rollout Strategy Tests", func() {
})
AfterEach(func() {
_ = utils.DeleteArgoRollout(ctx, dynamicClient, testNamespace, rolloutName)
_ = utils.DeleteRollout(ctx, rolloutsClient, testNamespace, rolloutName)
_ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName)
})
@@ -36,14 +37,14 @@ var _ = Describe("Argo Rollout Strategy Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Creating an Argo Rollout with auto=true (default strategy)")
err = utils.CreateArgoRollout(ctx, dynamicClient, testNamespace, rolloutName,
_, err = utils.CreateRollout(ctx, rolloutsClient, testNamespace, rolloutName,
utils.WithRolloutConfigMapEnvFrom(configMapName),
utils.WithRolloutAnnotations(utils.BuildAutoTrueAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Rollout to be ready")
err = utils.WaitForRolloutReady(ctx, dynamicClient, testNamespace, rolloutName, utils.DeploymentReady)
err = utils.WaitForRolloutReady(ctx, rolloutsClient, testNamespace, rolloutName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the ConfigMap")
@@ -52,7 +53,7 @@ var _ = Describe("Argo Rollout Strategy Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Rollout to be reloaded with annotation")
reloaded, err := utils.WaitForRolloutReloaded(ctx, dynamicClient, testNamespace, rolloutName,
reloaded, err := utils.WaitForRolloutReloaded(ctx, rolloutsClient, testNamespace, rolloutName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Argo Rollout should be reloaded with default rollout strategy")
@@ -66,7 +67,7 @@ var _ = Describe("Argo Rollout Strategy Tests", func() {
By("Creating an Argo Rollout with restart strategy annotation")
// Note: auto annotation goes on pod template, rollout-strategy goes on object metadata
err = utils.CreateArgoRollout(ctx, dynamicClient, testNamespace, rolloutName,
_, err = utils.CreateRollout(ctx, rolloutsClient, testNamespace, rolloutName,
utils.WithRolloutConfigMapEnvFrom(configMapName),
utils.WithRolloutAnnotations(utils.BuildAutoTrueAnnotation()),
utils.WithRolloutObjectAnnotations(utils.BuildRolloutRestartStrategyAnnotation()),
@@ -74,7 +75,7 @@ var _ = Describe("Argo Rollout Strategy Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Rollout to be ready")
err = utils.WaitForRolloutReady(ctx, dynamicClient, testNamespace, rolloutName, utils.DeploymentReady)
err = utils.WaitForRolloutReady(ctx, rolloutsClient, testNamespace, rolloutName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the ConfigMap")
@@ -83,7 +84,7 @@ var _ = Describe("Argo Rollout Strategy Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for Rollout to have restartAt field set")
restarted, err := utils.WaitForRolloutRestartAt(ctx, dynamicClient, testNamespace, rolloutName, utils.ReloadTimeout)
restarted, err := utils.WaitForRolloutRestartAt(ctx, rolloutsClient, testNamespace, rolloutName, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(restarted).To(BeTrue(), "Argo Rollout should have restartAt field set with restart strategy")
})


@@ -6,15 +6,17 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
)
"k8s.io/client-go/rest"
csiclient "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned"
"github.com/stakater/Reloader/test/e2e/utils"
)
var (
kubeClient kubernetes.Interface
dynamicClient dynamic.Interface
csiClient csiclient.Interface
restConfig *rest.Config
testNamespace string
ctx context.Context
cancel context.CancelFunc
@@ -31,46 +33,45 @@ var _ = BeforeSuite(func() {
var err error
ctx, cancel = context.WithCancel(context.Background())
// Setup test environment
testEnv, err = utils.SetupTestEnvironment(ctx, "reloader-core-test")
Expect(err).NotTo(HaveOccurred(), "Failed to setup test environment")
// Export for use in tests
kubeClient = testEnv.KubeClient
dynamicClient = testEnv.DynamicClient
csiClient = testEnv.CSIClient
restConfig = testEnv.RestConfig
testNamespace = testEnv.Namespace
// Create adapter registry
registry = utils.NewAdapterRegistry(kubeClient, dynamicClient)
registry = utils.NewAdapterRegistry(kubeClient)
// Register ArgoRolloutAdapter if Argo Rollouts is installed
if utils.IsArgoRolloutsInstalled(ctx, dynamicClient) {
if utils.IsArgoRolloutsInstalled(ctx, testEnv.RolloutsClient) {
GinkgoWriter.Println("Argo Rollouts detected, registering ArgoRolloutAdapter")
registry.RegisterAdapter(utils.NewArgoRolloutAdapter(dynamicClient))
registry.RegisterAdapter(utils.NewArgoRolloutAdapter(testEnv.RolloutsClient))
} else {
GinkgoWriter.Println("Argo Rollouts not detected, skipping ArgoRolloutAdapter registration")
}
// Register DeploymentConfigAdapter if OpenShift is available
if utils.HasDeploymentConfigSupport(testEnv.DiscoveryClient) {
if utils.HasDeploymentConfigSupport(testEnv.DiscoveryClient) && testEnv.OpenShiftClient != nil {
GinkgoWriter.Println("OpenShift detected, registering DeploymentConfigAdapter")
registry.RegisterAdapter(utils.NewDeploymentConfigAdapter(dynamicClient))
registry.RegisterAdapter(utils.NewDeploymentConfigAdapter(testEnv.OpenShiftClient))
} else {
GinkgoWriter.Println("OpenShift not detected, skipping DeploymentConfigAdapter registration")
}
// Deploy Reloader with default annotations strategy
// Individual test contexts will redeploy with different strategies if needed
deployValues := map[string]string{
"reloader.reloadStrategy": "annotations",
"reloader.watchGlobally": "false", // Only watch own namespace to prevent cross-talk between test suites
}
// Enable Argo Rollouts support if Argo is installed
if utils.IsArgoRolloutsInstalled(ctx, dynamicClient) {
if utils.IsArgoRolloutsInstalled(ctx, testEnv.RolloutsClient) {
deployValues["reloader.isArgoRollouts"] = "true"
GinkgoWriter.Println("Deploying Reloader with Argo Rollouts support")
}
if utils.IsCSIDriverInstalled(ctx, csiClient) {
deployValues["reloader.enableCSIIntegration"] = "true"
GinkgoWriter.Println("Deploying Reloader with CSI integration support")
}
err = testEnv.DeployAndWait(deployValues)
Expect(err).NotTo(HaveOccurred(), "Failed to deploy Reloader")
})


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)
@@ -33,7 +34,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when ConfigMap referenced via valueFrom.configMapKeyRef changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
@@ -46,7 +49,7 @@ var _ = Describe("Reference Method Tests", func() {
UseConfigMapKeyRef: true,
ConfigMapKey: "config_key",
EnvVarName: "MY_CONFIG_VAR",
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
@@ -81,7 +84,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when Secret referenced via valueFrom.secretKeyRef changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a Secret")
_, err := utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
@@ -90,11 +95,11 @@ var _ = Describe("Reference Method Tests", func() {
By("Creating workload with valueFrom.secretKeyRef")
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
SecretName: secretName,
SecretName: secretName,
UseSecretKeyRef: true,
SecretKey: "secret_key",
EnvVarName: "MY_SECRET_VAR",
Annotations: utils.BuildSecretReloadAnnotation(secretName),
SecretKey: "secret_key",
EnvVarName: "MY_SECRET_VAR",
Annotations: utils.BuildSecretReloadAnnotation(secretName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
@@ -129,7 +134,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when ConfigMap in projected volume changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
@@ -140,7 +147,7 @@ var _ = Describe("Reference Method Tests", func() {
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
UseProjectedVolume: true,
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
@@ -170,7 +177,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when Secret in projected volume changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a Secret")
_, err := utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
@@ -181,7 +190,7 @@ var _ = Describe("Reference Method Tests", func() {
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
SecretName: secretName,
UseProjectedVolume: true,
Annotations: utils.BuildSecretReloadAnnotation(secretName),
Annotations: utils.BuildSecretReloadAnnotation(secretName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
@@ -211,7 +220,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when ConfigMap changes in mixed projected volume",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap and Secret")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
@@ -260,7 +271,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when Secret changes in mixed projected volume",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap and Secret")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
@@ -314,7 +327,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when ConfigMap referenced by init container changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
@@ -325,7 +340,7 @@ var _ = Describe("Reference Method Tests", func() {
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
UseInitContainer: true,
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
@@ -355,7 +370,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when Secret referenced by init container changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a Secret")
_, err := utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
@@ -366,7 +383,7 @@ var _ = Describe("Reference Method Tests", func() {
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
SecretName: secretName,
UseInitContainer: true,
Annotations: utils.BuildSecretReloadAnnotation(secretName),
Annotations: utils.BuildSecretReloadAnnotation(secretName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
@@ -398,7 +415,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when ConfigMap volume mounted in init container changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
@@ -409,7 +428,7 @@ var _ = Describe("Reference Method Tests", func() {
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
UseInitContainerVolume: true,
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
@@ -439,7 +458,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload when Secret volume mounted in init container changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a Secret")
_, err := utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
@@ -450,7 +471,7 @@ var _ = Describe("Reference Method Tests", func() {
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
SecretName: secretName,
UseInitContainerVolume: true,
Annotations: utils.BuildSecretReloadAnnotation(secretName),
Annotations: utils.BuildSecretReloadAnnotation(secretName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
@@ -485,7 +506,9 @@ var _ = Describe("Reference Method Tests", func() {
DescribeTable("should reload with auto=true when ConfigMap referenced via valueFrom changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil { Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType)) }
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
@@ -498,7 +521,7 @@ var _ = Describe("Reference Method Tests", func() {
UseConfigMapKeyRef: true,
ConfigMapKey: "auto_config_key",
EnvVarName: "AUTO_CONFIG_VAR",
Annotations: utils.BuildAutoTrueAnnotation(),
Annotations: utils.BuildAutoTrueAnnotation(),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })

File diff suppressed because it is too large


@@ -0,0 +1,75 @@
package csi
import (
"context"
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
csiclient "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned"
"github.com/stakater/Reloader/test/e2e/utils"
)
var (
kubeClient kubernetes.Interface
csiClient csiclient.Interface
restConfig *rest.Config
testNamespace string
ctx context.Context
cancel context.CancelFunc
testEnv *utils.TestEnvironment
)
func TestCSI(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "CSI SecretProviderClass E2E Suite")
}
var _ = BeforeSuite(func() {
var err error
ctx, cancel = context.WithCancel(context.Background())
// Setup test environment
testEnv, err = utils.SetupTestEnvironment(ctx, "reloader-csi-test")
Expect(err).NotTo(HaveOccurred(), "Failed to setup test environment")
// Export for use in tests
kubeClient = testEnv.KubeClient
csiClient = testEnv.CSIClient
restConfig = testEnv.RestConfig
testNamespace = testEnv.Namespace
// Skip entire suite if CSI driver not installed
if !utils.IsCSIDriverInstalled(ctx, csiClient) {
Skip("CSI secrets store driver not installed - skipping CSI suite")
}
// Skip entire suite if Vault CSI provider not installed
if !utils.IsVaultProviderInstalled(ctx, kubeClient) {
Skip("Vault CSI provider not installed - skipping CSI suite")
}
// Deploy Reloader with annotations strategy and CSI integration enabled
err = testEnv.DeployAndWait(map[string]string{
"reloader.reloadStrategy": "annotations",
"reloader.watchGlobally": "false", // Only watch own namespace to prevent cross-talk between test suites
"reloader.enableCSIIntegration": "true",
})
Expect(err).NotTo(HaveOccurred(), "Failed to deploy Reloader")
})
var _ = AfterSuite(func() {
if testEnv != nil {
err := testEnv.Cleanup()
Expect(err).NotTo(HaveOccurred(), "Failed to cleanup test environment")
}
if cancel != nil {
cancel()
}
GinkgoWriter.Println("CSI E2E Suite cleanup complete")
})

test/e2e/csi/csi_test.go (new file, 390 lines)

@@ -0,0 +1,390 @@
package csi
import (
"fmt"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)
var _ = Describe("CSI SecretProviderClass Tests", func() {
var (
deploymentName string
configMapName string
spcName string
vaultSecretPath string
)
BeforeEach(func() {
deploymentName = utils.RandName("deploy")
configMapName = utils.RandName("cm")
spcName = utils.RandName("spc")
// Each test gets its own Vault secret path to avoid conflicts
vaultSecretPath = fmt.Sprintf("secret/%s", utils.RandName("test"))
})
AfterEach(func() {
_ = utils.DeleteDeployment(ctx, kubeClient, testNamespace, deploymentName)
_ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName)
_ = utils.DeleteSecretProviderClass(ctx, csiClient, testNamespace, spcName)
// Clean up Vault secret
_ = utils.DeleteVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath)
})
Context("Real Vault Integration Tests", func() {
It("should reload when Vault secret changes", func() {
By("Creating a secret in Vault")
err := utils.CreateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"api_key": "initial-value-v1"},
)
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault secret")
_, err = utils.CreateSecretProviderClassWithSecret(
ctx, csiClient, testNamespace, spcName,
vaultSecretPath, "api_key",
)
Expect(err).NotTo(HaveOccurred())
By("Creating Deployment with CSI volume and SPC reload annotation")
_, err = utils.CreateDeployment(
ctx, kubeClient, testNamespace, deploymentName,
utils.WithCSIVolume(spcName),
utils.WithAnnotations(utils.BuildSecretProviderClassReloadAnnotation(spcName)),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS created by CSI driver")
spcpsName, err := utils.FindSPCPSForDeployment(
ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady,
)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Printf("Found SPCPS: %s\n", spcpsName)
By("Getting initial SPCPS version")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Printf("Initial SPCPS version: %s\n", initialVersion)
By("Updating the Vault secret")
err = utils.UpdateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"api_key": "updated-value-v2"},
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync the new secret version")
// Wait up to 10s for the CSI driver's rotation poll to sync the new secret version
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Println("CSI driver synced new secret version")
By("Waiting for Deployment to be reloaded by Reloader")
reloaded, err := utils.WaitForDeploymentReloaded(
ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout,
)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment should have been reloaded after Vault secret change")
})
It("should handle multiple Vault secret updates", func() {
By("Creating a secret in Vault")
err := utils.CreateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"password": "pass-v1"},
)
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault secret")
_, err = utils.CreateSecretProviderClassWithSecret(
ctx, csiClient, testNamespace, spcName,
vaultSecretPath, "password",
)
Expect(err).NotTo(HaveOccurred())
By("Creating Deployment with CSI volume")
_, err = utils.CreateDeployment(
ctx, kubeClient, testNamespace, deploymentName,
utils.WithCSIVolume(spcName),
utils.WithAnnotations(utils.BuildSecretProviderClassReloadAnnotation(spcName)),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS")
spcpsName, err := utils.FindSPCPSForDeployment(
ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady,
)
Expect(err).NotTo(HaveOccurred())
By("First update to Vault secret")
initialVersion, err := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
Expect(err).NotTo(HaveOccurred())
err = utils.UpdateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"password": "pass-v2"},
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for first CSI sync")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Waiting for first reload")
reloaded, err := utils.WaitForDeploymentReloaded(
ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout,
)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue())
By("Getting annotation value after first reload")
deploy, err := utils.GetDeployment(ctx, kubeClient, testNamespace, deploymentName)
Expect(err).NotTo(HaveOccurred())
firstReloadValue := deploy.Spec.Template.Annotations[utils.AnnotationLastReloadedFrom]
Expect(firstReloadValue).NotTo(BeEmpty())
By("Waiting for Deployment to stabilize")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Finding the NEW SPCPS after first reload (new pod = new SPCPS)")
newSpcpsName, err := utils.FindSPCPSForDeployment(
ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady,
)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Printf("New SPCPS after first reload: %s\n", newSpcpsName)
By("Second update to Vault secret")
err = utils.UpdateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"password": "pass-v3"},
)
Expect(err).NotTo(HaveOccurred())
// Note: We do not wait for SPCPS version change here because:
// 1. CSI driver syncs the new secret version to SPCPS
// 2. Reloader sees SPCPS change and immediately reloads deployment
// 3. Deployment reload creates new pod -> new SPCPS (old one deleted)
// So by the time we check, the original SPCPS no longer exists.
// Instead, we directly verify the deployment annotation changed.
By("Waiting for second reload with different annotation value")
Eventually(
func() string {
deploy, err := utils.GetDeployment(ctx, kubeClient, testNamespace, deploymentName)
if err != nil {
return ""
}
return deploy.Spec.Template.Annotations[utils.AnnotationLastReloadedFrom]
}, utils.ReloadTimeout,
).ShouldNot(Equal(firstReloadValue), "Annotation should change after second Vault secret update")
},
)
},
)
Context(
"Typed Auto Annotation Tests", func() {
It(
"should reload only SPC changes with secretproviderclass auto annotation, not ConfigMap", func() {
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(
ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "initial"}, nil,
)
Expect(err).NotTo(HaveOccurred())
By("Creating a secret in Vault")
err = utils.CreateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"token": "token-v1"},
)
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault secret")
_, err = utils.CreateSecretProviderClassWithSecret(
ctx, csiClient, testNamespace, spcName,
vaultSecretPath, "token",
)
Expect(err).NotTo(HaveOccurred())
By("Creating Deployment with ConfigMap envFrom AND CSI volume, but only SPC auto annotation")
_, err = utils.CreateDeployment(
ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName),
utils.WithCSIVolume(spcName),
utils.WithAnnotations(utils.BuildSecretProviderClassAutoAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the ConfigMap (should NOT trigger reload)")
err = utils.UpdateConfigMap(
ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "updated"},
)
Expect(err).NotTo(HaveOccurred())
By("Verifying Deployment was NOT reloaded for ConfigMap change")
time.Sleep(utils.NegativeTestWait)
reloaded, err := utils.WaitForDeploymentReloaded(
ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ShortTimeout,
)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeFalse(), "SPC auto annotation should not trigger reload for ConfigMap changes")
By("Finding the SPCPS")
spcpsName, err := utils.FindSPCPSForDeployment(
ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady,
)
Expect(err).NotTo(HaveOccurred())
By("Getting SPCPS version before Vault update")
initialVersion, _ := utils.GetSPCPSVersion(ctx, csiClient, testNamespace, spcpsName)
By("Updating the Vault secret (should trigger reload)")
err = utils.UpdateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"token": "token-v2"},
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync")
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
By("Verifying Deployment WAS reloaded for Vault secret change")
reloaded, err = utils.WaitForDeploymentReloaded(
ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout,
)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "SPC auto annotation should trigger reload for Vault secret changes")
},
)
It(
"should reload for both ConfigMap and SPC when using combined auto=true", func() {
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(
ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "initial"}, nil,
)
Expect(err).NotTo(HaveOccurred())
By("Creating a secret in Vault")
err = utils.CreateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"secret": "secret-v1"},
)
Expect(err).NotTo(HaveOccurred())
By("Creating a SecretProviderClass pointing to Vault secret")
_, err = utils.CreateSecretProviderClassWithSecret(
ctx, csiClient, testNamespace, spcName,
vaultSecretPath, "secret",
)
Expect(err).NotTo(HaveOccurred())
By("Creating Deployment with ConfigMap envFrom AND CSI volume with combined auto=true")
_, err = utils.CreateDeployment(
ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName),
utils.WithCSIVolume(spcName),
utils.WithAnnotations(utils.BuildAutoTrueAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Updating the ConfigMap (should trigger reload with auto=true)")
err = utils.UpdateConfigMap(
ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "updated"},
)
Expect(err).NotTo(HaveOccurred())
By("Verifying Deployment WAS reloaded for ConfigMap change")
reloaded, err := utils.WaitForDeploymentReloaded(
ctx, kubeClient, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout,
)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Combined auto=true should trigger reload for ConfigMap changes")
By("Waiting for Deployment to stabilize")
err = utils.WaitForDeploymentReady(ctx, kubeClient, testNamespace, deploymentName, utils.DeploymentReady)
Expect(err).NotTo(HaveOccurred())
By("Getting current annotation value")
deploy, err := utils.GetDeployment(ctx, kubeClient, testNamespace, deploymentName)
Expect(err).NotTo(HaveOccurred())
firstReloadValue := deploy.Spec.Template.Annotations[utils.AnnotationLastReloadedFrom]
By("Finding the NEW SPCPS after ConfigMap reload (new pod = new SPCPS)")
newSpcpsName, err := utils.FindSPCPSForDeployment(
ctx, csiClient, kubeClient, testNamespace, deploymentName, utils.DeploymentReady,
)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Printf("New SPCPS after ConfigMap reload: %s\n", newSpcpsName)
By("Updating the Vault secret (should also trigger reload with auto=true)")
err = utils.UpdateVaultSecret(
ctx, kubeClient, restConfig, vaultSecretPath,
map[string]string{"secret": "secret-v2"},
)
Expect(err).NotTo(HaveOccurred())
// Note: We don't wait for SPCPS version change here because:
// 1. CSI driver syncs the new secret version to SPCPS
// 2. Reloader sees SPCPS change and immediately reloads deployment
// 3. Deployment reload creates new pod → new SPCPS (old one deleted)
// So by the time we check, the original SPCPS no longer exists.
// Instead, we directly verify the deployment annotation changed.
By("Verifying Deployment WAS reloaded for Vault secret change")
Eventually(
func() string {
deploy, err := utils.GetDeployment(ctx, kubeClient, testNamespace, deploymentName)
if err != nil {
return ""
}
return deploy.Spec.Template.Annotations[utils.AnnotationLastReloadedFrom]
}, utils.ReloadTimeout,
).ShouldNot(
Equal(firstReloadValue),
"Combined auto=true should trigger reload for Vault secret changes",
)
},
)
},
)
},
)


@@ -9,10 +9,11 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"github.com/stakater/Reloader/test/e2e/utils"
)
var (


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -6,8 +6,9 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
"k8s.io/client-go/kubernetes"
"github.com/stakater/Reloader/test/e2e/utils"
)
var (


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -5,6 +5,7 @@ import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/stakater/Reloader/test/e2e/utils"
)


@@ -20,6 +20,11 @@ const (
// Value: comma-separated list of Secret names, e.g., "secret1,secret2"
AnnotationSecretReload = "secret.reloader.stakater.com/reload"
// AnnotationSecretProviderClassReload triggers reload when specified SecretProviderClass(es) change.
// Value: comma-separated list of SecretProviderClass names, e.g., "spc1,spc2"
// Note: Reloader actually watches SecretProviderClassPodStatus resources, not SecretProviderClass.
AnnotationSecretProviderClassReload = "secretproviderclass.reloader.stakater.com/reload"
// ============================================================
// Auto-reload annotations
// ============================================================
@@ -36,6 +41,10 @@ const (
// Value: "true" or "false"
AnnotationSecretAuto = "secret.reloader.stakater.com/auto"
// AnnotationSecretProviderClassAuto enables auto-reload for all referenced SecretProviderClasses only.
// Value: "true" or "false"
AnnotationSecretProviderClassAuto = "secretproviderclass.reloader.stakater.com/auto"
// ============================================================
// Exclude annotations (used with auto=true to exclude specific resources)
// ============================================================
@@ -48,6 +57,10 @@ const (
// Value: comma-separated list of Secret names
AnnotationSecretExclude = "secrets.exclude.reloader.stakater.com/reload"
// AnnotationSecretProviderClassExclude excludes specified SecretProviderClasses from auto-reload.
// Value: comma-separated list of SecretProviderClass names
AnnotationSecretProviderClassExclude = "secretproviderclasses.exclude.reloader.stakater.com/reload"
// ============================================================
// Search annotations (for regex matching)
// ============================================================
@@ -117,6 +130,13 @@ func BuildSecretReloadAnnotation(secretNames ...string) map[string]string {
}
}
// BuildSecretProviderClassReloadAnnotation creates an annotation map for SecretProviderClass reload.
func BuildSecretProviderClassReloadAnnotation(spcNames ...string) map[string]string {
return map[string]string{
AnnotationSecretProviderClassReload: joinNames(spcNames),
}
}
// BuildAutoTrueAnnotation creates an annotation map with auto=true.
func BuildAutoTrueAnnotation() map[string]string {
return map[string]string{
@@ -145,6 +165,13 @@ func BuildSecretAutoAnnotation() map[string]string {
}
}
// BuildSecretProviderClassAutoAnnotation creates an annotation map with secretproviderclass auto=true.
func BuildSecretProviderClassAutoAnnotation() map[string]string {
return map[string]string{
AnnotationSecretProviderClassAuto: AnnotationValueTrue,
}
}
// BuildSearchAnnotation creates an annotation map to enable search mode.
func BuildSearchAnnotation() map[string]string {
return map[string]string{
@@ -187,6 +214,13 @@ func BuildSecretExcludeAnnotation(secretNames ...string) map[string]string {
}
}
// BuildSecretProviderClassExcludeAnnotation creates an annotation to exclude SecretProviderClasses from auto-reload.
func BuildSecretProviderClassExcludeAnnotation(spcNames ...string) map[string]string {
return map[string]string{
AnnotationSecretProviderClassExclude: joinNames(spcNames),
}
}
// BuildPausePeriodAnnotation creates an annotation for deployment pause period.
func BuildPausePeriodAnnotation(duration string) map[string]string {
return map[string]string{


@@ -2,307 +2,119 @@ package utils
import (
"context"
"time"
rolloutv1alpha1 "github.com/argoproj/argo-rollouts/pkg/apis/rollouts/v1alpha1"
rolloutsclient "github.com/argoproj/argo-rollouts/pkg/client/clientset/versioned"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/dynamic"
"k8s.io/utils/ptr"
)
// ArgoRolloutGVR returns the GroupVersionResource for Argo Rollouts.
var ArgoRolloutGVR = schema.GroupVersionResource{
Group: "argoproj.io",
Version: "v1alpha1",
Resource: "rollouts",
}
// RolloutOption is a functional option for configuring an Argo Rollout.
type RolloutOption func(*unstructured.Unstructured)
// RolloutOption is a function that modifies a Rollout.
type RolloutOption func(*rolloutv1alpha1.Rollout)
// IsArgoRolloutsInstalled checks if Argo Rollouts CRD is installed in the cluster.
func IsArgoRolloutsInstalled(ctx context.Context, dynamicClient dynamic.Interface) bool {
// Try to list rollouts - if CRD exists, this will succeed (possibly with empty list)
_, err := dynamicClient.Resource(ArgoRolloutGVR).Namespace("default").List(ctx, metav1.ListOptions{Limit: 1})
func IsArgoRolloutsInstalled(ctx context.Context, client rolloutsclient.Interface) bool {
if client == nil {
return false
}
_, err := client.ArgoprojV1alpha1().Rollouts("default").List(ctx, metav1.ListOptions{Limit: 1})
return err == nil
}
// CreateArgoRollout creates an Argo Rollout with the given options.
func CreateArgoRollout(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string, opts ...RolloutOption) error {
rollout := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "argoproj.io/v1alpha1",
"kind": "Rollout",
"metadata": map[string]interface{}{
"name": name,
"namespace": namespace,
// CreateRollout creates an Argo Rollout with the given options.
func CreateRollout(ctx context.Context, client rolloutsclient.Interface, namespace, name string, opts ...RolloutOption) (*rolloutv1alpha1.Rollout, error) {
rollout := &rolloutv1alpha1.Rollout{
ObjectMeta: metav1.ObjectMeta{Name: name},
Spec: rolloutv1alpha1.RolloutSpec{
Replicas: ptr.To[int32](1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"app": name},
},
"spec": map[string]interface{}{
"replicas": int64(1),
"selector": map[string]interface{}{
"matchLabels": map[string]interface{}{
"app": name,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"app": name},
},
"template": map[string]interface{}{
"metadata": map[string]interface{}{
"labels": map[string]interface{}{
"app": name,
},
},
"spec": map[string]interface{}{
"containers": []interface{}{
map[string]interface{}{
"name": "app",
"image": "busybox:1.36",
"command": []interface{}{"sh", "-c", "sleep 3600"},
},
},
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
Name: "main",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
}},
},
"strategy": map[string]interface{}{
"canary": map[string]interface{}{
"steps": []interface{}{
map[string]interface{}{
"setWeight": int64(100),
},
},
},
Strategy: rolloutv1alpha1.RolloutStrategy{
Canary: &rolloutv1alpha1.CanaryStrategy{
Steps: []rolloutv1alpha1.CanaryStep{
{SetWeight: ptr.To[int32](100)},
},
},
},
},
}
// Apply options
for _, opt := range opts {
opt(rollout)
}
_, err := dynamicClient.Resource(ArgoRolloutGVR).Namespace(namespace).Create(ctx, rollout, metav1.CreateOptions{})
return err
return client.ArgoprojV1alpha1().Rollouts(namespace).Create(ctx, rollout, metav1.CreateOptions{})
}
// DeleteArgoRollout deletes an Argo Rollout.
func DeleteArgoRollout(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string) error {
err := dynamicClient.Resource(ArgoRolloutGVR).Namespace(namespace).Delete(ctx, name, metav1.DeleteOptions{})
return err
}
// GetArgoRollout retrieves an Argo Rollout.
func GetArgoRollout(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string) (*unstructured.Unstructured, error) {
return dynamicClient.Resource(ArgoRolloutGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
// DeleteRollout deletes an Argo Rollout using typed client.
func DeleteRollout(ctx context.Context, client rolloutsclient.Interface, namespace, name string) error {
return client.ArgoprojV1alpha1().Rollouts(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// WithRolloutConfigMapEnvFrom adds a ConfigMap envFrom to the Rollout.
func WithRolloutConfigMapEnvFrom(configMapName string) RolloutOption {
return func(rollout *unstructured.Unstructured) {
containers, _, _ := unstructured.NestedSlice(rollout.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
envFrom, _, _ := unstructured.NestedSlice(container, "envFrom")
envFrom = append(envFrom, map[string]interface{}{
"configMapRef": map[string]interface{}{
"name": configMapName,
},
})
container["envFrom"] = envFrom
containers[0] = container
_ = unstructured.SetNestedSlice(rollout.Object, containers, "spec", "template", "spec", "containers")
}
return func(r *rolloutv1alpha1.Rollout) {
AddEnvFromSource(&r.Spec.Template.Spec, 0, configMapName, false)
}
}
// WithRolloutSecretEnvFrom adds a Secret envFrom to the Rollout.
func WithRolloutSecretEnvFrom(secretName string) RolloutOption {
return func(rollout *unstructured.Unstructured) {
containers, _, _ := unstructured.NestedSlice(rollout.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
envFrom, _, _ := unstructured.NestedSlice(container, "envFrom")
envFrom = append(envFrom, map[string]interface{}{
"secretRef": map[string]interface{}{
"name": secretName,
},
})
container["envFrom"] = envFrom
containers[0] = container
_ = unstructured.SetNestedSlice(rollout.Object, containers, "spec", "template", "spec", "containers")
}
return func(r *rolloutv1alpha1.Rollout) {
AddEnvFromSource(&r.Spec.Template.Spec, 0, secretName, true)
}
}
// WithRolloutConfigMapVolume adds a ConfigMap volume to the Rollout.
func WithRolloutConfigMapVolume(configMapName string) RolloutOption {
return func(rollout *unstructured.Unstructured) {
// Add volume
volumes, _, _ := unstructured.NestedSlice(rollout.Object, "spec", "template", "spec", "volumes")
volumes = append(volumes, map[string]interface{}{
"name": configMapName + "-volume",
"configMap": map[string]interface{}{
"name": configMapName,
},
})
_ = unstructured.SetNestedSlice(rollout.Object, volumes, "spec", "template", "spec", "volumes")
// Add volumeMount
containers, _, _ := unstructured.NestedSlice(rollout.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
volumeMounts, _, _ := unstructured.NestedSlice(container, "volumeMounts")
volumeMounts = append(volumeMounts, map[string]interface{}{
"name": configMapName + "-volume",
"mountPath": "/etc/config/" + configMapName,
})
container["volumeMounts"] = volumeMounts
containers[0] = container
_ = unstructured.SetNestedSlice(rollout.Object, containers, "spec", "template", "spec", "containers")
}
return func(r *rolloutv1alpha1.Rollout) {
AddConfigMapVolume(&r.Spec.Template.Spec, 0, configMapName)
}
}
// WithRolloutSecretVolume adds a Secret volume to the Rollout.
func WithRolloutSecretVolume(secretName string) RolloutOption {
return func(rollout *unstructured.Unstructured) {
// Add volume
volumes, _, _ := unstructured.NestedSlice(rollout.Object, "spec", "template", "spec", "volumes")
volumes = append(volumes, map[string]interface{}{
"name": secretName + "-volume",
"secret": map[string]interface{}{
"secretName": secretName,
},
})
_ = unstructured.SetNestedSlice(rollout.Object, volumes, "spec", "template", "spec", "volumes")
// Add volumeMount
containers, _, _ := unstructured.NestedSlice(rollout.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
volumeMounts, _, _ := unstructured.NestedSlice(container, "volumeMounts")
volumeMounts = append(volumeMounts, map[string]interface{}{
"name": secretName + "-volume",
"mountPath": "/etc/secrets/" + secretName,
})
container["volumeMounts"] = volumeMounts
containers[0] = container
_ = unstructured.SetNestedSlice(rollout.Object, containers, "spec", "template", "spec", "containers")
}
return func(r *rolloutv1alpha1.Rollout) {
AddSecretVolume(&r.Spec.Template.Spec, 0, secretName)
}
}
// WithRolloutAnnotations adds annotations to the Rollout's pod template.
// WithRolloutAnnotations adds annotations to the Rollout level (where Reloader checks them).
func WithRolloutAnnotations(annotations map[string]string) RolloutOption {
return func(rollout *unstructured.Unstructured) {
annotationsMap := make(map[string]interface{})
for k, v := range annotations {
annotationsMap[k] = v
return func(r *rolloutv1alpha1.Rollout) {
if len(annotations) > 0 {
if r.Annotations == nil {
r.Annotations = make(map[string]string)
}
for k, v := range annotations {
r.Annotations[k] = v
}
}
_ = unstructured.SetNestedMap(rollout.Object, annotationsMap, "spec", "template", "metadata", "annotations")
}
}
// WithRolloutObjectAnnotations adds annotations to the Rollout's top-level metadata.
// Use this for annotations that are read from the Rollout object itself (like rollout-strategy).
func WithRolloutObjectAnnotations(annotations map[string]string) RolloutOption {
return func(rollout *unstructured.Unstructured) {
annotationsMap := make(map[string]interface{})
return func(r *rolloutv1alpha1.Rollout) {
if r.Annotations == nil {
r.Annotations = make(map[string]string)
}
for k, v := range annotations {
annotationsMap[k] = v
r.Annotations[k] = v
}
_ = unstructured.SetNestedMap(rollout.Object, annotationsMap, "metadata", "annotations")
}
}
// WaitForRolloutReady waits for an Argo Rollout to be ready.
func WaitForRolloutReady(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string, timeout time.Duration) error {
return wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
rollout, err := dynamicClient.Resource(ArgoRolloutGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, nil // Keep polling
}
// Check status.phase == "Healthy" or replicas == availableReplicas
status, found, _ := unstructured.NestedMap(rollout.Object, "status")
if !found {
return false, nil
}
phase, _, _ := unstructured.NestedString(status, "phase")
if phase == "Healthy" {
return true, nil
}
// Alternative: check replicas
replicas, _, _ := unstructured.NestedInt64(rollout.Object, "spec", "replicas")
availableReplicas, _, _ := unstructured.NestedInt64(status, "availableReplicas")
if replicas > 0 && replicas == availableReplicas {
return true, nil
}
return false, nil
})
}
// WaitForRolloutReloaded waits for an Argo Rollout's pod template to have the reloader annotation.
func WaitForRolloutReloaded(ctx context.Context, dynamicClient dynamic.Interface, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
var found bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
rollout, err := dynamicClient.Resource(ArgoRolloutGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, nil
}
// Check pod template annotations
annotations, _, _ := unstructured.NestedStringMap(rollout.Object, "spec", "template", "metadata", "annotations")
if annotations != nil {
if _, ok := annotations[annotationKey]; ok {
found = true
return true, nil
}
}
return false, nil
})
if err != nil && err != context.DeadlineExceeded {
return false, err
}
return found, nil
}
// GetRolloutPodTemplateAnnotations retrieves the pod template annotations from an Argo Rollout.
func GetRolloutPodTemplateAnnotations(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string) (map[string]string, error) {
rollout, err := dynamicClient.Resource(ArgoRolloutGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return nil, err
}
annotations, _, _ := unstructured.NestedStringMap(rollout.Object, "spec", "template", "metadata", "annotations")
return annotations, nil
}
// WaitForRolloutRestartAt waits for an Argo Rollout's spec.restartAt field to be set.
// This is used when the restart strategy is specified.
func WaitForRolloutRestartAt(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string, timeout time.Duration) (bool, error) {
var found bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
rollout, err := dynamicClient.Resource(ArgoRolloutGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, nil
}
// Check if spec.restartAt is set
restartAt, exists, _ := unstructured.NestedString(rollout.Object, "spec", "restartAt")
if exists && restartAt != "" {
found = true
return true, nil
}
return false, nil
})
if err != nil && err != context.DeadlineExceeded {
return false, err
}
return found, nil
}
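The refactor above replaces unstructured map surgery with typed functional options: `CreateRollout` builds a default `Rollout`, then applies each `RolloutOption` in order. The pattern can be sketched standalone with simplified stand-in types (illustrative only; the real code uses `rolloutv1alpha1.Rollout`):

```go
package main

import "fmt"

// Rollout is a simplified stand-in for rolloutv1alpha1.Rollout.
type Rollout struct {
	Annotations map[string]string
	EnvFrom     []string
}

// RolloutOption mirrors the functional-option signature used in argo.go.
type RolloutOption func(*Rollout)

// WithAnnotations merges annotations into the Rollout's metadata,
// lazily initializing the map as the typed helpers do.
func WithAnnotations(annotations map[string]string) RolloutOption {
	return func(r *Rollout) {
		if r.Annotations == nil {
			r.Annotations = make(map[string]string)
		}
		for k, v := range annotations {
			r.Annotations[k] = v
		}
	}
}

// WithEnvFrom appends an envFrom source name to the first container.
func WithEnvFrom(name string) RolloutOption {
	return func(r *Rollout) { r.EnvFrom = append(r.EnvFrom, name) }
}

// NewRollout builds a Rollout from defaults, then applies each option in order.
func NewRollout(opts ...RolloutOption) *Rollout {
	r := &Rollout{}
	for _, opt := range opts {
		opt(r)
	}
	return r
}

func main() {
	r := NewRollout(
		WithEnvFrom("my-configmap"),
		WithAnnotations(map[string]string{"reloader.stakater.com/auto": "true"}),
	)
	fmt.Println(r.EnvFrom, r.Annotations)
}
```

Compared with the old `unstructured.NestedSlice` manipulation, each option now mutates typed fields directly, so misspelled field paths fail at compile time instead of silently producing an invalid Rollout.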

test/e2e/utils/csi.go Normal file

@@ -0,0 +1,385 @@
package utils
import (
"bytes"
"context"
"fmt"
"strings"
"time"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/remotecommand"
csiv1 "sigs.k8s.io/secrets-store-csi-driver/apis/v1"
csiclient "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned"
)
// CSI Driver constants
const (
// CSIDriverName is the name of the secrets-store CSI driver
CSIDriverName = "secrets-store.csi.k8s.io"
// DefaultCSIProvider is the default provider name for testing (Vault)
DefaultCSIProvider = "vault"
// VaultAddress is the default Vault address in the cluster
VaultAddress = "http://vault.vault:8200"
// VaultRole is the Kubernetes auth role configured in Vault for testing
VaultRole = "test-role"
// VaultNamespace is the namespace where Vault is deployed
VaultNamespace = "vault"
// VaultPodName is the name of the Vault pod (dev mode)
VaultPodName = "vault-0"
// CSIVolumeName is the default volume name for CSI volumes in tests
CSIVolumeName = "csi-secrets-store"
// CSIMountPath is the default mount path for CSI volumes in tests
CSIMountPath = "/mnt/secrets-store"
// CSIRotationPollInterval is how often CSI driver checks for secret changes
CSIRotationPollInterval = 2 * time.Second
)
// NewCSIClient creates a new CSI client using the default kubeconfig.
func NewCSIClient() (csiclient.Interface, error) {
kubeconfig := GetKubeconfig()
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
return nil, fmt.Errorf("building config from kubeconfig: %w", err)
}
return NewCSIClientFromConfig(config)
}
// NewCSIClientFromConfig creates a new CSI client from a rest.Config.
func NewCSIClientFromConfig(config *rest.Config) (csiclient.Interface, error) {
client, err := csiclient.NewForConfig(config)
if err != nil {
return nil, fmt.Errorf("creating CSI client: %w", err)
}
return client, nil
}
// IsCSIDriverInstalled checks if the CSI secrets store driver CRDs are available in the cluster.
// This checks for the SecretProviderClass CRD which is required for CSI tests.
func IsCSIDriverInstalled(ctx context.Context, client csiclient.Interface) bool {
if client == nil {
return false
}
// Try to list SecretProviderClasses - if CRD doesn't exist, this will fail
_, err := client.SecretsstoreV1().SecretProviderClasses("default").List(ctx, metav1.ListOptions{Limit: 1})
return err == nil
}
// IsVaultProviderInstalled checks if Vault CSI provider is installed by checking for the vault-csi-provider DaemonSet.
// This is used to determine if CSI tests with actual volume mounting can run.
func IsVaultProviderInstalled(ctx context.Context, kubeClient kubernetes.Interface) bool {
if kubeClient == nil {
return false
}
// Check if vault-csi-provider DaemonSet exists in vault namespace
_, err := kubeClient.AppsV1().DaemonSets("vault").Get(ctx, "vault-csi-provider", metav1.GetOptions{})
return err == nil
}
// CreateSecretProviderClass creates a SecretProviderClass in the given namespace.
// If params is nil, it creates a Vault-compatible SecretProviderClass with default test settings.
func CreateSecretProviderClass(ctx context.Context, client csiclient.Interface, namespace, name string, params map[string]string) (
*csiv1.SecretProviderClass, error,
) {
if params == nil {
// Default Vault-compatible parameters for testing
params = map[string]string{
"vaultAddress": VaultAddress,
"roleName": VaultRole,
"objects": `- objectName: "test-secret"
secretPath: "secret/data/test"
secretKey: "username"`,
}
}
spc := &csiv1.SecretProviderClass{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: csiv1.SecretProviderClassSpec{
Provider: DefaultCSIProvider,
Parameters: params,
},
}
created, err := client.SecretsstoreV1().SecretProviderClasses(namespace).Create(ctx, spc, metav1.CreateOptions{})
if err != nil {
return nil, fmt.Errorf("creating SecretProviderClass %s/%s: %w", namespace, name, err)
}
return created, nil
}
// CreateSecretProviderClassWithSecret creates a SecretProviderClass that fetches a specific secret from Vault.
// secretPath should be like "secret/mysecret" (the function converts it to KV v2 format "secret/data/mysecret").
// secretKey is the key within that secret to fetch.
func CreateSecretProviderClassWithSecret(ctx context.Context, client csiclient.Interface, namespace, name, secretPath, secretKey string) (
*csiv1.SecretProviderClass, error,
) {
// Convert KV v1 style path to KV v2 data path
// "secret/foo" -> "secret/data/foo"
kvV2Path := secretPath
if strings.HasPrefix(secretPath, "secret/") && !strings.HasPrefix(secretPath, "secret/data/") {
kvV2Path = strings.Replace(secretPath, "secret/", "secret/data/", 1)
}
params := map[string]string{
"vaultAddress": VaultAddress,
"roleName": VaultRole,
"objects": fmt.Sprintf(
`- objectName: "%s"
secretPath: "%s"
secretKey: "%s"`, secretKey, kvV2Path, secretKey,
),
}
return CreateSecretProviderClass(ctx, client, namespace, name, params)
}
// DeleteSecretProviderClass deletes a SecretProviderClass by name.
func DeleteSecretProviderClass(ctx context.Context, client csiclient.Interface, namespace, name string) error {
err := client.SecretsstoreV1().SecretProviderClasses(namespace).Delete(ctx, name, metav1.DeleteOptions{})
if err != nil {
return fmt.Errorf("deleting SecretProviderClass %s/%s: %w", namespace, name, err)
}
return nil
}
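The KV v1-to-v2 path rewrite inside `CreateSecretProviderClassWithSecret` is the one piece of logic in this helper that is easy to get wrong (double-converting an already-converted path). Extracted as a pure function, it looks like this (standalone sketch; `toKVv2Path` is a hypothetical name, the real code inlines the conversion):

```go
package main

import (
	"fmt"
	"strings"
)

// toKVv2Path converts a Vault KV v1-style path to the KV v2 data path,
// mirroring the conversion in CreateSecretProviderClassWithSecret:
// "secret/foo" -> "secret/data/foo". Paths already containing
// "secret/data/" are returned unchanged, so the function is idempotent.
func toKVv2Path(secretPath string) string {
	if strings.HasPrefix(secretPath, "secret/") && !strings.HasPrefix(secretPath, "secret/data/") {
		return strings.Replace(secretPath, "secret/", "secret/data/", 1)
	}
	return secretPath
}

func main() {
	fmt.Println(toKVv2Path("secret/test"))
	fmt.Println(toKVv2Path("secret/data/test"))
}
```

Only the first `secret/` segment is replaced (`strings.Replace` with count 1), so a path like `secret/team/secret/app` keeps its inner `secret/` segment intact.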
// UpdateSecretProviderClassPodStatusLabels updates only the labels on a SecretProviderClassPodStatus.
// This should NOT trigger a reload (used for negative testing to verify Reloader ignores label-only changes).
func UpdateSecretProviderClassPodStatusLabels(ctx context.Context, client csiclient.Interface, namespace, name string, labels map[string]string) error {
spcps, err := client.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return fmt.Errorf("getting SecretProviderClassPodStatus %s/%s: %w", namespace, name, err)
}
if spcps.Labels == nil {
spcps.Labels = make(map[string]string)
}
for k, v := range labels {
spcps.Labels[k] = v
}
_, err = client.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).Update(ctx, spcps, metav1.UpdateOptions{})
if err != nil {
return fmt.Errorf("updating SecretProviderClassPodStatus labels %s/%s: %w", namespace, name, err)
}
return nil
}
// =============================================================================
// Vault Integration Helpers
// =============================================================================
// CreateVaultSecret creates a new secret in Vault.
// secretPath should be like "secret/test" (without "data" prefix - it's added automatically).
// data is a map of key-value pairs to store in the secret.
func CreateVaultSecret(ctx context.Context, kubeClient kubernetes.Interface, restConfig *rest.Config, secretPath string, data map[string]string) error {
return UpdateVaultSecret(ctx, kubeClient, restConfig, secretPath, data)
}
// UpdateVaultSecret updates a secret in Vault. This triggers the CSI driver to
// sync the new secret version, which creates/updates the SecretProviderClassPodStatus.
// secretPath should be like "secret/test" (without "data" prefix - it's added automatically).
// data is a map of key-value pairs to store in the secret.
func UpdateVaultSecret(ctx context.Context, kubeClient kubernetes.Interface, restConfig *rest.Config, secretPath string, data map[string]string) error {
// Build the vault kv put command
// Format: vault kv put secret/path key1=value1 key2=value2
args := []string{"kv", "put", secretPath}
for k, v := range data {
args = append(args, fmt.Sprintf("%s=%s", k, v))
}
if err := execInVaultPod(ctx, kubeClient, restConfig, args); err != nil {
return fmt.Errorf("updating Vault secret %s: %w", secretPath, err)
}
return nil
}
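The argument list built above maps directly onto the Vault CLI's `vault kv put <path> key=value ...` form. A standalone sketch of just the argument assembly (`buildKVPutArgs` is an illustrative name); note that Go map iteration order is random, so with multiple keys the `key=value` pairs may appear in any order, which the CLI accepts:

```go
package main

import "fmt"

// buildKVPutArgs mirrors how UpdateVaultSecret assembles the CLI arguments
// that execInVaultPod later prepends with the "vault" binary name.
func buildKVPutArgs(secretPath string, data map[string]string) []string {
	args := []string{"kv", "put", secretPath}
	for k, v := range data {
		args = append(args, fmt.Sprintf("%s=%s", k, v))
	}
	return args
}

func main() {
	args := buildKVPutArgs("secret/test", map[string]string{"username": "admin"})
	fmt.Println(args) // [kv put secret/test username=admin]
}
```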
// DeleteVaultSecret deletes a secret from Vault.
// secretPath should be like "secret/test".
func DeleteVaultSecret(ctx context.Context, kubeClient kubernetes.Interface, restConfig *rest.Config, secretPath string) error {
args := []string{"kv", "metadata", "delete", secretPath}
if err := execInVaultPod(ctx, kubeClient, restConfig, args); err != nil {
// Ignore not found errors
if strings.Contains(err.Error(), "No value found") {
return nil
}
return fmt.Errorf("deleting Vault secret %s: %w", secretPath, err)
}
return nil
}
// execInVaultPod executes a vault command in the Vault pod.
func execInVaultPod(ctx context.Context, kubeClient kubernetes.Interface, restConfig *rest.Config, args []string) error {
req := kubeClient.CoreV1().RESTClient().Post().
Resource("pods").
Name(VaultPodName).
Namespace(VaultNamespace).
SubResource("exec").
VersionedParams(
&corev1.PodExecOptions{
Container: "vault",
Command: append([]string{"vault"}, args...),
Stdout: true,
Stderr: true,
}, scheme.ParameterCodec,
)
exec, err := remotecommand.NewSPDYExecutor(restConfig, "POST", req.URL())
if err != nil {
return fmt.Errorf("creating executor: %w", err)
}
var stderr bytes.Buffer
err = exec.StreamWithContext(
ctx, remotecommand.StreamOptions{
Stderr: &stderr,
},
)
if err != nil {
return fmt.Errorf("executing command: %w (stderr: %s)", err, stderr.String())
}
return nil
}
// WaitForSPCPSVersionChange waits for any object version reported in the named
// SecretProviderClassPodStatus to change from the initial version. It is used after
// updating a Vault secret to wait for the CSI driver to sync the new version.
func WaitForSPCPSVersionChange(ctx context.Context, client csiclient.Interface, namespace, spcpsName, initialVersion string, timeout time.Duration) error {
deadline := time.Now().Add(timeout)
for time.Now().Before(deadline) {
spcps, err := client.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).Get(ctx, spcpsName, metav1.GetOptions{})
if err == nil && spcps.Status.Mounted && len(spcps.Status.Objects) > 0 {
// Check if any object version has changed
for _, obj := range spcps.Status.Objects {
if obj.Version != initialVersion {
return nil
}
}
}
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(1 * time.Second):
}
}
return fmt.Errorf("timeout waiting for SecretProviderClassPodStatus %s/%s version to change from %s", namespace, spcpsName, initialVersion)
}
// FindSPCPSForDeployment finds the SecretProviderClassPodStatus created by the CSI
// driver for pods of a given deployment. Returns the name of the first matching SPCPS.
func FindSPCPSForDeployment(ctx context.Context, csiClient csiclient.Interface, kubeClient kubernetes.Interface, namespace, deploymentName string, timeout time.Duration) (
string, error,
) {
deadline := time.Now().Add(timeout)
for time.Now().Before(deadline) {
// Get pods for the deployment
pods, err := kubeClient.CoreV1().Pods(namespace).List(
ctx, metav1.ListOptions{
LabelSelector: fmt.Sprintf("app=%s", deploymentName),
},
)
if err != nil {
select {
case <-ctx.Done():
return "", ctx.Err()
case <-time.After(1 * time.Second):
continue
}
}
// Look for SPCPS that references any of these pods
spcpsList, err := csiClient.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).List(ctx, metav1.ListOptions{})
if err != nil {
select {
case <-ctx.Done():
return "", ctx.Err()
case <-time.After(1 * time.Second):
continue
}
}
for _, pod := range pods.Items {
for _, spcps := range spcpsList.Items {
if spcps.Status.PodName == pod.Name && spcps.Status.Mounted {
return spcps.Name, nil
}
}
}
select {
case <-ctx.Done():
return "", ctx.Err()
case <-time.After(1 * time.Second):
}
}
return "", fmt.Errorf("timeout finding SecretProviderClassPodStatus for deployment %s/%s", namespace, deploymentName)
}
// FindSPCPSForSPC finds the SecretProviderClassPodStatus created by the CSI driver
// that references a specific SecretProviderClass. Returns the name of the first matching SPCPS.
func FindSPCPSForSPC(ctx context.Context, csiClient csiclient.Interface, namespace, spcName string, timeout time.Duration) (string, error) {
deadline := time.Now().Add(timeout)
for time.Now().Before(deadline) {
spcpsList, err := csiClient.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).List(ctx, metav1.ListOptions{})
if err != nil {
select {
case <-ctx.Done():
return "", ctx.Err()
case <-time.After(1 * time.Second):
continue
}
}
for _, spcps := range spcpsList.Items {
if spcps.Status.SecretProviderClassName == spcName && spcps.Status.Mounted {
return spcps.Name, nil
}
}
select {
case <-ctx.Done():
return "", ctx.Err()
case <-time.After(1 * time.Second):
}
}
return "", fmt.Errorf("timeout finding SecretProviderClassPodStatus for SPC %s/%s", namespace, spcName)
}
// GetSPCPSVersion gets the current version string from a SecretProviderClassPodStatus.
// Returns a comma-joined string of all object versions (so a change to any single
// object is detectable), or an empty string if the status reports no objects.
func GetSPCPSVersion(ctx context.Context, client csiclient.Interface, namespace, name string) (string, error) {
spcps, err := client.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return "", fmt.Errorf("getting SecretProviderClassPodStatus %s/%s: %w", namespace, name, err)
}
if len(spcps.Status.Objects) == 0 {
return "", nil
}
// Return concatenated versions for all objects to detect any change
var versions []string
for _, obj := range spcps.Status.Objects {
versions = append(versions, obj.Version)
}
return strings.Join(versions, ","), nil
}


@@ -1,27 +1,9 @@
package utils
import (
"context"
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
)
// DeploymentConfigGVR returns the GroupVersionResource for OpenShift DeploymentConfigs.
var DeploymentConfigGVR = schema.GroupVersionResource{
Group: "apps.openshift.io",
Version: "v1",
Resource: "deploymentconfigs",
}
// DCOption is a functional option for configuring a DeploymentConfig.
type DCOption func(*unstructured.Unstructured)
// HasDeploymentConfigSupport checks if the cluster has OpenShift DeploymentConfig API available.
func HasDeploymentConfigSupport(discoveryClient discovery.DiscoveryInterface) bool {
_, apiLists, err := discoveryClient.ServerGroupsAndResources()
@@ -39,227 +21,3 @@ func HasDeploymentConfigSupport(discoveryClient discovery.DiscoveryInterface) bo
return false
}
// CreateDeploymentConfig creates an OpenShift DeploymentConfig with the given options.
func CreateDeploymentConfig(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string, opts ...DCOption) error {
dc := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "apps.openshift.io/v1",
"kind": "DeploymentConfig",
"metadata": map[string]interface{}{
"name": name,
"namespace": namespace,
},
"spec": map[string]interface{}{
"replicas": int64(1),
"selector": map[string]interface{}{
"app": name,
},
"template": map[string]interface{}{
"metadata": map[string]interface{}{
"labels": map[string]interface{}{
"app": name,
},
},
"spec": map[string]interface{}{
"containers": []interface{}{
map[string]interface{}{
"name": "app",
"image": "busybox:1.36",
"command": []interface{}{"sh", "-c", "sleep 3600"},
},
},
},
},
"triggers": []interface{}{
map[string]interface{}{
"type": "ConfigChange",
},
},
},
},
}
// Apply options
for _, opt := range opts {
opt(dc)
}
_, err := dynamicClient.Resource(DeploymentConfigGVR).Namespace(namespace).Create(ctx, dc, metav1.CreateOptions{})
return err
}
// DeleteDeploymentConfig deletes a DeploymentConfig.
func DeleteDeploymentConfig(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string) error {
return dynamicClient.Resource(DeploymentConfigGVR).Namespace(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// GetDeploymentConfig retrieves a DeploymentConfig.
func GetDeploymentConfig(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string) (*unstructured.Unstructured, error) {
return dynamicClient.Resource(DeploymentConfigGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
}
// WithDCConfigMapEnvFrom adds a ConfigMap envFrom to the DeploymentConfig.
func WithDCConfigMapEnvFrom(configMapName string) DCOption {
return func(dc *unstructured.Unstructured) {
containers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
envFrom, _, _ := unstructured.NestedSlice(container, "envFrom")
envFrom = append(envFrom, map[string]interface{}{
"configMapRef": map[string]interface{}{
"name": configMapName,
},
})
container["envFrom"] = envFrom
containers[0] = container
_ = unstructured.SetNestedSlice(dc.Object, containers, "spec", "template", "spec", "containers")
}
}
}
// WithDCSecretEnvFrom adds a Secret envFrom to the DeploymentConfig.
func WithDCSecretEnvFrom(secretName string) DCOption {
return func(dc *unstructured.Unstructured) {
containers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
envFrom, _, _ := unstructured.NestedSlice(container, "envFrom")
envFrom = append(envFrom, map[string]interface{}{
"secretRef": map[string]interface{}{
"name": secretName,
},
})
container["envFrom"] = envFrom
containers[0] = container
_ = unstructured.SetNestedSlice(dc.Object, containers, "spec", "template", "spec", "containers")
}
}
}
// WithDCConfigMapVolume adds a ConfigMap volume to the DeploymentConfig.
func WithDCConfigMapVolume(configMapName string) DCOption {
return func(dc *unstructured.Unstructured) {
// Add volume
volumes, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "volumes")
volumes = append(volumes, map[string]interface{}{
"name": configMapName + "-volume",
"configMap": map[string]interface{}{
"name": configMapName,
},
})
_ = unstructured.SetNestedSlice(dc.Object, volumes, "spec", "template", "spec", "volumes")
// Add volumeMount
containers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
volumeMounts, _, _ := unstructured.NestedSlice(container, "volumeMounts")
volumeMounts = append(volumeMounts, map[string]interface{}{
"name": configMapName + "-volume",
"mountPath": "/etc/config/" + configMapName,
})
container["volumeMounts"] = volumeMounts
containers[0] = container
_ = unstructured.SetNestedSlice(dc.Object, containers, "spec", "template", "spec", "containers")
}
}
}
// WithDCSecretVolume adds a Secret volume to the DeploymentConfig.
func WithDCSecretVolume(secretName string) DCOption {
return func(dc *unstructured.Unstructured) {
// Add volume
volumes, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "volumes")
volumes = append(volumes, map[string]interface{}{
"name": secretName + "-volume",
"secret": map[string]interface{}{
"secretName": secretName,
},
})
_ = unstructured.SetNestedSlice(dc.Object, volumes, "spec", "template", "spec", "volumes")
// Add volumeMount
containers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
volumeMounts, _, _ := unstructured.NestedSlice(container, "volumeMounts")
volumeMounts = append(volumeMounts, map[string]interface{}{
"name": secretName + "-volume",
"mountPath": "/etc/secrets/" + secretName,
})
container["volumeMounts"] = volumeMounts
containers[0] = container
_ = unstructured.SetNestedSlice(dc.Object, containers, "spec", "template", "spec", "containers")
}
}
}
// WithDCAnnotations adds annotations to the DeploymentConfig's pod template.
func WithDCAnnotations(annotations map[string]string) DCOption {
return func(dc *unstructured.Unstructured) {
annotationsMap := make(map[string]interface{})
for k, v := range annotations {
annotationsMap[k] = v
}
_ = unstructured.SetNestedMap(dc.Object, annotationsMap, "spec", "template", "metadata", "annotations")
}
}
// WaitForDeploymentConfigReady waits for a DeploymentConfig to be ready.
func WaitForDeploymentConfigReady(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string, timeout time.Duration) error {
return wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
dc, err := dynamicClient.Resource(DeploymentConfigGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, nil // Keep polling
}
// Check replicas == readyReplicas
replicas, _, _ := unstructured.NestedInt64(dc.Object, "spec", "replicas")
readyReplicas, _, _ := unstructured.NestedInt64(dc.Object, "status", "readyReplicas")
if replicas > 0 && replicas == readyReplicas {
return true, nil
}
return false, nil
})
}
// WaitForDeploymentConfigReloaded waits for a DeploymentConfig's pod template to have the reloader annotation.
func WaitForDeploymentConfigReloaded(ctx context.Context, dynamicClient dynamic.Interface, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
var found bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
dc, err := dynamicClient.Resource(DeploymentConfigGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, nil
}
// Check pod template annotations
annotations, _, _ := unstructured.NestedStringMap(dc.Object, "spec", "template", "metadata", "annotations")
if annotations != nil {
if _, ok := annotations[annotationKey]; ok {
found = true
return true, nil
}
}
return false, nil
})
if err != nil && err != context.DeadlineExceeded {
return false, err
}
return found, nil
}
// GetDeploymentConfigPodTemplateAnnotations retrieves the pod template annotations from a DeploymentConfig.
func GetDeploymentConfigPodTemplateAnnotations(ctx context.Context, dynamicClient dynamic.Interface, namespace, name string) (map[string]string, error) {
dc, err := dynamicClient.Resource(DeploymentConfigGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return nil, err
}
annotations, _, _ := unstructured.NestedStringMap(dc.Object, "spec", "template", "metadata", "annotations")
return annotations, nil
}

test/e2e/utils/podspec.go (new file, +257 lines)

@@ -0,0 +1,257 @@
package utils
import (
"fmt"
corev1 "k8s.io/api/core/v1"
"k8s.io/utils/ptr"
)
// AddEnvFromSource adds ConfigMap or Secret envFrom to a container.
func AddEnvFromSource(spec *corev1.PodSpec, containerIdx int, name string, isSecret bool) {
if containerIdx >= len(spec.Containers) {
return
}
source := corev1.EnvFromSource{}
if isSecret {
source.SecretRef = &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
}
} else {
source.ConfigMapRef = &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
}
}
spec.Containers[containerIdx].EnvFrom = append(spec.Containers[containerIdx].EnvFrom, source)
}
// AddVolume adds a volume and mount to a container.
func AddVolume(spec *corev1.PodSpec, containerIdx int, volume corev1.Volume, mountPath string) {
spec.Volumes = append(spec.Volumes, volume)
if containerIdx < len(spec.Containers) {
spec.Containers[containerIdx].VolumeMounts = append(
spec.Containers[containerIdx].VolumeMounts,
corev1.VolumeMount{Name: volume.Name, MountPath: mountPath},
)
}
}
// AddConfigMapVolume adds ConfigMap volume and mount.
func AddConfigMapVolume(spec *corev1.PodSpec, containerIdx int, name string) {
AddVolume(spec, containerIdx, corev1.Volume{
Name: "cm-" + name,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
}, "/etc/config/"+name)
}
// AddSecretVolume adds Secret volume and mount.
func AddSecretVolume(spec *corev1.PodSpec, containerIdx int, name string) {
AddVolume(spec, containerIdx, corev1.Volume{
Name: "secret-" + name,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{SecretName: name},
},
}, "/etc/secrets/"+name)
}
// AddProjectedVolume adds projected volume with ConfigMap and/or Secret.
func AddProjectedVolume(spec *corev1.PodSpec, containerIdx int, cmName, secretName string) {
sources := []corev1.VolumeProjection{}
if cmName != "" {
sources = append(sources, corev1.VolumeProjection{
ConfigMap: &corev1.ConfigMapProjection{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
})
}
if secretName != "" {
sources = append(sources, corev1.VolumeProjection{
Secret: &corev1.SecretProjection{
LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
},
})
}
AddVolume(spec, containerIdx, corev1.Volume{
Name: "projected-config",
VolumeSource: corev1.VolumeSource{
Projected: &corev1.ProjectedVolumeSource{Sources: sources},
},
}, "/etc/projected")
}
// AddKeyRef adds env var from ConfigMap or Secret key.
func AddKeyRef(spec *corev1.PodSpec, containerIdx int, resourceName, key, envVarName string, isSecret bool) {
if containerIdx >= len(spec.Containers) {
return
}
envVar := corev1.EnvVar{Name: envVarName}
if isSecret {
envVar.ValueFrom = &corev1.EnvVarSource{
SecretKeyRef: &corev1.SecretKeySelector{
LocalObjectReference: corev1.LocalObjectReference{Name: resourceName},
Key: key,
},
}
} else {
envVar.ValueFrom = &corev1.EnvVarSource{
ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
LocalObjectReference: corev1.LocalObjectReference{Name: resourceName},
Key: key,
},
}
}
spec.Containers[containerIdx].Env = append(spec.Containers[containerIdx].Env, envVar)
}
// AddCSIVolume adds CSI volume referencing SecretProviderClass.
func AddCSIVolume(spec *corev1.PodSpec, containerIdx int, spcName string) {
volumeName := "csi-" + spcName
mountPath := "/mnt/secrets-store/" + spcName
spec.Volumes = append(spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
CSI: &corev1.CSIVolumeSource{
Driver: CSIDriverName,
ReadOnly: ptr.To(true),
VolumeAttributes: map[string]string{
"secretProviderClass": spcName,
},
},
},
})
if containerIdx < len(spec.Containers) {
spec.Containers[containerIdx].VolumeMounts = append(
spec.Containers[containerIdx].VolumeMounts,
corev1.VolumeMount{Name: volumeName, MountPath: mountPath, ReadOnly: true},
)
}
}
// AddInitContainer adds init container with optional envFrom references.
func AddInitContainer(spec *corev1.PodSpec, cmName, secretName string) {
init := corev1.Container{
Name: "init",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo init done"},
}
if cmName != "" {
init.EnvFrom = append(init.EnvFrom, corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
})
}
if secretName != "" {
init.EnvFrom = append(init.EnvFrom, corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
},
})
}
spec.InitContainers = append(spec.InitContainers, init)
}
// AddInitContainerWithVolumes adds init container with volume mounts.
func AddInitContainerWithVolumes(spec *corev1.PodSpec, cmName, secretName string) {
init := corev1.Container{
Name: "init",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo init done"},
}
if cmName != "" {
volumeName := "init-cm-" + cmName
spec.Volumes = append(spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
},
})
init.VolumeMounts = append(init.VolumeMounts, corev1.VolumeMount{
Name: volumeName,
MountPath: "/etc/init-config/" + cmName,
})
}
if secretName != "" {
volumeName := "init-secret-" + secretName
spec.Volumes = append(spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{SecretName: secretName},
},
})
init.VolumeMounts = append(init.VolumeMounts, corev1.VolumeMount{
Name: volumeName,
MountPath: "/etc/init-secrets/" + secretName,
})
}
spec.InitContainers = append(spec.InitContainers, init)
}
// ApplyWorkloadConfig applies all WorkloadConfig settings to a PodSpec.
// This single function replaces all the duplicate buildXxxOptions functions.
func ApplyWorkloadConfig(spec *corev1.PodSpec, cfg WorkloadConfig) {
if cfg.UseConfigMapEnvFrom && cfg.ConfigMapName != "" {
AddEnvFromSource(spec, 0, cfg.ConfigMapName, false)
}
if cfg.UseSecretEnvFrom && cfg.SecretName != "" {
AddEnvFromSource(spec, 0, cfg.SecretName, true)
}
if cfg.UseConfigMapVolume && cfg.ConfigMapName != "" {
AddConfigMapVolume(spec, 0, cfg.ConfigMapName)
}
if cfg.UseSecretVolume && cfg.SecretName != "" {
AddSecretVolume(spec, 0, cfg.SecretName)
}
if cfg.UseProjectedVolume {
AddProjectedVolume(spec, 0, cfg.ConfigMapName, cfg.SecretName)
}
if cfg.UseConfigMapKeyRef && cfg.ConfigMapName != "" {
key := cfg.ConfigMapKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "CONFIG_VAR"
}
AddKeyRef(spec, 0, cfg.ConfigMapName, key, envVar, false)
}
if cfg.UseSecretKeyRef && cfg.SecretName != "" {
key := cfg.SecretKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "SECRET_VAR"
}
AddKeyRef(spec, 0, cfg.SecretName, key, envVar, true)
}
if cfg.UseCSIVolume && cfg.SPCName != "" {
AddCSIVolume(spec, 0, cfg.SPCName)
}
if cfg.UseInitContainer {
AddInitContainer(spec, cfg.ConfigMapName, cfg.SecretName)
}
if cfg.UseInitContainerVolume {
AddInitContainerWithVolumes(spec, cfg.ConfigMapName, cfg.SecretName)
}
if cfg.UseInitContainerCSI && cfg.SPCName != "" {
AddCSIVolume(spec, 0, cfg.SPCName)
}
if cfg.MultipleContainers > 1 {
for i := 1; i < cfg.MultipleContainers; i++ {
spec.Containers = append(spec.Containers, corev1.Container{
Name: fmt.Sprintf("container-%d", i),
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
})
}
}
}


@@ -175,7 +175,7 @@ type DeploymentOption func(*appsv1.Deployment)
// CreateDeployment creates a Deployment with the given options.
func CreateDeployment(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...DeploymentOption) (*appsv1.Deployment, error) {
deploy := baseDeploymentResource(namespace, name)
for _, opt := range opts {
opt(deploy)
}
@@ -349,14 +349,12 @@ func WithMultipleContainers(count int) DeploymentOption {
// WithMultipleContainersAndEnv creates two containers, each with a different ConfigMap envFrom.
func WithMultipleContainersAndEnv(cm1Name, cm2Name string) DeploymentOption {
return func(d *appsv1.Deployment) {
// First container gets the first ConfigMap
d.Spec.Template.Spec.Containers[0].EnvFrom = append(d.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: cm1Name},
},
})
// Add second container with second ConfigMap
d.Spec.Template.Spec.Containers = append(d.Spec.Template.Spec.Containers, corev1.Container{
Name: "container-1",
Image: DefaultImage,
@@ -379,396 +377,6 @@ func WithReplicas(replicas int32) DeploymentOption {
}
}
// baseDeployment creates a base Deployment template.
func baseDeployment(namespace, name string) *appsv1.Deployment {
labels := map[string]string{"app": name}
return &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: appsv1.DeploymentSpec{
Replicas: ptr.To(int32(1)),
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "app",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
},
},
},
},
},
}
}
// DeleteDeployment deletes a Deployment.
func DeleteDeployment(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
return client.AppsV1().Deployments(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// DaemonSetOption is a functional option for configuring a DaemonSet.
type DaemonSetOption func(*appsv1.DaemonSet)
// CreateDaemonSet creates a DaemonSet with the given options.
func CreateDaemonSet(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...DaemonSetOption) (*appsv1.DaemonSet, error) {
ds := baseDaemonSet(namespace, name)
for _, opt := range opts {
opt(ds)
}
return client.AppsV1().DaemonSets(namespace).Create(ctx, ds, metav1.CreateOptions{})
}
// WithDaemonSetAnnotations adds annotations to the DaemonSet metadata.
func WithDaemonSetAnnotations(annotations map[string]string) DaemonSetOption {
return func(ds *appsv1.DaemonSet) {
if ds.Annotations == nil {
ds.Annotations = make(map[string]string)
}
for k, v := range annotations {
ds.Annotations[k] = v
}
}
}
// WithDaemonSetConfigMapEnvFrom adds an envFrom reference to a ConfigMap.
func WithDaemonSetConfigMapEnvFrom(name string) DaemonSetOption {
return func(ds *appsv1.DaemonSet) {
ds.Spec.Template.Spec.Containers[0].EnvFrom = append(
ds.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// WithDaemonSetSecretEnvFrom adds an envFrom reference to a Secret.
func WithDaemonSetSecretEnvFrom(name string) DaemonSetOption {
return func(ds *appsv1.DaemonSet) {
ds.Spec.Template.Spec.Containers[0].EnvFrom = append(
ds.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// baseDaemonSet creates a base DaemonSet template.
func baseDaemonSet(namespace, name string) *appsv1.DaemonSet {
labels := map[string]string{"app": name}
return &appsv1.DaemonSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: appsv1.DaemonSetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "app",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
},
},
},
},
},
}
}
// DeleteDaemonSet deletes a DaemonSet.
func DeleteDaemonSet(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
return client.AppsV1().DaemonSets(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// StatefulSetOption is a functional option for configuring a StatefulSet.
type StatefulSetOption func(*appsv1.StatefulSet)
// CreateStatefulSet creates a StatefulSet with the given options.
func CreateStatefulSet(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...StatefulSetOption) (*appsv1.StatefulSet, error) {
ss := baseStatefulSet(namespace, name)
for _, opt := range opts {
opt(ss)
}
return client.AppsV1().StatefulSets(namespace).Create(ctx, ss, metav1.CreateOptions{})
}
// WithStatefulSetAnnotations adds annotations to the StatefulSet metadata.
func WithStatefulSetAnnotations(annotations map[string]string) StatefulSetOption {
return func(ss *appsv1.StatefulSet) {
if ss.Annotations == nil {
ss.Annotations = make(map[string]string)
}
for k, v := range annotations {
ss.Annotations[k] = v
}
}
}
// WithStatefulSetConfigMapEnvFrom adds an envFrom reference to a ConfigMap.
func WithStatefulSetConfigMapEnvFrom(name string) StatefulSetOption {
return func(ss *appsv1.StatefulSet) {
ss.Spec.Template.Spec.Containers[0].EnvFrom = append(
ss.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// WithStatefulSetSecretEnvFrom adds an envFrom reference to a Secret.
func WithStatefulSetSecretEnvFrom(name string) StatefulSetOption {
return func(ss *appsv1.StatefulSet) {
ss.Spec.Template.Spec.Containers[0].EnvFrom = append(
ss.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// baseStatefulSet creates a base StatefulSet template.
func baseStatefulSet(namespace, name string) *appsv1.StatefulSet {
labels := map[string]string{"app": name}
return &appsv1.StatefulSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: appsv1.StatefulSetSpec{
ServiceName: name,
Replicas: ptr.To(int32(1)),
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "app",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
},
},
},
},
},
}
}
// DeleteStatefulSet deletes a StatefulSet.
func DeleteStatefulSet(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
return client.AppsV1().StatefulSets(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// CronJobOption is a functional option for configuring a CronJob.
type CronJobOption func(*batchv1.CronJob)
// CreateCronJob creates a CronJob with the given options.
func CreateCronJob(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...CronJobOption) (*batchv1.CronJob, error) {
cj := baseCronJob(namespace, name)
for _, opt := range opts {
opt(cj)
}
return client.BatchV1().CronJobs(namespace).Create(ctx, cj, metav1.CreateOptions{})
}
// WithCronJobAnnotations adds annotations to the CronJob metadata.
func WithCronJobAnnotations(annotations map[string]string) CronJobOption {
return func(cj *batchv1.CronJob) {
if cj.Annotations == nil {
cj.Annotations = make(map[string]string)
}
for k, v := range annotations {
cj.Annotations[k] = v
}
}
}
// WithCronJobConfigMapEnvFrom adds an envFrom reference to a ConfigMap.
func WithCronJobConfigMapEnvFrom(name string) CronJobOption {
return func(cj *batchv1.CronJob) {
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].EnvFrom = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// WithCronJobSecretEnvFrom adds an envFrom reference to a Secret.
func WithCronJobSecretEnvFrom(name string) CronJobOption {
return func(cj *batchv1.CronJob) {
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].EnvFrom = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// baseCronJob creates a base CronJob template.
func baseCronJob(namespace, name string) *batchv1.CronJob {
labels := map[string]string{"app": name}
return &batchv1.CronJob{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: batchv1.CronJobSpec{
Schedule: "* * * * *", // Every minute
JobTemplate: batchv1.JobTemplateSpec{
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyOnFailure,
Containers: []corev1.Container{
{
Name: "job",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo done"},
},
},
},
},
},
},
},
}
}
// DeleteCronJob deletes a CronJob.
func DeleteCronJob(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
return client.BatchV1().CronJobs(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// JobOption is a functional option for configuring a Job.
type JobOption func(*batchv1.Job)
// CreateJob creates a Job with the given options.
func CreateJob(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...JobOption) (*batchv1.Job, error) {
job := baseJob(namespace, name)
for _, opt := range opts {
opt(job)
}
return client.BatchV1().Jobs(namespace).Create(ctx, job, metav1.CreateOptions{})
}
// WithJobAnnotations adds annotations to the Job metadata.
func WithJobAnnotations(annotations map[string]string) JobOption {
return func(j *batchv1.Job) {
if j.Annotations == nil {
j.Annotations = make(map[string]string)
}
for k, v := range annotations {
j.Annotations[k] = v
}
}
}
// WithJobConfigMapEnvFrom adds an envFrom reference to a ConfigMap.
func WithJobConfigMapEnvFrom(name string) JobOption {
return func(j *batchv1.Job) {
j.Spec.Template.Spec.Containers[0].EnvFrom = append(
j.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// WithJobSecretEnvFrom adds an envFrom reference to a Secret.
func WithJobSecretEnvFrom(name string) JobOption {
return func(j *batchv1.Job) {
j.Spec.Template.Spec.Containers[0].EnvFrom = append(
j.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// baseJob creates a base Job template.
func baseJob(namespace, name string) *batchv1.Job {
labels := map[string]string{"app": name}
return &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
Containers: []corev1.Container{
{
Name: "job",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo done"},
},
},
},
},
},
}
}
// DeleteJob deletes a Job.
func DeleteJob(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
propagation := metav1.DeletePropagationBackground
return client.BatchV1().Jobs(namespace).Delete(ctx, name, metav1.DeleteOptions{
PropagationPolicy: &propagation,
})
}
// WithConfigMapKeyRef adds a valueFrom.configMapKeyRef env var to the container.
func WithConfigMapKeyRef(cmName, key, envVarName string) DeploymentOption {
return func(d *appsv1.Deployment) {
@@ -907,150 +515,346 @@ func WithInitContainerProjectedVolume(cmName, secretName string) DeploymentOptio
}
}
// WithDaemonSetProjectedVolume adds a projected volume with ConfigMap and/or Secret sources to a DaemonSet.
func WithDaemonSetProjectedVolume(cmName, secretName string) DaemonSetOption {
	return func(ds *appsv1.DaemonSet) {
		volumeName := "projected-config"
		sources := []corev1.VolumeProjection{}
		if cmName != "" {
			sources = append(sources, corev1.VolumeProjection{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				},
			})
		}
		if secretName != "" {
			sources = append(sources, corev1.VolumeProjection{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			})
		}
		ds.Spec.Template.Spec.Volumes = append(ds.Spec.Template.Spec.Volumes, corev1.Volume{
			Name: volumeName,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: sources,
				},
			},
		})
		ds.Spec.Template.Spec.Containers[0].VolumeMounts = append(
			ds.Spec.Template.Spec.Containers[0].VolumeMounts,
			corev1.VolumeMount{
				Name:      volumeName,
				MountPath: "/etc/projected",
			},
		)
	}
}
// WithCSIVolume adds a CSI volume referencing a SecretProviderClass to a Deployment.
func WithCSIVolume(spcName string) DeploymentOption {
	return func(d *appsv1.Deployment) {
		volumeName := csiVolumeName(spcName)
		mountPath := csiMountPath(spcName)
		d.Spec.Template.Spec.Volumes = append(d.Spec.Template.Spec.Volumes, corev1.Volume{
			Name: volumeName,
			VolumeSource: corev1.VolumeSource{
				CSI: &corev1.CSIVolumeSource{
					Driver:   CSIDriverName,
					ReadOnly: ptr.To(true),
					VolumeAttributes: map[string]string{
						"secretProviderClass": spcName,
					},
				},
			},
		})
		d.Spec.Template.Spec.Containers[0].VolumeMounts = append(
			d.Spec.Template.Spec.Containers[0].VolumeMounts,
			corev1.VolumeMount{
				Name:      volumeName,
				MountPath: mountPath,
				ReadOnly:  true,
			},
		)
	}
}
// WithStatefulSetProjectedVolume adds a projected volume with ConfigMap and/or Secret sources to a StatefulSet.
func WithStatefulSetProjectedVolume(cmName, secretName string) StatefulSetOption {
	return func(ss *appsv1.StatefulSet) {
		volumeName := "projected-config"
		sources := []corev1.VolumeProjection{}
		if cmName != "" {
			sources = append(sources, corev1.VolumeProjection{
				ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				},
			})
		}
		if secretName != "" {
			sources = append(sources, corev1.VolumeProjection{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			})
		}
		ss.Spec.Template.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
			Name: volumeName,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: sources,
				},
			},
		})
		ss.Spec.Template.Spec.Containers[0].VolumeMounts = append(
			ss.Spec.Template.Spec.Containers[0].VolumeMounts,
			corev1.VolumeMount{
				Name:      volumeName,
				MountPath: "/etc/projected",
			},
		)
	}
}
// WithInitContainerCSIVolume adds an init container with a CSI volume mount.
func WithInitContainerCSIVolume(spcName string) DeploymentOption {
	return func(d *appsv1.Deployment) {
		volumeName := csiVolumeName(spcName)
		mountPath := csiMountPath(spcName)
		hasCSIVolume := false
		for _, v := range d.Spec.Template.Spec.Volumes {
			if v.Name == volumeName {
				hasCSIVolume = true
				break
			}
		}
		if !hasCSIVolume {
			d.Spec.Template.Spec.Volumes = append(d.Spec.Template.Spec.Volumes, corev1.Volume{
				Name: volumeName,
				VolumeSource: corev1.VolumeSource{
					CSI: &corev1.CSIVolumeSource{
						Driver:   CSIDriverName,
						ReadOnly: ptr.To(true),
						VolumeAttributes: map[string]string{
							"secretProviderClass": spcName,
						},
					},
				},
			})
		}
		initContainer := corev1.Container{
			Name:    fmt.Sprintf("init-csi-%s", spcName),
			Image:   DefaultImage,
			Command: []string{"sh", "-c", "echo init done"},
			VolumeMounts: []corev1.VolumeMount{
				{
					Name:      volumeName,
					MountPath: mountPath,
					ReadOnly:  true,
				},
			},
		}
		d.Spec.Template.Spec.InitContainers = append(d.Spec.Template.Spec.InitContainers, initContainer)
	}
}
// WithDaemonSetConfigMapKeyRef adds a valueFrom.configMapKeyRef env var to a DaemonSet.
func WithDaemonSetConfigMapKeyRef(cmName, key, envVarName string) DaemonSetOption {
	return func(ds *appsv1.DaemonSet) {
		ds.Spec.Template.Spec.Containers[0].Env = append(
			ds.Spec.Template.Spec.Containers[0].Env,
			corev1.EnvVar{
				Name: envVarName,
				ValueFrom: &corev1.EnvVarSource{
					ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						Key:                  key,
					},
				},
			},
		)
	}
}
// baseDeploymentResource creates a base Deployment template.
func baseDeploymentResource(namespace, name string) *appsv1.Deployment {
labels := map[string]string{"app": name}
return &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: appsv1.DeploymentSpec{
Replicas: ptr.To(int32(1)),
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "app",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
},
},
},
},
},
}
}
// DeleteDeployment deletes a Deployment.
func DeleteDeployment(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
return client.AppsV1().Deployments(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// DaemonSetOption is a functional option for configuring a DaemonSet.
type DaemonSetOption func(*appsv1.DaemonSet)
// CreateDaemonSet creates a DaemonSet with the given options.
func CreateDaemonSet(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...DaemonSetOption) (*appsv1.DaemonSet, error) {
ds := baseDaemonSetResource(namespace, name)
for _, opt := range opts {
opt(ds)
}
return client.AppsV1().DaemonSets(namespace).Create(ctx, ds, metav1.CreateOptions{})
}
// baseDaemonSetResource creates a base DaemonSet template.
func baseDaemonSetResource(namespace, name string) *appsv1.DaemonSet {
labels := map[string]string{"app": name}
return &appsv1.DaemonSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: appsv1.DaemonSetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "app",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
},
},
},
},
},
}
}
// DeleteDaemonSet deletes a DaemonSet.
func DeleteDaemonSet(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
return client.AppsV1().DaemonSets(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// StatefulSetOption is a functional option for configuring a StatefulSet.
type StatefulSetOption func(*appsv1.StatefulSet)
// CreateStatefulSet creates a StatefulSet with the given options.
func CreateStatefulSet(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...StatefulSetOption) (*appsv1.StatefulSet, error) {
ss := baseStatefulSetResource(namespace, name)
for _, opt := range opts {
opt(ss)
}
return client.AppsV1().StatefulSets(namespace).Create(ctx, ss, metav1.CreateOptions{})
}
// baseStatefulSetResource creates a base StatefulSet template.
func baseStatefulSetResource(namespace, name string) *appsv1.StatefulSet {
labels := map[string]string{"app": name}
return &appsv1.StatefulSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: appsv1.StatefulSetSpec{
ServiceName: name,
Replicas: ptr.To(int32(1)),
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "app",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
},
},
},
},
},
}
}
// DeleteStatefulSet deletes a StatefulSet.
func DeleteStatefulSet(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
return client.AppsV1().StatefulSets(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// CronJobOption is a functional option for configuring a CronJob.
type CronJobOption func(*batchv1.CronJob)
// CreateCronJob creates a CronJob with the given options.
func CreateCronJob(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...CronJobOption) (*batchv1.CronJob, error) {
cj := baseCronJobResource(namespace, name)
for _, opt := range opts {
opt(cj)
}
return client.BatchV1().CronJobs(namespace).Create(ctx, cj, metav1.CreateOptions{})
}
// WithCronJobAnnotations adds annotations to the CronJob metadata.
func WithCronJobAnnotations(annotations map[string]string) CronJobOption {
return func(cj *batchv1.CronJob) {
if cj.Annotations == nil {
cj.Annotations = make(map[string]string)
}
for k, v := range annotations {
cj.Annotations[k] = v
}
}
}
// WithCronJobConfigMapEnvFrom adds an envFrom reference to a ConfigMap.
func WithCronJobConfigMapEnvFrom(name string) CronJobOption {
return func(cj *batchv1.CronJob) {
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].EnvFrom = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// WithDaemonSetSecretKeyRef adds a valueFrom.secretKeyRef env var to a DaemonSet.
func WithDaemonSetSecretKeyRef(secretName, key, envVarName string) DaemonSetOption {
	return func(ds *appsv1.DaemonSet) {
		ds.Spec.Template.Spec.Containers[0].Env = append(
			ds.Spec.Template.Spec.Containers[0].Env,
			corev1.EnvVar{
				Name: envVarName,
				ValueFrom: &corev1.EnvVarSource{
					SecretKeyRef: &corev1.SecretKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Key:                  key,
					},
				},
			},
		)
	}
}
// WithCronJobSecretEnvFrom adds an envFrom reference to a Secret.
func WithCronJobSecretEnvFrom(name string) CronJobOption {
return func(cj *batchv1.CronJob) {
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].EnvFrom = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// WithStatefulSetConfigMapKeyRef adds a valueFrom.configMapKeyRef env var to a StatefulSet.
func WithStatefulSetConfigMapKeyRef(cmName, key, envVarName string) StatefulSetOption {
	return func(ss *appsv1.StatefulSet) {
		ss.Spec.Template.Spec.Containers[0].Env = append(
			ss.Spec.Template.Spec.Containers[0].Env,
			corev1.EnvVar{
				Name: envVarName,
				ValueFrom: &corev1.EnvVarSource{
					ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						Key:                  key,
					},
				},
			},
		)
	}
}
// baseCronJobResource creates a base CronJob template.
func baseCronJobResource(namespace, name string) *batchv1.CronJob {
labels := map[string]string{"app": name}
return &batchv1.CronJob{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: batchv1.CronJobSpec{
Schedule: "* * * * *", // Every minute
JobTemplate: batchv1.JobTemplateSpec{
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyOnFailure,
Containers: []corev1.Container{
{
Name: "job",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo done"},
},
},
},
},
},
},
},
}
}
// DeleteCronJob deletes a CronJob.
func DeleteCronJob(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
return client.BatchV1().CronJobs(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// JobOption is a functional option for configuring a Job.
type JobOption func(*batchv1.Job)
// CreateJob creates a Job with the given options.
func CreateJob(ctx context.Context, client kubernetes.Interface, namespace, name string, opts ...JobOption) (*batchv1.Job, error) {
job := baseJobResource(namespace, name)
for _, opt := range opts {
opt(job)
}
return client.BatchV1().Jobs(namespace).Create(ctx, job, metav1.CreateOptions{})
}
// WithJobAnnotations adds annotations to the Job metadata.
func WithJobAnnotations(annotations map[string]string) JobOption {
return func(j *batchv1.Job) {
if j.Annotations == nil {
j.Annotations = make(map[string]string)
}
for k, v := range annotations {
j.Annotations[k] = v
}
}
}
// WithJobConfigMapEnvFrom adds an envFrom reference to a ConfigMap.
func WithJobConfigMapEnvFrom(name string) JobOption {
return func(j *batchv1.Job) {
j.Spec.Template.Spec.Containers[0].EnvFrom = append(
j.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
}
}
// WithStatefulSetSecretKeyRef adds a valueFrom.secretKeyRef env var to a StatefulSet.
func WithStatefulSetSecretKeyRef(secretName, key, envVarName string) StatefulSetOption {
	return func(ss *appsv1.StatefulSet) {
		ss.Spec.Template.Spec.Containers[0].Env = append(
			ss.Spec.Template.Spec.Containers[0].Env,
			corev1.EnvVar{
				Name: envVarName,
				ValueFrom: &corev1.EnvVarSource{
					SecretKeyRef: &corev1.SecretKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Key:                  key,
					},
				},
			},
		)
	}
}
// WithJobSecretEnvFrom adds an envFrom reference to a Secret.
func WithJobSecretEnvFrom(name string) JobOption {
return func(j *batchv1.Job) {
j.Spec.Template.Spec.Containers[0].EnvFrom = append(
j.Spec.Template.Spec.Containers[0].EnvFrom,
corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
)
	}
}
// WithJobSecretKeyRef adds a valueFrom.secretKeyRef env var to a Job.
func WithJobSecretKeyRef(secretName, key, envVarName string) JobOption {
	return func(j *batchv1.Job) {
		j.Spec.Template.Spec.Containers[0].Env = append(
			j.Spec.Template.Spec.Containers[0].Env,
			corev1.EnvVar{
				Name: envVarName,
				ValueFrom: &corev1.EnvVarSource{
					SecretKeyRef: &corev1.SecretKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						Key:                  key,
					},
				},
			},
		)
	}
}
// baseJobResource creates a base Job template.
func baseJobResource(namespace, name string) *batchv1.Job {
labels := map[string]string{"app": name}
return &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
Containers: []corev1.Container{
{
Name: "job",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo done"},
},
},
},
},
},
}
}
// DeleteJob deletes a Job.
func DeleteJob(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
propagation := metav1.DeletePropagationBackground
return client.BatchV1().Jobs(namespace).Delete(ctx, name, metav1.DeleteOptions{
PropagationPolicy: &propagation,
})
}
func csiVolumeName(spcName string) string {
return fmt.Sprintf("csi-%s", spcName)
}
func csiMountPath(spcName string) string {
return fmt.Sprintf("/mnt/secrets-store/%s", spcName)
}

View File

@@ -4,12 +4,15 @@ import (
"context"
"fmt"
rolloutsclient "github.com/argoproj/argo-rollouts/pkg/client/clientset/versioned"
"github.com/onsi/ginkgo/v2"
openshiftclient "github.com/openshift/client-go/apps/clientset/versioned"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
csiclient "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned"
)
// TestEnvironment holds the common test environment state.
@@ -17,10 +20,13 @@ type TestEnvironment struct {
Ctx context.Context
Cancel context.CancelFunc
KubeClient kubernetes.Interface
DynamicClient dynamic.Interface
DiscoveryClient discovery.DiscoveryInterface
CSIClient csiclient.Interface
RolloutsClient rolloutsclient.Interface
OpenShiftClient openshiftclient.Interface
RestConfig *rest.Config
Namespace string
ReleaseName string // Unique Helm release name to prevent cluster-scoped resource conflicts
TestImage string
ProjectDir string
}
@@ -35,56 +41,69 @@ func SetupTestEnvironment(ctx context.Context, namespacePrefix string) (*TestEnv
var err error
// Get project directory
env.ProjectDir, err = GetProjectDir()
if err != nil {
return nil, fmt.Errorf("getting project directory: %w", err)
}
// Setup Kubernetes client
kubeconfig := GetKubeconfig()
ginkgo.GinkgoWriter.Printf("Using kubeconfig: %s\n", kubeconfig)
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
return nil, fmt.Errorf("building config from kubeconfig: %w", err)
}
env.RestConfig = config
env.KubeClient, err = kubernetes.NewForConfig(config)
if err != nil {
return nil, fmt.Errorf("creating kubernetes client: %w", err)
}
env.DynamicClient, err = dynamic.NewForConfig(config)
if err != nil {
return nil, fmt.Errorf("creating dynamic client: %w", err)
}
env.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(config)
if err != nil {
return nil, fmt.Errorf("creating discovery client: %w", err)
}
// Try to create CSI client (optional - may not be installed)
env.CSIClient, err = csiclient.NewForConfig(config)
if err != nil {
ginkgo.GinkgoWriter.Printf("Warning: Could not create CSI client: %v (CSI tests will be skipped)\n", err)
env.CSIClient = nil
}
// Try to create Argo Rollouts client (optional - may not be installed)
env.RolloutsClient, err = rolloutsclient.NewForConfig(config)
if err != nil {
ginkgo.GinkgoWriter.Printf("Warning: Could not create Rollouts client: %v (Argo tests will be skipped)\n", err)
env.RolloutsClient = nil
}
// Try to create OpenShift client (optional - may not be installed)
env.OpenShiftClient, err = openshiftclient.NewForConfig(config)
if err != nil {
ginkgo.GinkgoWriter.Printf("Warning: Could not create OpenShift client: %v (OpenShift tests will be skipped)\n",
err)
env.OpenShiftClient = nil
}
ginkgo.GinkgoWriter.Println("Verifying cluster connectivity...")
_, err = env.KubeClient.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 1})
if err != nil {
return nil, fmt.Errorf("connecting to kubernetes cluster: %w", err)
}
ginkgo.GinkgoWriter.Println("Cluster connectivity verified")
// Create test namespace with random suffix
env.Namespace = RandName(namespacePrefix)
// Use a unique release name to prevent cluster-scoped resource conflicts between test suites
env.ReleaseName = RandName("reloader")
ginkgo.GinkgoWriter.Printf("Creating test namespace: %s\n", env.Namespace)
ginkgo.GinkgoWriter.Printf("Using Helm release name: %s\n", env.ReleaseName)
if err := CreateNamespace(ctx, env.KubeClient, env.Namespace); err != nil {
return nil, fmt.Errorf("creating test namespace: %w", err)
}
ginkgo.GinkgoWriter.Printf("Using test image: %s\n", env.TestImage)
ginkgo.GinkgoWriter.Printf("Project directory: %s\n", env.ProjectDir)
return env, nil
}
@@ -95,20 +114,17 @@ func (e *TestEnvironment) Cleanup() error {
return nil
}
ginkgo.GinkgoWriter.Printf("Cleaning up test namespace: %s\n", e.Namespace)
ginkgo.GinkgoWriter.Printf("Cleaning up Helm release: %s\n", e.ReleaseName)
// Collect Reloader logs before cleanup (useful for debugging)
logs, err := GetPodLogs(e.Ctx, e.KubeClient, e.Namespace, ReloaderPodSelector(e.ReleaseName))
if err == nil && logs != "" {
ginkgo.GinkgoWriter.Println("Reloader logs:")
ginkgo.GinkgoWriter.Println(logs)
}
// Undeploy Reloader using the suite's release name
_ = UndeployReloader(e.Namespace, e.ReleaseName)
// Delete test namespace
if err := DeleteNamespace(e.Ctx, e.KubeClient, e.Namespace); err != nil {
return fmt.Errorf("deleting namespace: %w", err)
}
@@ -118,27 +134,32 @@ func (e *TestEnvironment) Cleanup() error {
// DeployReloaderWithStrategy deploys Reloader with the specified reload strategy.
func (e *TestEnvironment) DeployReloaderWithStrategy(strategy string) error {
	return e.DeployReloaderWithValues(
		map[string]string{
			"reloader.reloadStrategy": strategy,
		},
	)
}
// DeployReloaderWithValues deploys Reloader with the specified Helm values.
// Each test suite uses a unique release name to prevent cluster-scoped resource conflicts.
func (e *TestEnvironment) DeployReloaderWithValues(values map[string]string) error {
	ginkgo.GinkgoWriter.Printf("Deploying Reloader with values: %v\n", values)
	return DeployReloader(
		DeployOptions{
			Namespace:   e.Namespace,
			ReleaseName: e.ReleaseName,
			Image:       e.TestImage,
			Values:      values,
		},
	)
}
// WaitForReloader waits for the Reloader deployment to be ready.
func (e *TestEnvironment) WaitForReloader() error {
	ginkgo.GinkgoWriter.Println("Waiting for Reloader to be ready...")
	return WaitForDeploymentReady(e.Ctx, e.KubeClient, e.Namespace, ReloaderDeploymentName(e.ReleaseName),
		DeploymentReady)
}
// DeployAndWait deploys Reloader with the given values and waits for it to be ready.
@@ -149,6 +170,6 @@ func (e *TestEnvironment) DeployAndWait(values map[string]string) error {
if err := e.WaitForReloader(); err != nil {
return fmt.Errorf("waiting for Reloader: %w", err)
}
ginkgo.GinkgoWriter.Println("Reloader is ready")
return nil
}

View File

@@ -2,6 +2,7 @@ package utils
import (
"context"
"errors"
"fmt"
"strings"
"time"
@@ -16,7 +17,6 @@ import (
// Timeout and interval constants for polling operations.
const (
DefaultTimeout = 30 * time.Second // General operations
DefaultInterval = 1 * time.Second // Polling interval (faster feedback)
ShortTimeout = 5 * time.Second // Quick checks
NegativeTestWait = 3 * time.Second // Wait before checking negative conditions
@@ -26,179 +26,100 @@ const (
// WaitForDeploymentReady waits for a deployment to have all replicas available.
func WaitForDeploymentReady(ctx context.Context, client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(
		ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
			deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // Keep polling
			}
			// Check if deployment is ready
			if deploy.Status.ReadyReplicas == *deploy.Spec.Replicas &&
				deploy.Status.UpdatedReplicas == *deploy.Spec.Replicas &&
				deploy.Status.AvailableReplicas == *deploy.Spec.Replicas {
				return true, nil
			}
			return false, nil
		},
	)
}
// WaitForDeploymentReloaded waits for a deployment's pod template to have the reloader annotation.
// Returns true if the annotation was found, false if timeout occurred.
func WaitForDeploymentReloaded(ctx context.Context, client kubernetes.Interface, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
	return WaitForAnnotation(ctx, func(ctx context.Context) (map[string]string, error) {
		deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		return deploy.Spec.Template.Annotations, nil
	}, annotationKey, timeout)
}
// WaitForDaemonSetReloaded waits for a DaemonSet's pod template to have the reloader annotation.
func WaitForDaemonSetReloaded(ctx context.Context, client kubernetes.Interface, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
	return WaitForAnnotation(ctx, func(ctx context.Context) (map[string]string, error) {
		ds, err := client.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		return ds.Spec.Template.Annotations, nil
	}, annotationKey, timeout)
}
// WaitForStatefulSetReloaded waits for a StatefulSet's pod template to have the reloader annotation.
func WaitForStatefulSetReloaded(ctx context.Context, client kubernetes.Interface, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
	return WaitForAnnotation(ctx, func(ctx context.Context) (map[string]string, error) {
		ss, err := client.AppsV1().StatefulSets(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		return ss.Spec.Template.Annotations, nil
	}, annotationKey, timeout)
}
// WaitForCronJobReloaded waits for a CronJob's pod template to have the reloader annotation.
func WaitForCronJobReloaded(ctx context.Context, client kubernetes.Interface, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
	return WaitForAnnotation(ctx, func(ctx context.Context) (map[string]string, error) {
		cj, err := client.BatchV1().CronJobs(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		return cj.Spec.JobTemplate.Spec.Template.Annotations, nil
	}, annotationKey, timeout)
}
// WaitForJobCreated waits for a Job to be created with the given label selector.
func WaitForJobCreated(ctx context.Context, client kubernetes.Interface, namespace, labelSelector string, timeout time.Duration) (bool, error) {
	var found bool
	err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
		jobs, err := client.BatchV1().Jobs(namespace).List(ctx, metav1.ListOptions{
			LabelSelector: labelSelector,
		})
		if err != nil {
			return false, nil
		}
		if len(jobs.Items) > 0 {
			found = true
			return true, nil
		}
		return false, nil
	})
	if err != nil && !errors.Is(err, context.DeadlineExceeded) {
		return false, err
	}
	return found, nil
}
// WaitForCronJobTriggeredJob waits for a Job to be created by the specified CronJob.
// It checks owner references to find Jobs created by Reloader's manual trigger.
func WaitForCronJobTriggeredJob(ctx context.Context, client kubernetes.Interface, namespace, cronJobName string, timeout time.Duration) (
	bool, error,
) {
	var found bool
	err := wait.PollUntilContextTimeout(
		ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
			jobs, err := client.BatchV1().Jobs(namespace).List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil
			}
			for _, job := range jobs.Items {
				// Check if this job is owned by the CronJob
				for _, ownerRef := range job.OwnerReferences {
					if ownerRef.Kind == "CronJob" && ownerRef.Name == cronJobName {
						// Check for the manual instantiate annotation (added by Reloader)
						if job.Annotations != nil {
							if _, ok := job.Annotations["cronjob.kubernetes.io/instantiate"]; ok {
								found = true
								return true, nil
							}
						}
					}
				}
			}
			return false, nil
		},
	)
	if err != nil && !errors.Is(err, context.DeadlineExceeded) {
		return false, err
	}
	return found, nil
}
@@ -207,160 +128,96 @@ func WaitForCronJobTriggeredJob(ctx context.Context, client kubernetes.Interface
// WaitForDeploymentEnvVar waits for a deployment's containers to have an environment variable
// with the given prefix (e.g., "STAKATER_").
func WaitForDeploymentEnvVar(ctx context.Context, client kubernetes.Interface, namespace, name, prefix string, timeout time.Duration) (bool, error) {
	return WaitForEnvVarPrefix(ctx, func(ctx context.Context) ([]corev1.Container, error) {
		deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		return deploy.Spec.Template.Spec.Containers, nil
	}, prefix, timeout)
}
// WaitForDaemonSetEnvVar waits for a DaemonSet's containers to have an environment variable
// with the given prefix.
func WaitForDaemonSetEnvVar(ctx context.Context, client kubernetes.Interface, namespace, name, prefix string, timeout time.Duration) (bool, error) {
	return WaitForEnvVarPrefix(ctx, func(ctx context.Context) ([]corev1.Container, error) {
		ds, err := client.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		return ds.Spec.Template.Spec.Containers, nil
	}, prefix, timeout)
}
// WaitForStatefulSetEnvVar waits for a StatefulSet's containers to have an environment variable
// with the given prefix.
func WaitForStatefulSetEnvVar(ctx context.Context, client kubernetes.Interface, namespace, name, prefix string, timeout time.Duration) (bool, error) {
	return WaitForEnvVarPrefix(ctx, func(ctx context.Context) ([]corev1.Container, error) {
		ss, err := client.AppsV1().StatefulSets(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		return ss.Spec.Template.Spec.Containers, nil
	}, prefix, timeout)
}
// WaitForDeploymentPaused waits for a deployment to have the paused-at annotation.
func WaitForDeploymentPaused(ctx context.Context, client kubernetes.Interface, namespace, name, pausedAtAnnotation string, timeout time.Duration) (bool, error) {
	return WaitForAnnotation(ctx, func(ctx context.Context) (map[string]string, error) {
		deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		// Check deployment-level annotations (not the pod template).
		return deploy.Annotations, nil
	}, pausedAtAnnotation, timeout)
}
// WaitForDeploymentUnpaused waits for a deployment to NOT have the paused-at annotation.
func WaitForDeploymentUnpaused(ctx context.Context, client kubernetes.Interface, namespace, name, pausedAtAnnotation string, timeout time.Duration) (bool, error) {
	return WaitForNoAnnotation(ctx, func(ctx context.Context) (map[string]string, error) {
		deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		return deploy.Annotations, nil
	}, pausedAtAnnotation, timeout)
}
// WaitForDaemonSetReady waits for a DaemonSet to have all pods ready.
func WaitForDaemonSetReady(ctx context.Context, client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(
		ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
			ds, err := client.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			if ds.Status.DesiredNumberScheduled > 0 &&
				ds.Status.NumberReady == ds.Status.DesiredNumberScheduled {
				return true, nil
			}
			return false, nil
		},
	)
}
// WaitForStatefulSetReady waits for a StatefulSet to have all replicas ready.
func WaitForStatefulSetReady(ctx context.Context, client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(
		ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
			ss, err := client.AppsV1().StatefulSets(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			if ss.Status.ReadyReplicas == *ss.Spec.Replicas {
				return true, nil
			}
			return false, nil
		},
	)
}
// GetDeployment retrieves a deployment by name.
@@ -368,31 +225,18 @@ func GetDeployment(ctx context.Context, client kubernetes.Interface, namespace,
return client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
}
// GetDaemonSet retrieves a DaemonSet by name.
func GetDaemonSet(ctx context.Context, client kubernetes.Interface, namespace, name string) (*appsv1.DaemonSet, error) {
return client.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
}
// GetStatefulSet retrieves a StatefulSet by name.
func GetStatefulSet(ctx context.Context, client kubernetes.Interface, namespace, name string) (*appsv1.StatefulSet, error) {
return client.AppsV1().StatefulSets(namespace).Get(ctx, name, metav1.GetOptions{})
}
// GetCronJob retrieves a CronJob by name.
func GetCronJob(ctx context.Context, client kubernetes.Interface, namespace, name string) (*batchv1.CronJob, error) {
return client.BatchV1().CronJobs(namespace).Get(ctx, name, metav1.GetOptions{})
}
// WaitForCronJobExists waits for a CronJob to exist in the cluster.
// This is useful for giving Reloader time to detect and index the CronJob before making changes.
func WaitForCronJobExists(ctx context.Context, client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(
		ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
			_, err := client.BatchV1().CronJobs(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // Keep polling
			}
			return true, nil
		},
	)
}
// GetJob retrieves a Job by name.
@@ -400,82 +244,57 @@ func GetJob(ctx context.Context, client kubernetes.Interface, namespace, name st
return client.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
}
// hasEnvVarWithPrefix checks if any container has an environment variable with the given prefix.
func hasEnvVarWithPrefix(containers []corev1.Container, prefix string) bool {
for _, container := range containers {
for _, env := range container.Env {
if strings.HasPrefix(env.Name, prefix) {
return true
}
}
}
return false
}
// WaitForJobRecreated waits for a Job to be deleted and recreated with a new UID.
// Returns the new Job's UID if recreation was detected.
func WaitForJobRecreated(ctx context.Context, client kubernetes.Interface, namespace, name, originalUID string, timeout time.Duration) (
	string, bool, error,
) {
	var newUID string
	var recreated bool
	err := wait.PollUntilContextTimeout(
		ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
			job, err := client.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Job not found means it's been deleted; keep polling for recreation.
				return false, nil
			}
			// A changed UID indicates the Job was recreated.
			if string(job.UID) != originalUID {
				newUID = string(job.UID)
				recreated = true
				return true, nil
			}
			return false, nil
		},
	)
	if err != nil && !errors.Is(err, context.DeadlineExceeded) {
		return "", false, err
	}
	return newUID, recreated, nil
}
// WaitForJobNotFound waits for a Job to be deleted.
func WaitForJobNotFound(ctx context.Context, client kubernetes.Interface, namespace, name string, timeout time.Duration) (bool, error) {
var deleted bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
_, err := client.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
deleted = true
return true, nil
}
return false, nil
})
	if err != nil && !errors.Is(err, context.DeadlineExceeded) {
return false, err
}
return deleted, nil
}
// WaitForJobExists waits for a Job to exist in the cluster.
func WaitForJobExists(ctx context.Context, client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(
		ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
			_, err := client.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // Keep polling
			}
			return true, nil
		},
	)
}
// GetPodLogs retrieves logs from pods matching the given label selector.
func GetPodLogs(ctx context.Context, client kubernetes.Interface, namespace, labelSelector string) (string, error) {
	pods, err := client.CoreV1().Pods(namespace).List(
		ctx, metav1.ListOptions{
			LabelSelector: labelSelector,
		},
	)
	if err != nil {
		return "", fmt.Errorf("failed to list pods: %w", err)
	}
@@ -483,9 +302,11 @@ func GetPodLogs(ctx context.Context, client kubernetes.Interface, namespace, lab
var allLogs strings.Builder
for _, pod := range pods.Items {
for _, container := range pod.Spec.Containers {
			logs, err := client.CoreV1().Pods(namespace).GetLogs(
				pod.Name, &corev1.PodLogOptions{
					Container: container.Name,
				},
			).Do(ctx).Raw()
if err != nil {
allLogs.WriteString(fmt.Sprintf("Error getting logs for %s/%s: %v\n", pod.Name, container.Name, err))
continue

View File

@@ -0,0 +1,87 @@
package utils

import (
	"context"
	"errors"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)
// AnnotationGetter retrieves annotations from a workload's pod template.
type AnnotationGetter func(ctx context.Context) (map[string]string, error)
// ContainerGetter retrieves containers from a workload's pod template.
type ContainerGetter func(ctx context.Context) ([]corev1.Container, error)
// WaitForAnnotation polls until an annotation key exists.
func WaitForAnnotation(ctx context.Context, getter AnnotationGetter, key string, timeout time.Duration) (bool, error) {
var found bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
annotations, err := getter(ctx)
if err != nil {
return false, nil // Keep polling on errors
}
if annotations != nil {
if _, ok := annotations[key]; ok {
found = true
return true, nil
}
}
return false, nil
})
if err != nil && !errors.Is(err, context.DeadlineExceeded) {
return false, err
}
return found, nil
}
// WaitForNoAnnotation polls until an annotation key is absent.
func WaitForNoAnnotation(ctx context.Context, getter AnnotationGetter, key string, timeout time.Duration) (bool, error) {
var absent bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
annotations, err := getter(ctx)
if err != nil {
return false, nil
}
if annotations == nil {
absent = true
return true, nil
}
if _, ok := annotations[key]; !ok {
absent = true
return true, nil
}
return false, nil
})
if err != nil && !errors.Is(err, context.DeadlineExceeded) {
return false, err
}
return absent, nil
}
// WaitForEnvVarPrefix polls until a container has an env var with given prefix.
func WaitForEnvVarPrefix(ctx context.Context, getter ContainerGetter, prefix string, timeout time.Duration) (bool, error) {
var found bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
containers, err := getter(ctx)
if err != nil {
return false, nil
}
for _, container := range containers {
for _, env := range container.Env {
if strings.HasPrefix(env.Name, prefix) {
found = true
return true, nil
}
}
}
return false, nil
})
if err != nil && !errors.Is(err, context.DeadlineExceeded) {
return false, err
}
return found, nil
}

View File

@@ -4,7 +4,6 @@ import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
)
@@ -31,14 +30,10 @@ const (
// WorkloadConfig holds configuration for workload creation.
type WorkloadConfig struct {
	ConfigMapName string
	SecretName    string
	SPCName       string
	Annotations   map[string]string

	UseConfigMapEnvFrom bool
	UseSecretEnvFrom    bool
	UseConfigMapVolume  bool
@@ -48,14 +43,12 @@ type WorkloadConfig struct {
UseSecretKeyRef bool
UseInitContainer bool
UseInitContainerVolume bool
	UseCSIVolume        bool
	UseInitContainerCSI bool

	ConfigMapKey string
	SecretKey    string
	EnvVarName   string

	MultipleContainers int
}
// WorkloadAdapter provides a unified interface for all workload types.
@@ -92,34 +85,27 @@ type WorkloadAdapter interface {
// AdapterRegistry holds adapters for all workload types.
type AdapterRegistry struct {
	kubeClient kubernetes.Interface
	adapters   map[WorkloadType]WorkloadAdapter
}
// NewAdapterRegistry creates a new adapter registry with all standard adapters.
func NewAdapterRegistry(kubeClient kubernetes.Interface) *AdapterRegistry {
	r := &AdapterRegistry{
		kubeClient: kubeClient,
		adapters:   make(map[WorkloadType]WorkloadAdapter),
	}
	// Register standard adapters.
	r.adapters[WorkloadDeployment] = NewDeploymentAdapter(kubeClient)
	r.adapters[WorkloadDaemonSet] = NewDaemonSetAdapter(kubeClient)
	r.adapters[WorkloadStatefulSet] = NewStatefulSetAdapter(kubeClient)
	r.adapters[WorkloadCronJob] = NewCronJobAdapter(kubeClient)
	r.adapters[WorkloadJob] = NewJobAdapter(kubeClient)
	// Argo and OpenShift adapters are registered separately via RegisterAdapter,
	// as they require specific cluster support.
	return r
}
// RegisterAdapter registers a custom adapter for a workload type.
// Use this to add Argo Rollout or DeploymentConfig adapters.
func (r *AdapterRegistry) RegisterAdapter(adapter WorkloadAdapter) {
r.adapters[adapter.Type()] = adapter
}

View File

@@ -2,24 +2,27 @@ package utils
import (
	"context"
	"errors"
	"time"

	rolloutv1alpha1 "github.com/argoproj/argo-rollouts/pkg/apis/rollouts/v1alpha1"
	rolloutsclient "github.com/argoproj/argo-rollouts/pkg/client/clientset/versioned"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/utils/ptr"
)
// ArgoRolloutAdapter implements WorkloadAdapter for Argo Rollouts.
type ArgoRolloutAdapter struct {
	rolloutsClient rolloutsclient.Interface
}
// NewArgoRolloutAdapter creates a new ArgoRolloutAdapter.
func NewArgoRolloutAdapter(rolloutsClient rolloutsclient.Interface) *ArgoRolloutAdapter {
	return &ArgoRolloutAdapter{
		rolloutsClient: rolloutsClient,
	}
}
// Type returns the workload type.
@@ -29,28 +32,33 @@ func (a *ArgoRolloutAdapter) Type() WorkloadType {
// Create creates an Argo Rollout with the given config.
func (a *ArgoRolloutAdapter) Create(ctx context.Context, namespace, name string, cfg WorkloadConfig) error {
	rollout := baseRollout(name)
	opts := buildRolloutOptions(cfg)
	for _, opt := range opts {
		opt(rollout)
	}
	_, err := a.rolloutsClient.ArgoprojV1alpha1().Rollouts(namespace).Create(ctx, rollout, metav1.CreateOptions{})
	return err
}
// Delete removes the Argo Rollout.
func (a *ArgoRolloutAdapter) Delete(ctx context.Context, namespace, name string) error {
	return a.rolloutsClient.ArgoprojV1alpha1().Rollouts(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// WaitReady waits for the Argo Rollout to be ready.
func (a *ArgoRolloutAdapter) WaitReady(ctx context.Context, namespace, name string, timeout time.Duration) error {
	return WaitForRolloutReady(ctx, a.rolloutsClient, namespace, name, timeout)
}
// WaitReloaded waits for the Argo Rollout to have the reload annotation.
func (a *ArgoRolloutAdapter) WaitReloaded(ctx context.Context, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
	return WaitForRolloutReloaded(ctx, a.rolloutsClient, namespace, name, annotationKey, timeout)
}
// WaitEnvVar waits for the Argo Rollout to have a STAKATER_ env var.
func (a *ArgoRolloutAdapter) WaitEnvVar(ctx context.Context, namespace, name, prefix string, timeout time.Duration) (bool, error) {
	return WaitForRolloutEnvVar(ctx, a.rolloutsClient, namespace, name, prefix, timeout)
}
// SupportsEnvVarStrategy returns true as Argo Rollouts support env var reload strategy.
@@ -63,277 +71,118 @@ func (a *ArgoRolloutAdapter) RequiresSpecialHandling() bool {
return false
}
// baseRollout returns a minimal Rollout template.
func baseRollout(name string) *rolloutv1alpha1.Rollout {
return &rolloutv1alpha1.Rollout{
ObjectMeta: metav1.ObjectMeta{Name: name},
Spec: rolloutv1alpha1.RolloutSpec{
Replicas: ptr.To[int32](1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"app": name},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"app": name},
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
Name: "main",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
}},
},
},
Strategy: rolloutv1alpha1.RolloutStrategy{
Canary: &rolloutv1alpha1.CanaryStrategy{
Steps: []rolloutv1alpha1.CanaryStep{
{SetWeight: ptr.To[int32](100)},
},
},
},
},
}
}
// buildRolloutOptions converts WorkloadConfig to a RolloutOption slice.
func buildRolloutOptions(cfg WorkloadConfig) []RolloutOption {
	return []RolloutOption{
		func(r *rolloutv1alpha1.Rollout) {
			// Set annotations on the Rollout itself (where Reloader checks them).
			if len(cfg.Annotations) > 0 {
				if r.Annotations == nil {
					r.Annotations = make(map[string]string)
				}
				for k, v := range cfg.Annotations {
					r.Annotations[k] = v
				}
			}
			ApplyWorkloadConfig(&r.Spec.Template.Spec, cfg)
		},
	}
}
// WaitForRolloutReady waits for an Argo Rollout to be ready using the typed client.
func WaitForRolloutReady(ctx context.Context, client rolloutsclient.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
		rollout, err := client.ArgoprojV1alpha1().Rollouts(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		// Ready when status.phase is "Healthy" or all desired replicas are available.
		if rollout.Status.Phase == rolloutv1alpha1.RolloutPhaseHealthy {
			return true, nil
		}
		if rollout.Spec.Replicas != nil && *rollout.Spec.Replicas > 0 &&
			rollout.Status.AvailableReplicas == *rollout.Spec.Replicas {
			return true, nil
		}
		return false, nil
	})
}
// WaitForRolloutReloaded waits for an Argo Rollout's pod template to have the reloader annotation.
func WaitForRolloutReloaded(ctx context.Context, client rolloutsclient.Interface, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
return WaitForAnnotation(ctx, func(ctx context.Context) (map[string]string, error) {
rollout, err := client.ArgoprojV1alpha1().Rollouts(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return nil, err
}
return rollout.Spec.Template.Annotations, nil
}, annotationKey, timeout)
}
// WaitForRolloutEnvVar waits for an Argo Rollout's container to have an env var with the given prefix.
func WaitForRolloutEnvVar(ctx context.Context, client rolloutsclient.Interface, namespace, name, prefix string, timeout time.Duration) (bool, error) {
return WaitForEnvVarPrefix(ctx, func(ctx context.Context) ([]corev1.Container, error) {
rollout, err := client.ArgoprojV1alpha1().Rollouts(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return nil, err
}
return rollout.Spec.Template.Spec.Containers, nil
}, prefix, timeout)
}
// WaitForRolloutRestartAt waits for an Argo Rollout's spec.restartAt field to be set.
func WaitForRolloutRestartAt(ctx context.Context, client rolloutsclient.Interface, namespace, name string, timeout time.Duration) (bool, error) {
var found bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
rollout, err := client.ArgoprojV1alpha1().Rollouts(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, nil
}
if rollout.Spec.RestartAt != nil && !rollout.Spec.RestartAt.IsZero() {
found = true
return true, nil
}
return false, nil
})
if err != nil && !errors.Is(err, context.DeadlineExceeded) {
return false, err
}
return found, nil
}


@@ -5,11 +5,7 @@ import (
"time"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// CronJobAdapter implements WorkloadAdapter for Kubernetes CronJobs.
@@ -45,9 +41,6 @@ func (a *CronJobAdapter) WaitReady(ctx context.Context, namespace, name string,
}
// WaitReloaded waits for the CronJob to have the reload annotation OR for a triggered Job.
// For CronJobs, Reloader can either:
// 1. Add an annotation to the pod template
// 2. Trigger a new Job (which is the special handling case)
func (a *CronJobAdapter) WaitReloaded(ctx context.Context, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
return WaitForCronJobReloaded(ctx, a.client, namespace, name, annotationKey, timeout)
}
@@ -75,149 +68,18 @@ func (a *CronJobAdapter) WaitForTriggeredJob(ctx context.Context, namespace, cro
// buildCronJobOptions converts WorkloadConfig to CronJobOption slice.
func buildCronJobOptions(cfg WorkloadConfig) []CronJobOption {
var opts []CronJobOption
// Add annotations
if len(cfg.Annotations) > 0 {
opts = append(opts, WithCronJobAnnotations(cfg.Annotations))
}
// Add envFrom references
if cfg.UseConfigMapEnvFrom && cfg.ConfigMapName != "" {
opts = append(opts, WithCronJobConfigMapEnvFrom(cfg.ConfigMapName))
}
if cfg.UseSecretEnvFrom && cfg.SecretName != "" {
opts = append(opts, WithCronJobSecretEnvFrom(cfg.SecretName))
}
// Add volume mounts
if cfg.UseConfigMapVolume && cfg.ConfigMapName != "" {
opts = append(opts, WithCronJobConfigMapVolume(cfg.ConfigMapName))
}
if cfg.UseSecretVolume && cfg.SecretName != "" {
opts = append(opts, WithCronJobSecretVolume(cfg.SecretName))
}
// Add projected volume
if cfg.UseProjectedVolume {
opts = append(opts, WithCronJobProjectedVolume(cfg.ConfigMapName, cfg.SecretName))
}
return opts
}
// WithCronJobConfigMapVolume adds a volume mount for a ConfigMap to a CronJob.
func WithCronJobConfigMapVolume(name string) CronJobOption {
return func(cj *batchv1.CronJob) {
volumeName := "cm-" + name
cj.Spec.JobTemplate.Spec.Template.Spec.Volumes = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Volumes,
corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
},
)
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].VolumeMounts = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: "/etc/config/" + name,
},
)
return []CronJobOption{
func(cj *batchv1.CronJob) {
// Set annotations on CronJob level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if cj.Annotations == nil {
cj.Annotations = make(map[string]string)
}
for k, v := range cfg.Annotations {
cj.Annotations[k] = v
}
}
ApplyWorkloadConfig(&cj.Spec.JobTemplate.Spec.Template.Spec, cfg)
},
}
}
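The refactor keeps the functional-options shape (a slice of functions, each mutating the object) but collapses the many per-resource `With*` options into a single closure per workload. A simplified, stdlib-only sketch of the pattern — `cronJob` and `option` here are stand-ins for `batchv1.CronJob` and `CronJobOption`, not the real types:

```go
package main

import "fmt"

// cronJob stands in for batchv1.CronJob; option mirrors CronJobOption.
type cronJob struct {
	annotations map[string]string
}
type option func(*cronJob)

// withAnnotations copies annotations onto the object, initializing the map
// lazily, just as the refactored buildCronJobOptions closure does.
func withAnnotations(a map[string]string) option {
	return func(cj *cronJob) {
		if cj.annotations == nil {
			cj.annotations = make(map[string]string)
		}
		for k, v := range a {
			cj.annotations[k] = v
		}
	}
}

// create applies each option in order to a fresh object.
func create(opts ...option) *cronJob {
	cj := &cronJob{}
	for _, opt := range opts {
		opt(cj)
	}
	return cj
}

func main() {
	cj := create(withAnnotations(map[string]string{"reloader.stakater.com/auto": "true"}))
	fmt.Println(cj.annotations["reloader.stakater.com/auto"]) // prints "true"
}
```

Because options are just closures, one generic `ApplyWorkloadConfig`-style closure can replace a dozen specialized ones without changing any call sites.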
// WithCronJobSecretVolume adds a volume mount for a Secret to a CronJob.
func WithCronJobSecretVolume(name string) CronJobOption {
return func(cj *batchv1.CronJob) {
volumeName := "secret-" + name
cj.Spec.JobTemplate.Spec.Template.Spec.Volumes = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Volumes,
corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: name,
},
},
},
)
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].VolumeMounts = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: "/etc/secrets/" + name,
},
)
}
}
// WithCronJobProjectedVolume adds a projected volume with ConfigMap and/or Secret sources to a CronJob.
func WithCronJobProjectedVolume(cmName, secretName string) CronJobOption {
return func(cj *batchv1.CronJob) {
volumeName := "projected-config"
sources := []corev1.VolumeProjection{}
if cmName != "" {
sources = append(sources, corev1.VolumeProjection{
ConfigMap: &corev1.ConfigMapProjection{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
})
}
if secretName != "" {
sources = append(sources, corev1.VolumeProjection{
Secret: &corev1.SecretProjection{
LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
},
})
}
cj.Spec.JobTemplate.Spec.Template.Spec.Volumes = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Volumes,
corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Projected: &corev1.ProjectedVolumeSource{
Sources: sources,
},
},
},
)
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].VolumeMounts = append(
cj.Spec.JobTemplate.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: "/etc/projected",
},
)
}
}
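For reference, the option above yields a pod spec fragment along these lines — the `app-config` and `app-secret` names are placeholders, not values from the tests:

```yaml
volumes:
  - name: projected-config
    projected:
      sources:
        - configMap:
            name: app-config
        - secret:
            name: app-secret
containers:
  - volumeMounts:
      - name: projected-config
        mountPath: /etc/projected
```

Both sources are merged into a single mount, which is what lets one volume exercise ConfigMap and Secret reload detection at once.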
// WaitForCronJobEnvVar waits for a CronJob's containers to have an environment variable
// with the given prefix. Note: CronJobs don't typically use this strategy.
func WaitForCronJobEnvVar(ctx context.Context, client kubernetes.Interface, namespace, name, prefix string, timeout time.Duration) (bool, error) {
var found bool
err := wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
cj, err := client.BatchV1().CronJobs(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, nil
}
if hasEnvVarWithPrefix(cj.Spec.JobTemplate.Spec.Template.Spec.Containers, prefix) {
found = true
return true, nil
}
return false, nil
})
if err != nil && err != context.DeadlineExceeded {
return false, err
}
return found, nil
}
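The `hasEnvVarWithPrefix` helper used above is not shown in this diff. A stdlib-only sketch of the behavior its call site implies, with simplified stand-in types for `corev1.Container` and `corev1.EnvVar`:

```go
package main

import (
	"fmt"
	"strings"
)

// envVar and container are simplified stand-ins for the corev1 types.
type envVar struct{ name string }
type container struct{ env []envVar }

// hasEnvVarWithPrefix reports whether any container carries an env var whose
// name starts with prefix (Reloader's env-vars strategy uses STAKATER_).
func hasEnvVarWithPrefix(containers []container, prefix string) bool {
	for _, c := range containers {
		for _, e := range c.env {
			if strings.HasPrefix(e.name, prefix) {
				return true
			}
		}
	}
	return false
}

func main() {
	cs := []container{{env: []envVar{{name: "STAKATER_MY_CM_CONFIGMAP"}}}}
	fmt.Println(hasEnvVarWithPrefix(cs, "STAKATER_")) // true
	fmt.Println(hasEnvVarWithPrefix(cs, "OTHER_"))    // false
}
```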


@@ -2,11 +2,9 @@ package utils
import (
"context"
"fmt"
"time"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/client-go/kubernetes"
)
@@ -64,183 +62,18 @@ func (a *DaemonSetAdapter) RequiresSpecialHandling() bool {
// buildDaemonSetOptions converts WorkloadConfig to DaemonSetOption slice.
func buildDaemonSetOptions(cfg WorkloadConfig) []DaemonSetOption {
var opts []DaemonSetOption
// Add annotations
if len(cfg.Annotations) > 0 {
opts = append(opts, WithDaemonSetAnnotations(cfg.Annotations))
}
// Add envFrom references
if cfg.UseConfigMapEnvFrom && cfg.ConfigMapName != "" {
opts = append(opts, WithDaemonSetConfigMapEnvFrom(cfg.ConfigMapName))
}
if cfg.UseSecretEnvFrom && cfg.SecretName != "" {
opts = append(opts, WithDaemonSetSecretEnvFrom(cfg.SecretName))
}
// Add volume mounts
if cfg.UseConfigMapVolume && cfg.ConfigMapName != "" {
opts = append(opts, WithDaemonSetConfigMapVolume(cfg.ConfigMapName))
}
if cfg.UseSecretVolume && cfg.SecretName != "" {
opts = append(opts, WithDaemonSetSecretVolume(cfg.SecretName))
}
// Add projected volume
if cfg.UseProjectedVolume {
opts = append(opts, WithDaemonSetProjectedVolume(cfg.ConfigMapName, cfg.SecretName))
}
// Add valueFrom references
if cfg.UseConfigMapKeyRef && cfg.ConfigMapName != "" {
key := cfg.ConfigMapKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "CONFIG_VAR"
}
opts = append(opts, WithDaemonSetConfigMapKeyRef(cfg.ConfigMapName, key, envVar))
}
if cfg.UseSecretKeyRef && cfg.SecretName != "" {
key := cfg.SecretKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "SECRET_VAR"
}
opts = append(opts, WithDaemonSetSecretKeyRef(cfg.SecretName, key, envVar))
}
// Add init container with envFrom
if cfg.UseInitContainer {
opts = append(opts, WithDaemonSetInitContainer(cfg.ConfigMapName, cfg.SecretName))
}
// Add init container with volume mount
if cfg.UseInitContainerVolume {
opts = append(opts, WithDaemonSetInitContainerVolume(cfg.ConfigMapName, cfg.SecretName))
}
return opts
}
// WithDaemonSetConfigMapVolume adds a volume mount for a ConfigMap to a DaemonSet.
func WithDaemonSetConfigMapVolume(name string) DaemonSetOption {
return func(ds *appsv1.DaemonSet) {
volumeName := fmt.Sprintf("cm-%s", name)
ds.Spec.Template.Spec.Volumes = append(ds.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
})
ds.Spec.Template.Spec.Containers[0].VolumeMounts = append(
ds.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: fmt.Sprintf("/etc/config/%s", name),
},
)
}
}
// WithDaemonSetSecretVolume adds a volume mount for a Secret to a DaemonSet.
func WithDaemonSetSecretVolume(name string) DaemonSetOption {
return func(ds *appsv1.DaemonSet) {
volumeName := fmt.Sprintf("secret-%s", name)
ds.Spec.Template.Spec.Volumes = append(ds.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: name,
},
},
})
ds.Spec.Template.Spec.Containers[0].VolumeMounts = append(
ds.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: fmt.Sprintf("/etc/secrets/%s", name),
},
)
}
}
// WithDaemonSetInitContainer adds an init container that references ConfigMap and/or Secret.
func WithDaemonSetInitContainer(cmName, secretName string) DaemonSetOption {
return func(ds *appsv1.DaemonSet) {
initContainer := corev1.Container{
Name: "init",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo init done"},
}
if cmName != "" {
initContainer.EnvFrom = append(initContainer.EnvFrom, corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
})
}
if secretName != "" {
initContainer.EnvFrom = append(initContainer.EnvFrom, corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
},
})
}
ds.Spec.Template.Spec.InitContainers = append(ds.Spec.Template.Spec.InitContainers, initContainer)
}
}
// WithDaemonSetInitContainerVolume adds an init container with ConfigMap/Secret volume mounts.
func WithDaemonSetInitContainerVolume(cmName, secretName string) DaemonSetOption {
return func(ds *appsv1.DaemonSet) {
initContainer := corev1.Container{
Name: "init",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo init done"},
}
if cmName != "" {
volumeName := fmt.Sprintf("init-cm-%s", cmName)
ds.Spec.Template.Spec.Volumes = append(ds.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
},
})
initContainer.VolumeMounts = append(initContainer.VolumeMounts, corev1.VolumeMount{
Name: volumeName,
MountPath: fmt.Sprintf("/etc/init-config/%s", cmName),
})
}
if secretName != "" {
volumeName := fmt.Sprintf("init-secret-%s", secretName)
ds.Spec.Template.Spec.Volumes = append(ds.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
})
initContainer.VolumeMounts = append(initContainer.VolumeMounts, corev1.VolumeMount{
Name: volumeName,
MountPath: fmt.Sprintf("/etc/init-secrets/%s", secretName),
})
}
ds.Spec.Template.Spec.InitContainers = append(ds.Spec.Template.Spec.InitContainers, initContainer)
return []DaemonSetOption{
func(ds *appsv1.DaemonSet) {
// Set annotations on DaemonSet level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if ds.Annotations == nil {
ds.Annotations = make(map[string]string)
}
for k, v := range cfg.Annotations {
ds.Annotations[k] = v
}
}
ApplyWorkloadConfig(&ds.Spec.Template.Spec, cfg)
},
}
}
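Every refactored `build*Options` closure now delegates the pod-spec work to a shared `ApplyWorkloadConfig(&podSpec, cfg)`, which this diff does not include. A hedged, stdlib-only sketch of the flag-driven dispatch it presumably performs — the struct fields and string encodings here are illustrative stand-ins for `WorkloadConfig` and `corev1.PodSpec`:

```go
package main

import "fmt"

// Simplified stand-ins for WorkloadConfig and corev1.PodSpec.
type workloadConfig struct {
	configMapName       string
	secretName          string
	useConfigMapEnvFrom bool
	useSecretVolume     bool
}
type podSpec struct {
	envFrom []string // names of envFrom sources
	volumes []string // names of mounted volumes
}

// applyWorkloadConfig sketches the shared helper: inspect the config flags
// once and mutate the pod spec accordingly, instead of maintaining one With*
// option per resource kind and workload type.
func applyWorkloadConfig(ps *podSpec, cfg workloadConfig) {
	if cfg.useConfigMapEnvFrom && cfg.configMapName != "" {
		ps.envFrom = append(ps.envFrom, "configMapRef:"+cfg.configMapName)
	}
	if cfg.useSecretVolume && cfg.secretName != "" {
		ps.volumes = append(ps.volumes, "secret-"+cfg.secretName)
	}
	// The real helper would also cover keyRefs, projected volumes,
	// init containers, and multiple containers.
}

func main() {
	ps := &podSpec{}
	applyWorkloadConfig(ps, workloadConfig{
		configMapName:       "app-config",
		secretName:          "app-secret",
		useConfigMapEnvFrom: true,
		useSecretVolume:     true,
	})
	fmt.Println(ps.envFrom, ps.volumes)
}
```

Centralizing the dispatch this way is what shrinks each adapter from ~150 lines of option plumbing to a single annotation-plus-delegate closure.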


@@ -4,6 +4,7 @@ import (
"context"
"time"
appsv1 "k8s.io/api/apps/v1"
"k8s.io/client-go/kubernetes"
)
@@ -61,72 +62,18 @@ func (a *DeploymentAdapter) RequiresSpecialHandling() bool {
// buildDeploymentOptions converts WorkloadConfig to DeploymentOption slice.
func buildDeploymentOptions(cfg WorkloadConfig) []DeploymentOption {
var opts []DeploymentOption
// Add annotations
if len(cfg.Annotations) > 0 {
opts = append(opts, WithAnnotations(cfg.Annotations))
return []DeploymentOption{
func(d *appsv1.Deployment) {
// Set annotations on deployment level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if d.Annotations == nil {
d.Annotations = make(map[string]string)
}
for k, v := range cfg.Annotations {
d.Annotations[k] = v
}
}
ApplyWorkloadConfig(&d.Spec.Template.Spec, cfg)
},
}
// Add envFrom references
if cfg.UseConfigMapEnvFrom && cfg.ConfigMapName != "" {
opts = append(opts, WithConfigMapEnvFrom(cfg.ConfigMapName))
}
if cfg.UseSecretEnvFrom && cfg.SecretName != "" {
opts = append(opts, WithSecretEnvFrom(cfg.SecretName))
}
// Add volume mounts
if cfg.UseConfigMapVolume && cfg.ConfigMapName != "" {
opts = append(opts, WithConfigMapVolume(cfg.ConfigMapName))
}
if cfg.UseSecretVolume && cfg.SecretName != "" {
opts = append(opts, WithSecretVolume(cfg.SecretName))
}
// Add projected volume
if cfg.UseProjectedVolume {
opts = append(opts, WithProjectedVolume(cfg.ConfigMapName, cfg.SecretName))
}
// Add valueFrom references
if cfg.UseConfigMapKeyRef && cfg.ConfigMapName != "" {
key := cfg.ConfigMapKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "CONFIG_VAR"
}
opts = append(opts, WithConfigMapKeyRef(cfg.ConfigMapName, key, envVar))
}
if cfg.UseSecretKeyRef && cfg.SecretName != "" {
key := cfg.SecretKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "SECRET_VAR"
}
opts = append(opts, WithSecretKeyRef(cfg.SecretName, key, envVar))
}
// Add init container with envFrom
if cfg.UseInitContainer {
opts = append(opts, WithInitContainer(cfg.ConfigMapName, cfg.SecretName))
}
// Add init container with volume mount
if cfg.UseInitContainerVolume {
opts = append(opts, WithInitContainerVolume(cfg.ConfigMapName, cfg.SecretName))
}
// Add multiple containers
if cfg.MultipleContainers > 1 {
opts = append(opts, WithMultipleContainers(cfg.MultipleContainers))
}
return opts
}


@@ -5,7 +5,6 @@ import (
"time"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/client-go/kubernetes"
)
@@ -83,125 +82,18 @@ func (a *JobAdapter) WaitForRecreation(ctx context.Context, namespace, name, ori
// buildJobOptions converts WorkloadConfig to JobOption slice.
func buildJobOptions(cfg WorkloadConfig) []JobOption {
var opts []JobOption
// Add annotations
if len(cfg.Annotations) > 0 {
opts = append(opts, WithJobAnnotations(cfg.Annotations))
}
// Add envFrom references
if cfg.UseConfigMapEnvFrom && cfg.ConfigMapName != "" {
opts = append(opts, WithJobConfigMapEnvFrom(cfg.ConfigMapName))
}
if cfg.UseSecretEnvFrom && cfg.SecretName != "" {
opts = append(opts, WithJobSecretEnvFrom(cfg.SecretName))
}
// Add volume mounts
if cfg.UseConfigMapVolume && cfg.ConfigMapName != "" {
opts = append(opts, WithJobConfigMapVolume(cfg.ConfigMapName))
}
if cfg.UseSecretVolume && cfg.SecretName != "" {
opts = append(opts, WithJobSecretVolume(cfg.SecretName))
}
// Add projected volume
if cfg.UseProjectedVolume {
opts = append(opts, WithJobProjectedVolume(cfg.ConfigMapName, cfg.SecretName))
}
return opts
}
// WithJobConfigMapVolume adds a volume mount for a ConfigMap to a Job.
func WithJobConfigMapVolume(name string) JobOption {
return func(j *batchv1.Job) {
volumeName := "cm-" + name
j.Spec.Template.Spec.Volumes = append(
j.Spec.Template.Spec.Volumes,
corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
},
)
j.Spec.Template.Spec.Containers[0].VolumeMounts = append(
j.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: "/etc/config/" + name,
},
)
}
}
// WithJobSecretVolume adds a volume mount for a Secret to a Job.
func WithJobSecretVolume(name string) JobOption {
return func(j *batchv1.Job) {
volumeName := "secret-" + name
j.Spec.Template.Spec.Volumes = append(
j.Spec.Template.Spec.Volumes,
corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: name,
},
},
},
)
j.Spec.Template.Spec.Containers[0].VolumeMounts = append(
j.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: "/etc/secrets/" + name,
},
)
}
}
// WithJobProjectedVolume adds a projected volume with ConfigMap and/or Secret sources to a Job.
func WithJobProjectedVolume(cmName, secretName string) JobOption {
return func(j *batchv1.Job) {
volumeName := "projected-config"
sources := []corev1.VolumeProjection{}
if cmName != "" {
sources = append(sources, corev1.VolumeProjection{
ConfigMap: &corev1.ConfigMapProjection{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
})
}
if secretName != "" {
sources = append(sources, corev1.VolumeProjection{
Secret: &corev1.SecretProjection{
LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
},
})
}
j.Spec.Template.Spec.Volumes = append(
j.Spec.Template.Spec.Volumes,
corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Projected: &corev1.ProjectedVolumeSource{
Sources: sources,
},
},
},
)
j.Spec.Template.Spec.Containers[0].VolumeMounts = append(
j.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: "/etc/projected",
},
)
return []JobOption{
func(job *batchv1.Job) {
// Set annotations on Job level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if job.Annotations == nil {
job.Annotations = make(map[string]string)
}
for k, v := range cfg.Annotations {
job.Annotations[k] = v
}
}
ApplyWorkloadConfig(&job.Spec.Template.Spec, cfg)
},
}
}


@@ -2,24 +2,28 @@ package utils
import (
"context"
"fmt"
"strings"
"time"
openshiftappsv1 "github.com/openshift/api/apps/v1"
openshiftclient "github.com/openshift/client-go/apps/clientset/versioned"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/dynamic"
)
// DCOption is a function that modifies a DeploymentConfig.
type DCOption func(*openshiftappsv1.DeploymentConfig)
// DeploymentConfigAdapter implements WorkloadAdapter for OpenShift DeploymentConfigs.
type DeploymentConfigAdapter struct {
dynamicClient dynamic.Interface
openshiftClient openshiftclient.Interface
}
// NewDeploymentConfigAdapter creates a new DeploymentConfigAdapter.
func NewDeploymentConfigAdapter(dynamicClient dynamic.Interface) *DeploymentConfigAdapter {
return &DeploymentConfigAdapter{dynamicClient: dynamicClient}
func NewDeploymentConfigAdapter(openshiftClient openshiftclient.Interface) *DeploymentConfigAdapter {
return &DeploymentConfigAdapter{
openshiftClient: openshiftClient,
}
}
// Type returns the workload type.
@@ -29,28 +33,33 @@ func (a *DeploymentConfigAdapter) Type() WorkloadType {
// Create creates a DeploymentConfig with the given config.
func (a *DeploymentConfigAdapter) Create(ctx context.Context, namespace, name string, cfg WorkloadConfig) error {
opts := buildDCOptions(cfg)
return CreateDeploymentConfig(ctx, a.dynamicClient, namespace, name, opts...)
dc := baseDeploymentConfig(name)
opts := buildDeploymentConfigOptions(cfg)
for _, opt := range opts {
opt(dc)
}
_, err := a.openshiftClient.AppsV1().DeploymentConfigs(namespace).Create(ctx, dc, metav1.CreateOptions{})
return err
}
// Delete removes the DeploymentConfig.
func (a *DeploymentConfigAdapter) Delete(ctx context.Context, namespace, name string) error {
return DeleteDeploymentConfig(ctx, a.dynamicClient, namespace, name)
return a.openshiftClient.AppsV1().DeploymentConfigs(namespace).Delete(ctx, name, metav1.DeleteOptions{})
}
// WaitReady waits for the DeploymentConfig to be ready.
func (a *DeploymentConfigAdapter) WaitReady(ctx context.Context, namespace, name string, timeout time.Duration) error {
return WaitForDeploymentConfigReady(ctx, a.dynamicClient, namespace, name, timeout)
return WaitForDeploymentConfigReady(ctx, a.openshiftClient, namespace, name, timeout)
}
// WaitReloaded waits for the DeploymentConfig to have the reload annotation.
func (a *DeploymentConfigAdapter) WaitReloaded(ctx context.Context, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
return WaitForDeploymentConfigReloaded(ctx, a.dynamicClient, namespace, name, annotationKey, timeout)
return WaitForDeploymentConfigReloaded(ctx, a.openshiftClient, namespace, name, annotationKey, timeout)
}
// WaitEnvVar waits for the DeploymentConfig to have a STAKATER_ env var.
func (a *DeploymentConfigAdapter) WaitEnvVar(ctx context.Context, namespace, name, prefix string, timeout time.Duration) (bool, error) {
return WaitForDeploymentConfigEnvVar(ctx, a.dynamicClient, namespace, name, prefix, timeout)
return WaitForDeploymentConfigEnvVar(ctx, a.openshiftClient, namespace, name, prefix, timeout)
}
// SupportsEnvVarStrategy returns true as DeploymentConfigs support env var reload strategy.
@@ -63,278 +72,92 @@ func (a *DeploymentConfigAdapter) RequiresSpecialHandling() bool {
return false
}
// buildDCOptions converts WorkloadConfig to DCOption slice.
func buildDCOptions(cfg WorkloadConfig) []DCOption {
var opts []DCOption
// Add annotations (to pod template)
if len(cfg.Annotations) > 0 {
opts = append(opts, WithDCAnnotations(cfg.Annotations))
}
// Add envFrom references
if cfg.UseConfigMapEnvFrom && cfg.ConfigMapName != "" {
opts = append(opts, WithDCConfigMapEnvFrom(cfg.ConfigMapName))
}
if cfg.UseSecretEnvFrom && cfg.SecretName != "" {
opts = append(opts, WithDCSecretEnvFrom(cfg.SecretName))
}
// Add volume mounts
if cfg.UseConfigMapVolume && cfg.ConfigMapName != "" {
opts = append(opts, WithDCConfigMapVolume(cfg.ConfigMapName))
}
if cfg.UseSecretVolume && cfg.SecretName != "" {
opts = append(opts, WithDCSecretVolume(cfg.SecretName))
}
// Add projected volume
if cfg.UseProjectedVolume {
opts = append(opts, WithDCProjectedVolume(cfg.ConfigMapName, cfg.SecretName))
}
// Add valueFrom references
if cfg.UseConfigMapKeyRef && cfg.ConfigMapName != "" {
key := cfg.ConfigMapKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "CONFIG_VAR"
}
opts = append(opts, WithDCConfigMapKeyRef(cfg.ConfigMapName, key, envVar))
}
if cfg.UseSecretKeyRef && cfg.SecretName != "" {
key := cfg.SecretKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "SECRET_VAR"
}
opts = append(opts, WithDCSecretKeyRef(cfg.SecretName, key, envVar))
}
// Add init container with envFrom
if cfg.UseInitContainer {
opts = append(opts, WithDCInitContainer(cfg.ConfigMapName, cfg.SecretName))
}
// Add init container with volume mount
if cfg.UseInitContainerVolume {
opts = append(opts, WithDCInitContainerVolume(cfg.ConfigMapName, cfg.SecretName))
}
return opts
}
// WithDCProjectedVolume adds a projected volume with ConfigMap and/or Secret sources to a DeploymentConfig.
func WithDCProjectedVolume(cmName, secretName string) DCOption {
return func(dc *unstructured.Unstructured) {
volumeName := "projected-config"
sources := []interface{}{}
if cmName != "" {
sources = append(sources, map[string]interface{}{
"configMap": map[string]interface{}{
"name": cmName,
// baseDeploymentConfig returns a minimal DeploymentConfig template.
func baseDeploymentConfig(name string) *openshiftappsv1.DeploymentConfig {
return &openshiftappsv1.DeploymentConfig{
ObjectMeta: metav1.ObjectMeta{Name: name},
Spec: openshiftappsv1.DeploymentConfigSpec{
Replicas: 1,
Selector: map[string]string{"app": name},
Template: &corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"app": name},
},
})
}
if secretName != "" {
sources = append(sources, map[string]interface{}{
"secret": map[string]interface{}{
"name": secretName,
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
Name: "main",
Image: DefaultImage,
Command: []string{"sh", "-c", DefaultCommand},
}},
},
})
}
// Add volume
volumes, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "volumes")
volumes = append(volumes, map[string]interface{}{
"name": volumeName,
"projected": map[string]interface{}{
"sources": sources,
},
})
_ = unstructured.SetNestedSlice(dc.Object, volumes, "spec", "template", "spec", "volumes")
// Add volumeMount
containers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
volumeMounts, _, _ := unstructured.NestedSlice(container, "volumeMounts")
volumeMounts = append(volumeMounts, map[string]interface{}{
"name": volumeName,
"mountPath": "/etc/projected",
})
container["volumeMounts"] = volumeMounts
containers[0] = container
_ = unstructured.SetNestedSlice(dc.Object, containers, "spec", "template", "spec", "containers")
}
Triggers: openshiftappsv1.DeploymentTriggerPolicies{
{Type: openshiftappsv1.DeploymentTriggerOnConfigChange},
},
},
}
}
// WithDCConfigMapKeyRef adds an env var with valueFrom.configMapKeyRef to a DeploymentConfig.
func WithDCConfigMapKeyRef(cmName, key, envVarName string) DCOption {
return func(dc *unstructured.Unstructured) {
containers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
env, _, _ := unstructured.NestedSlice(container, "env")
env = append(env, map[string]interface{}{
"name": envVarName,
"valueFrom": map[string]interface{}{
"configMapKeyRef": map[string]interface{}{
"name": cmName,
"key": key,
},
},
})
container["env"] = env
containers[0] = container
_ = unstructured.SetNestedSlice(dc.Object, containers, "spec", "template", "spec", "containers")
}
// buildDeploymentConfigOptions converts WorkloadConfig to DCOption slice.
func buildDeploymentConfigOptions(cfg WorkloadConfig) []DCOption {
return []DCOption{
func(dc *openshiftappsv1.DeploymentConfig) {
// Set annotations on DeploymentConfig level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if dc.Annotations == nil {
dc.Annotations = make(map[string]string)
}
for k, v := range cfg.Annotations {
dc.Annotations[k] = v
}
}
if dc.Spec.Template != nil {
ApplyWorkloadConfig(&dc.Spec.Template.Spec, cfg)
}
},
}
}
// WithDCSecretKeyRef adds an env var with valueFrom.secretKeyRef to a DeploymentConfig.
func WithDCSecretKeyRef(secretName, key, envVarName string) DCOption {
return func(dc *unstructured.Unstructured) {
containers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "containers")
if len(containers) > 0 {
container := containers[0].(map[string]interface{})
env, _, _ := unstructured.NestedSlice(container, "env")
env = append(env, map[string]interface{}{
"name": envVarName,
"valueFrom": map[string]interface{}{
"secretKeyRef": map[string]interface{}{
"name": secretName,
"key": key,
},
},
})
container["env"] = env
containers[0] = container
_ = unstructured.SetNestedSlice(dc.Object, containers, "spec", "template", "spec", "containers")
}
}
}
// WithDCInitContainer adds an init container that references ConfigMap and/or Secret via envFrom.
func WithDCInitContainer(cmName, secretName string) DCOption {
return func(dc *unstructured.Unstructured) {
initContainer := map[string]interface{}{
"name": "init",
"image": DefaultImage,
"command": []interface{}{"sh", "-c", "echo init done"},
}
envFrom := []interface{}{}
if cmName != "" {
envFrom = append(envFrom, map[string]interface{}{
"configMapRef": map[string]interface{}{
"name": cmName,
},
})
}
if secretName != "" {
envFrom = append(envFrom, map[string]interface{}{
"secretRef": map[string]interface{}{
"name": secretName,
},
})
}
if len(envFrom) > 0 {
initContainer["envFrom"] = envFrom
}
initContainers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "initContainers")
initContainers = append(initContainers, initContainer)
_ = unstructured.SetNestedSlice(dc.Object, initContainers, "spec", "template", "spec", "initContainers")
}
}
// WithDCInitContainerVolume adds an init container with ConfigMap/Secret volume mounts to a DeploymentConfig.
func WithDCInitContainerVolume(cmName, secretName string) DCOption {
return func(dc *unstructured.Unstructured) {
initContainer := map[string]interface{}{
"name": "init",
"image": DefaultImage,
"command": []interface{}{"sh", "-c", "echo init done"},
}
volumeMounts := []interface{}{}
volumes, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "volumes")
if cmName != "" {
volumeName := fmt.Sprintf("init-cm-%s", cmName)
volumes = append(volumes, map[string]interface{}{
"name": volumeName,
"configMap": map[string]interface{}{
"name": cmName,
},
})
volumeMounts = append(volumeMounts, map[string]interface{}{
"name": volumeName,
"mountPath": fmt.Sprintf("/etc/init-config/%s", cmName),
})
}
if secretName != "" {
volumeName := fmt.Sprintf("init-secret-%s", secretName)
volumes = append(volumes, map[string]interface{}{
"name": volumeName,
"secret": map[string]interface{}{
"secretName": secretName,
},
})
volumeMounts = append(volumeMounts, map[string]interface{}{
"name": volumeName,
"mountPath": fmt.Sprintf("/etc/init-secrets/%s", secretName),
})
}
if len(volumeMounts) > 0 {
initContainer["volumeMounts"] = volumeMounts
}
_ = unstructured.SetNestedSlice(dc.Object, volumes, "spec", "template", "spec", "volumes")
initContainers, _, _ := unstructured.NestedSlice(dc.Object, "spec", "template", "spec", "initContainers")
initContainers = append(initContainers, initContainer)
_ = unstructured.SetNestedSlice(dc.Object, initContainers, "spec", "template", "spec", "initContainers")
}
}
// WaitForDeploymentConfigReady waits for a DeploymentConfig to have all replicas ready, using the typed client.
func WaitForDeploymentConfigReady(ctx context.Context, client openshiftclient.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, DefaultInterval, timeout, true, func(ctx context.Context) (bool, error) {
		dc, err := client.AppsV1().DeploymentConfigs(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		if dc.Spec.Replicas > 0 && dc.Status.ReadyReplicas == dc.Spec.Replicas {
			return true, nil
		}
		return false, nil
	})
}
// WaitForDeploymentConfigReloaded waits for a DeploymentConfig's pod template to have the reloader annotation.
func WaitForDeploymentConfigReloaded(ctx context.Context, client openshiftclient.Interface, namespace, name, annotationKey string, timeout time.Duration) (bool, error) {
return WaitForAnnotation(ctx, func(ctx context.Context) (map[string]string, error) {
dc, err := client.AppsV1().DeploymentConfigs(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return nil, err
}
if dc.Spec.Template != nil {
return dc.Spec.Template.Annotations, nil
}
return nil, nil
}, annotationKey, timeout)
}
// WaitForDeploymentConfigEnvVar waits for a DeploymentConfig's container to have an env var with the given prefix.
func WaitForDeploymentConfigEnvVar(ctx context.Context, client openshiftclient.Interface, namespace, name, prefix string, timeout time.Duration) (bool, error) {
return WaitForEnvVarPrefix(ctx, func(ctx context.Context) ([]corev1.Container, error) {
dc, err := client.AppsV1().DeploymentConfigs(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return nil, err
}
if dc.Spec.Template != nil {
return dc.Spec.Template.Spec.Containers, nil
}
return nil, nil
}, prefix, timeout)
}

View File

@@ -2,11 +2,9 @@ package utils
import (
"context"
"fmt"
"time"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/client-go/kubernetes"
)
@@ -64,183 +62,18 @@ func (a *StatefulSetAdapter) RequiresSpecialHandling() bool {
// buildStatefulSetOptions converts WorkloadConfig to StatefulSetOption slice.
func buildStatefulSetOptions(cfg WorkloadConfig) []StatefulSetOption {
var opts []StatefulSetOption
// Add annotations
if len(cfg.Annotations) > 0 {
opts = append(opts, WithStatefulSetAnnotations(cfg.Annotations))
}
// Add envFrom references
if cfg.UseConfigMapEnvFrom && cfg.ConfigMapName != "" {
opts = append(opts, WithStatefulSetConfigMapEnvFrom(cfg.ConfigMapName))
}
if cfg.UseSecretEnvFrom && cfg.SecretName != "" {
opts = append(opts, WithStatefulSetSecretEnvFrom(cfg.SecretName))
}
// Add volume mounts
if cfg.UseConfigMapVolume && cfg.ConfigMapName != "" {
opts = append(opts, WithStatefulSetConfigMapVolume(cfg.ConfigMapName))
}
if cfg.UseSecretVolume && cfg.SecretName != "" {
opts = append(opts, WithStatefulSetSecretVolume(cfg.SecretName))
}
// Add projected volume
if cfg.UseProjectedVolume {
opts = append(opts, WithStatefulSetProjectedVolume(cfg.ConfigMapName, cfg.SecretName))
}
// Add valueFrom references
if cfg.UseConfigMapKeyRef && cfg.ConfigMapName != "" {
key := cfg.ConfigMapKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "CONFIG_VAR"
}
opts = append(opts, WithStatefulSetConfigMapKeyRef(cfg.ConfigMapName, key, envVar))
}
if cfg.UseSecretKeyRef && cfg.SecretName != "" {
key := cfg.SecretKey
if key == "" {
key = "key"
}
envVar := cfg.EnvVarName
if envVar == "" {
envVar = "SECRET_VAR"
}
opts = append(opts, WithStatefulSetSecretKeyRef(cfg.SecretName, key, envVar))
}
// Add init container with envFrom
if cfg.UseInitContainer {
opts = append(opts, WithStatefulSetInitContainer(cfg.ConfigMapName, cfg.SecretName))
}
// Add init container with volume mount
if cfg.UseInitContainerVolume {
opts = append(opts, WithStatefulSetInitContainerVolume(cfg.ConfigMapName, cfg.SecretName))
}
return opts
}
// WithStatefulSetConfigMapVolume adds a volume mount for a ConfigMap to a StatefulSet.
func WithStatefulSetConfigMapVolume(name string) StatefulSetOption {
return func(ss *appsv1.StatefulSet) {
volumeName := fmt.Sprintf("cm-%s", name)
ss.Spec.Template.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: name},
},
},
})
ss.Spec.Template.Spec.Containers[0].VolumeMounts = append(
ss.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: fmt.Sprintf("/etc/config/%s", name),
},
)
}
}
// WithStatefulSetSecretVolume adds a volume mount for a Secret to a StatefulSet.
func WithStatefulSetSecretVolume(name string) StatefulSetOption {
return func(ss *appsv1.StatefulSet) {
volumeName := fmt.Sprintf("secret-%s", name)
ss.Spec.Template.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: name,
},
},
})
ss.Spec.Template.Spec.Containers[0].VolumeMounts = append(
ss.Spec.Template.Spec.Containers[0].VolumeMounts,
corev1.VolumeMount{
Name: volumeName,
MountPath: fmt.Sprintf("/etc/secrets/%s", name),
},
)
}
}
// WithStatefulSetInitContainer adds an init container that references ConfigMap and/or Secret.
func WithStatefulSetInitContainer(cmName, secretName string) StatefulSetOption {
return func(ss *appsv1.StatefulSet) {
initContainer := corev1.Container{
Name: "init",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo init done"},
}
if cmName != "" {
initContainer.EnvFrom = append(initContainer.EnvFrom, corev1.EnvFromSource{
ConfigMapRef: &corev1.ConfigMapEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
})
}
if secretName != "" {
initContainer.EnvFrom = append(initContainer.EnvFrom, corev1.EnvFromSource{
SecretRef: &corev1.SecretEnvSource{
LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
},
})
}
ss.Spec.Template.Spec.InitContainers = append(ss.Spec.Template.Spec.InitContainers, initContainer)
}
}
// WithStatefulSetInitContainerVolume adds an init container with ConfigMap/Secret volume mounts.
func WithStatefulSetInitContainerVolume(cmName, secretName string) StatefulSetOption {
return func(ss *appsv1.StatefulSet) {
initContainer := corev1.Container{
Name: "init",
Image: DefaultImage,
Command: []string{"sh", "-c", "echo init done"},
}
if cmName != "" {
volumeName := fmt.Sprintf("init-cm-%s", cmName)
ss.Spec.Template.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
},
},
})
initContainer.VolumeMounts = append(initContainer.VolumeMounts, corev1.VolumeMount{
Name: volumeName,
MountPath: fmt.Sprintf("/etc/init-config/%s", cmName),
})
}
if secretName != "" {
volumeName := fmt.Sprintf("init-secret-%s", secretName)
ss.Spec.Template.Spec.Volumes = append(ss.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
})
initContainer.VolumeMounts = append(initContainer.VolumeMounts, corev1.VolumeMount{
Name: volumeName,
MountPath: fmt.Sprintf("/etc/init-secrets/%s", secretName),
})
}
ss.Spec.Template.Spec.InitContainers = append(ss.Spec.Template.Spec.InitContainers, initContainer)
	}
}

// buildStatefulSetOptions converts WorkloadConfig to a StatefulSetOption slice,
// delegating the shared pod-spec wiring to ApplyWorkloadConfig.
func buildStatefulSetOptions(cfg WorkloadConfig) []StatefulSetOption {
	return []StatefulSetOption{
		func(sts *appsv1.StatefulSet) {
			// Set annotations on the StatefulSet itself (where Reloader checks them)
			if len(cfg.Annotations) > 0 {
				if sts.Annotations == nil {
					sts.Annotations = make(map[string]string)
				}
				for k, v := range cfg.Annotations {
					sts.Annotations[k] = v
				}
			}
			ApplyWorkloadConfig(&sts.Spec.Template.Spec, cfg)
		},
	}
}