Without this, the canary replicas are updated twice:
first to 1 replica, then after a few seconds to the value of the HPA minReplicas.
In some cases, when scaled to 1 replica (before the HPA controller
updates it to minReplicas), the canary is considered ready: 1 of 1 (readyThreshold 100%),
and the canary weight is advanced, so it receives traffic with less capacity
than expected.
Co-Authored-By: Joshua Gibeon <joshuagibeon7719@gmail.com>
Co-authored-by: Sanskar Jaiswal <hey@aryan.lol>
Signed-off-by: Andy Librian <andylibrian@gmail.com>
Signed-off-by: Karl Heins <karlheins@northwesternmutual.com>
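To make the scenario concrete, here is a minimal sketch of the kind of Canary/HPA pairing the fix above concerns; the names, replica counts, and apiVersions are illustrative, not taken from any specific setup:
```
# Hypothetical HPA with minReplicas > 1: the canary Deployment should be
# scaled straight to 2 replicas instead of to 1 replica first.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 99
---
# Canary referencing the HPA; during analysis the canary workload should
# come up with the HPA's minReplicas before its weight is advanced.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
```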
Support updating primary Deployment/DaemonSet/HPA/Service labels and annotations after first-time rollout
Copying of ConfigMaps and Secrets managed through Flagger should now
follow the same label prefix filtering rules as the workloads.
Extends: #709
Signed-off-by: Aurel Canciu <aurelcanciu@gmail.com>
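As a hypothetical illustration of that filtering rule, assuming Flagger is configured with a label prefix filter of `team.example.com/` (the prefix, names, and labels below are made up):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: podinfo-config
  labels:
    # matches the configured prefix, so it is copied to the primary ConfigMap
    team.example.com/owner: billing
    # does not match the prefix, so it is left off the primary copy
    helm.sh/chart: podinfo-6.0.0
data:
  color: blue
```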
add e2e tests istio
clean up comment from review
clean up logging statement
add log statement on e2e iteration
extend timeout for finalizing
add phase to kustomize crd
revert timeout on circleci
vs and svc checks for istio e2e tests
fix fmt errors and tests
add get statement in e2e test
add namespace to e2e
use only selector for service revert
rebase and squash
fix fmt issues
revert Dockerfile
revert go.mod and go.sum
introduction of finalizer
remove test for finalizer add istio tests
revert Dockerfile and main.go
fmt deployment controller
add unit tests for finalizing
run fmt to clean up formatting
review changes
add kubectl annotation
add kubectl annotation support
- add analysis field to Canary spec (see the sketch below)
- deprecate the canaryAnalysis field (to be removed in the next API version)
- maintain backwards compatibility with v1alpha3 by using spec.canaryAnalysis if spec.analysis is nil
- set the analysis threshold default value to 1
Resolves #371
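A minimal sketch of the renamed field; the apiVersion and analysis values here are illustrative, the point is only that `analysis` takes the place of the deprecated `canaryAnalysis`:
```
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  # replaces the deprecated canaryAnalysis field
  analysis:
    interval: 1m
    # defaults to 1 when omitted
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics: []
```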
---
This adds support for `corev1.Service` as the `targetRef.kind`, so that Flagger can be used just for canary analysis and traffic-shifting on existing, pre-created services. Flagger doesn't touch Deployments and HPAs in this mode.
This is useful for keeping full control over the resources backing the service to be canary-released, including pods (behind a ClusterIP service) and external services (behind an ExternalName service).
Major use-cases I have in mind are:
- Canary-releasing a K8s cluster. You create two clusters and a master cluster. In the master cluster, you create two `ExternalName` services pointing to (the hostname of the load balancer of the targeted app instance in) each cluster; see the sketch after this list. Flagger runs on the master cluster and helps safely roll out a new K8s cluster by doing a canary release on the `ExternalName` service.
- Adding annotations and labels to the service for integrating with things like external load balancers (without extending Flagger to support customizing every aspect of the K8s service it manages).
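For the cluster canary use-case, the two `ExternalName` services in the master cluster could look roughly like this (the hostnames are placeholders for each cluster's load balancer):
```
apiVersion: v1
kind: Service
metadata:
  name: podinfo-blue
spec:
  type: ExternalName
  # load balancer hostname of the app in the "blue" cluster (hypothetical)
  externalName: podinfo.cluster-blue.example.com
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo-green
spec:
  type: ExternalName
  # load balancer hostname of the app in the "green" cluster (hypothetical)
  externalName: podinfo.cluster-green.example.com
```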
**Design**:
A canary release on a K8s service is almost the same as one on a K8s deployment. The only fundamental difference is that it operates only on a set of K8s services.
For example, one may start by creating two Helm releases for `podinfo-blue` and `podinfo-green`, and a K8s service `podinfo`. The `podinfo` service should initially have the same `Spec` as that of `podinfo-blue`.
On a new release, you update `podinfo-green`, then trigger Flagger by updating the K8s service `podinfo` so that it points to the pods or `externalName` declared in `podinfo-green`. Flagger does the rest. The end result is that traffic to `podinfo` is gradually and safely shifted from `podinfo-blue` to `podinfo-green`.
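Concretely, for the Helm-release example the trigger could be nothing more than flipping the selector of the `podinfo` service from the blue release to the green one (the labels are hypothetical):
```
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  ports:
    - port: 9898
      targetPort: 9898
  selector:
    # was: app.kubernetes.io/instance: podinfo-blue
    app.kubernetes.io/instance: podinfo-green
```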
**How it works**:
Under the hood, Flagger maintains two K8s services, `podinfo-primary` and `podinfo-canary`. Compared to canaries on K8s deployments, it doesn't create the service named `podinfo`, as it is already provided by YOU.
Once Flagger detects the change in the `podinfo` service, it updates the `podinfo-canary` service and the routes, then analyzes the canary. On successful analysis, it promotes the canary service to the `podinfo-primary` service. You expose the `podinfo` service via any L7 ingress solution or a service mesh so that the traffic is managed by Flagger for safe deployments.
**Giving it a try**:
To give it a try, create a `Canary` as usual, but its `targetRef` pointed to a K8s service:
```
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
spec:
  provider: kubernetes
  targetRef:
    apiVersion: core/v1
    kind: Service
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed checks before rollback
    threshold: 2
    # number of checks to run before rollback
    iterations: 2
    # Prometheus checks based on
    # http_request_duration_seconds histogram
    metrics: []
```
Create a K8s service named `podinfo`, and update it. Now watch the services `podinfo`, `podinfo-primary`, and `podinfo-canary`.
Flagger tracks the `podinfo` service for changes. Upon any change, it reconciles the `podinfo-primary` and `podinfo-canary` services: `podinfo-canary` always replicates the latest `podinfo`, while `podinfo-primary` replicates the latest successful `podinfo-canary`.
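Sticking with the hypothetical selectors from the earlier sketch, this is roughly the state you would expect to observe while an analysis is in progress:
```
# podinfo-canary replicates the latest podinfo spec
apiVersion: v1
kind: Service
metadata:
  name: podinfo-canary
spec:
  ports:
    - port: 9898
      targetPort: 9898
  selector:
    app.kubernetes.io/instance: podinfo-green
---
# podinfo-primary keeps the last successfully promoted spec
apiVersion: v1
kind: Service
metadata:
  name: podinfo-primary
spec:
  ports:
    - port: 9898
      targetPort: 9898
  selector:
    app.kubernetes.io/instance: podinfo-blue
```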
**Notes**:
- For the canary cluster use-case, we would need to write a K8s operator that syncs the `ExternalName` services to mesh-specific resources, e.g. App Mesh `VirtualNode`s. But that's another story!