When cross-namespace references are disabled, ensure that UpstreamRef,
MetricTemplateRef, and AlertProviderRef default to the canary's namespace
if their namespace field is empty. This aligns the validation logic with
the rest of the controller and prevents false positives when the namespace
is omitted.
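A minimal sketch of the intended defaulting, assuming a simplified `CrossNamespaceObjectReference` shape (the real type lives in the Flagger API package, so field names here are illustrative):
```
package validation

// Simplified stand-in for the Flagger cross-namespace reference type;
// the field set is an assumption for illustration only.
type CrossNamespaceObjectReference struct {
	APIVersion string
	Kind       string
	Name       string
	Namespace  string
}

// defaultedNamespace falls back to the canary's namespace when the
// reference leaves its namespace field empty.
func defaultedNamespace(ref CrossNamespaceObjectReference, canaryNamespace string) string {
	if ref.Namespace == "" {
		return canaryNamespace
	}
	return ref.Namespace
}
```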
Fixes #1827
Signed-off-by: Barrera, Angel <angelbarrerasanchez@protonmail.com>
- add CrossNamespaceObjectReference type
- add informers collection to controller
- use the informer cache to query for metric templates (see the sketch after this list)
- rename mock to fixture
- regenerate clientset
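A rough sketch of what reading from the informer cache looks like with plain client-go primitives; the function name and signature are illustrative, not Flagger's generated code:
```
package controller

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// metricTemplateFromCache reads an object from the informer's local store
// instead of calling the API server; the namespace/name key format is the
// standard client-go convention.
func metricTemplateFromCache(informer cache.SharedIndexInformer, namespace, name string) (interface{}, error) {
	key := fmt.Sprintf("%s/%s", namespace, name)
	obj, exists, err := informer.GetStore().GetByKey(key)
	if err != nil {
		return nil, err
	}
	if !exists {
		return nil, fmt.Errorf("metric template %s not found in informer cache", key)
	}
	return obj, nil
}
```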
Assumes `envoy:smi` as the mesh provider name, as I've successfully tested progressive delivery for Envoy + Crossover with it.
This enhances Flagger to translate it into the metrics provider name `envoy` for deployment targets, or `envoy:service` for service targets.
The `envoy` metrics provider is equivalent to `appmesh`, as both rely on the same set of standard metrics exposed by Envoy itself.
The `envoy:service` provider is almost the same as `envoy`, but it drops the condition on the pod name, as we only need to filter on the backing service name (`envoy_cluster_name`). For now, we don't consider other Envoy xDS implementations that use anything other than the original service names as `envoy_cluster_name`.
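A hedged sketch of the translation described above; the function name and exact rules are illustrative only, not the actual Flagger code:
```
package metrics

import "strings"

// metricsProviderFor maps the configured mesh provider to the metrics
// provider used for queries: `envoy:smi` becomes `envoy` for deployment
// targets and `envoy:service` for service targets.
func metricsProviderFor(meshProvider, targetKind string) string {
	if !strings.HasPrefix(meshProvider, "envoy") {
		return meshProvider
	}
	if targetKind == "Service" {
		// filter only on the backing service name (envoy_cluster_name)
		return "envoy:service"
	}
	// deployment targets use the same standard Envoy metrics as appmesh
	return "envoy"
}
```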
Ref #385
Resolves #371
---
This adds support for `corev1.Service` as the `targetRef.kind`, so that we can use Flagger just for canary analysis and traffic shifting on existing, pre-created services. Flagger doesn't touch deployments or HPAs in this mode.
This is useful for keeping full control over the resources backing the service being canary-released, including pods (behind a ClusterIP service) and external services (behind an ExternalName service).
Major use-cases I have in mind are:
- Canary-release a K8s cluster. You create two clusters and a master cluster. In the master cluster, you create two `ExternalName` services pointing to (the hostname of the load balancer of the targeted app instance in) each cluster. Flagger runs on the master cluster and helps safely roll out a new K8s cluster by doing a canary release on the `ExternalName` service.
- You want annotations and labels added to the service for integrating with things like external load balancers (without extending Flagger to support customizing every aspect of the K8s service it manages).
**Design**:
A canary release on a K8s service is almost the same as one on a K8s deployment. The only fundamental difference is that it operates only on a set of K8s services.
For example, one may start by creating two Helm releases for `podinfo-blue` and `podinfo-green`, and a K8s service `podinfo`. The `podinfo` service should initially have the same `Spec` as that of `podinfo-blue`.
On a new release, you update `podinfo-green`, then trigger Flagger by updating the K8s service `podinfo` so that it points to the pods or `externalName` declared in `podinfo-green`. Flagger does the rest. The end result is that traffic to `podinfo` is gradually and safely shifted from `podinfo-blue` to `podinfo-green`.
**How it works**:
Under the hood, Flagger maintains two K8s services, `podinfo-primary` and `podinfo-canary`. Compared to canaries on K8s deployments, it doesn't create the service named `podinfo`, as it is already provided by YOU.
Once Flagger detects the change in the `podinfo` service, it updates the `podinfo-canary` service and the routes, then analyzes the canary. On successful analysis, it promotes the canary service to the `podinfo-primary` service. You expose the `podinfo` service via any L7 ingress solution or a service mesh so that the traffic is managed by Flagger for safe deployments.
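Conceptually, the promotion step copies the routing-relevant parts of the canary Service spec onto the primary. A simplified sketch of that idea (not the actual Flagger promotion code, which also handles metadata, ports, and status):
```
package controller

import corev1 "k8s.io/api/core/v1"

// promoteService copies the fields that decide where traffic goes from the
// canary Service onto the primary Service.
func promoteService(canary, primary *corev1.Service) {
	primary.Spec.Selector = canary.Spec.Selector
	primary.Spec.Type = canary.Spec.Type
	// for ExternalName services the promoted "version" is the hostname
	primary.Spec.ExternalName = canary.Spec.ExternalName
}
```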
**Giving it a try**:
To give it a try, create a `Canary` as usual, but with its `targetRef` pointing to a K8s service:
```
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
spec:
  provider: kubernetes
  targetRef:
    apiVersion: core/v1
    kind: Service
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed checks before rollback
    threshold: 2
    # number of checks to run before promotion
    iterations: 2
    # Prometheus checks based on
    # http_request_duration_seconds histogram
    metrics: []
```
Create a K8s service named `podinfo`, then update it. Now watch the services `podinfo`, `podinfo-primary`, and `podinfo-canary`.
Flagger tracks the `podinfo` service for changes. Upon any change, it reconciles the `podinfo-primary` and `podinfo-canary` services. `podinfo-canary` always replicates the latest `podinfo`. In contrast, `podinfo-primary` replicates the latest successful `podinfo-canary`.
**Notes**:
- For the canary cluster use-case, we would need to write a K8s operator that, e.g. for App Mesh, syncs `ExternalName` services to AppMesh `VirtualNode`s. But that's another story!
Traffic mirroring is a pre-stage for canary deployments. When mirroring
is enabled, at the beginning of a canary deployment traffic is mirrored
to the canary instead of being shifted for one canary period. The service
mesh should mirror by copying the request and sending one copy to the
primary and one copy to the canary; only the response from the primary
is sent to the user. The response from the canary is only used for
collecting metrics.
Once the mirror period is over, the canary proceeds as usual, shifting
traffic from primary to canary until complete.
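For intuition, here is a toy sketch of what the mesh-side mirroring amounts to; Flagger itself only configures the mesh and never proxies traffic, so everything below (names, URLs) is hypothetical:
```
package mirror

import (
	"bytes"
	"io"
	"net/http"
)

// mirrorRequest sends the original request to the primary and a copy to the
// canary; only the primary's response is returned to the caller, while the
// canary's response is discarded (it is observed only through metrics).
func mirrorRequest(client *http.Client, primaryURL, canaryURL string, req *http.Request) (*http.Response, error) {
	body, err := io.ReadAll(req.Body)
	if err != nil {
		return nil, err
	}

	// fire-and-forget copy to the canary
	go func() {
		shadow, err := http.NewRequest(req.Method, canaryURL, bytes.NewReader(body))
		if err != nil {
			return
		}
		if resp, err := client.Do(shadow); err == nil {
			resp.Body.Close()
		}
	}()

	// the user only ever sees the primary's response
	primaryReq, err := http.NewRequest(req.Method, primaryURL, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	return client.Do(primaryReq)
}
```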
Added TestScheduler_Mirroring unit test.
SetupMocks() currently takes a bool switch that tells it to configure
against either a shifting canary or an A/B canary. I'll need a third
canary that has mirroring turned on, so I changed this to an interface
that just takes the canary to configure (and configures the default
shifting canary if you pass nil).
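Roughly, the change looks like this; the types below are placeholders standing in for the real test fixture and the Flagger Canary resource:
```
package controller

// Placeholder stand-ins for the Flagger Canary resource and the test
// fixture bundle.
type Canary struct{ MirrorEnabled bool }
type fixture struct{ canary *Canary }

// newDefaultShiftingCanary builds the plain traffic-shifting canary used by
// most tests.
func newDefaultShiftingCanary() *Canary { return &Canary{} }

// SetupMocks now takes the canary to configure instead of a bool switch;
// passing nil yields the default shifting canary.
func SetupMocks(c *Canary) fixture {
	if c == nil {
		c = newDefaultShiftingCanary()
	}
	return fixture{canary: c}
}
```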
If port discovery is enabled, Flagger scans the deployment pod template and extracts the container ports, excluding the port specified in the canary service spec and the Istio proxy ports. All the extra ports are used when generating the ClusterIP services.
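A sketch of that idea, under the assumption that the Istio sidecar ports to skip include 15000 and 15090 (the exact exclusion list in Flagger may differ):
```
package controller

import corev1 "k8s.io/api/core/v1"

// discoverPorts returns every container port in the pod template except the
// canary service port and the assumed Istio proxy ports.
func discoverPorts(tpl corev1.PodTemplateSpec, canaryPort int32) []int32 {
	// assumption: Envoy admin and Prometheus telemetry ports
	istioPorts := map[int32]bool{15000: true, 15090: true}

	var ports []int32
	for _, c := range tpl.Spec.Containers {
		for _, p := range c.Ports {
			if p.ContainerPort == canaryPort || istioPorts[p.ContainerPort] {
				continue
			}
			ports = append(ports, p.ContainerPort)
		}
	}
	return ports
}
```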