diff --git a/.gitbook.yaml b/.gitbook.yaml index d13970bf..1c8ec9fc 100644 --- a/.gitbook.yaml +++ b/.gitbook.yaml @@ -13,3 +13,4 @@ redirects: usage/skipper-progressive-delivery: tutorials/skipper-progressive-delivery.md usage/crossover-progressive-delivery: tutorials/crossover-progressive-delivery.md usage/traefik-progressive-delivery: tutorials/traefik-progressive-delivery.md + usage/osm-progressive-delivery: tutorials/osm-progressive-delivery.md diff --git a/README.md b/README.md index 25245e57..80bb1ce7 100644 --- a/README.md +++ b/README.md @@ -13,7 +13,7 @@ by gradually shifting traffic to the new version while measuring metrics and run ![flagger-overview](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-overview.png) Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring) -using a service mesh (App Mesh, Istio, Linkerd) +using a service mesh (App Mesh, Istio, Linkerd, Open Service Mesh) or an ingress controller (Contour, Gloo, NGINX, Skipper, Traefik) for traffic routing. For release analysis, Flagger can query Prometheus, Datadog, New Relic or CloudWatch and for alerting it uses Slack, MS Teams, Discord and Rocket. @@ -43,6 +43,7 @@ Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.ap * [NGINX Ingress](https://docs.flagger.app/tutorials/nginx-progressive-delivery) * [Skipper](https://docs.flagger.app/tutorials/skipper-progressive-delivery) * [Traefik](https://docs.flagger.app/tutorials/traefik-progressive-delivery) + * [Open Service Mesh (OSM)](https://docs.flagger.app/tutorials/osm-progressive-delivery) * [Kubernetes Blue/Green](https://docs.flagger.app/tutorials/kubernetes-blue-green) ### Who is using Flagger @@ -70,7 +71,7 @@ metadata: namespace: test spec: # service mesh provider (optional) - # can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, contour, gloo, supergloo, traefik + # can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, contour, gloo, supergloo, traefik, osm # for SMI TrafficSplit can be: smi:v1alpha1, smi:v1alpha2, smi:v1alpha3 provider: istio # deployment reference @@ -182,19 +183,19 @@ For more details on how the canary analysis and promotion works please [read the **Service Mesh** -| Feature | App Mesh | Istio | Linkerd | SMI | Kubernetes CNI | -| ------------------------------------------ | ------------------ | ------------------ | ------------------ | ----------------- | ----------------- | -| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | -| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | -| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | -| Blue/Green deployments (traffic mirroring) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | -| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | -| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | -| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | -| Request duration check 
(L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | -| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | +| Feature | App Mesh | Istio | Linkerd | Open Service Mesh | SMI | Kubernetes CNI | +| ------------------------------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | +| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | +| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | +| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | +| Blue/Green deployments (traffic mirroring) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | +| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | +| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | +| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | +| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | +| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | -For SMI compatible service mesh solutions like Open Service Mesh, Consul Connect or Nginx Service Mesh, +For other SMI compatible service mesh solutions like Consul Connect or Nginx Service Mesh, [Prometheus MetricTemplates](https://docs.flagger.app/usage/metrics#prometheus) can be used to implement the request success rate and request duration checks. diff --git a/charts/flagger/README.md b/charts/flagger/README.md index 98a51cc3..3ef10c6d 100644 --- a/charts/flagger/README.md +++ b/charts/flagger/README.md @@ -7,7 +7,7 @@ Flagger can run automated application analysis, testing, promotion and rollback * A/B Testing (HTTP headers and cookies traffic routing) * Blue/Green (traffic switching and mirroring) -Flagger works with service mesh solutions (Istio, Linkerd, AWS App Mesh) and with Kubernetes ingress controllers +Flagger works with service mesh solutions (Istio, Linkerd, AWS App Mesh, Open Service Mesh) and with Kubernetes ingress controllers (NGINX, Skipper, Gloo, Contour, Traefik). Flagger can be configured to send alerts to various chat platforms such as Slack, Microsoft Teams, Discord and Rocket. 
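
As an illustration, the chart's alerting parameters can be set at install time; the sketch below assumes a Slack incoming webhook (the URL and channel are placeholders, see the chart's configuration section for the full list of values):

```console
$ helm upgrade -i flagger flagger/flagger \
    --set slack.url=https://hooks.slack.com/services/YOUR/WEBHOOK/URL \
    --set slack.channel=general \
    --set slack.user=flagger
```
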
@@ -96,6 +96,15 @@ $ helm upgrade -i flagger flagger/flagger \ --set meshProvider=traefik ``` +To install Flagger for **Open Service Mesh (OSM)** (requires OSM to have been installed with Prometheus): + +```console +$ helm upgrade -i flagger flagger/flagger \ + --namespace=osm-system \ + --set meshProvider=osm \ + --set metricsServer=http://osm-prometheus.osm-system.svc:7070 +``` + The [configuration](#configuration) section lists the parameters that can be configured during installation. ## Uninstalling the Chart diff --git a/charts/loadtester/README.md b/charts/loadtester/README.md index ab1f3192..2b6bc54e 100644 --- a/charts/loadtester/README.md +++ b/charts/loadtester/README.md @@ -26,7 +26,7 @@ helm upgrade -i flagger-loadtester flagger/loadtester The command deploys loadtester on the Kubernetes cluster in the default namespace. > **Tip**: Note that the namespace where you deploy the load tester should -> have the Istio, App Mesh or Linkerd sidecar injection enabled +> have the Istio, App Mesh, Linkerd or Open Service Mesh sidecar injection enabled The [configuration](#configuration) section lists the parameters that can be configured during installation. diff --git a/docs/diagrams/flagger-osm-traffic-split.png b/docs/diagrams/flagger-osm-traffic-split.png new file mode 100644 index 00000000..e37f6b95 Binary files /dev/null and b/docs/diagrams/flagger-osm-traffic-split.png differ diff --git a/docs/gitbook/README.md b/docs/gitbook/README.md index dd5d8500..d4946e5d 100644 --- a/docs/gitbook/README.md +++ b/docs/gitbook/README.md @@ -10,7 +10,7 @@ version in production by gradually shifting traffic to the new version while mea and running conformance tests. Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring) -using a service mesh (App Mesh, Istio, Linkerd) +using a service mesh (App Mesh, Istio, Linkerd, Open Service Mesh) or an ingress controller (Contour, Gloo, NGINX, Skipper, Traefik) for traffic routing. For release analysis, Flagger can query Prometheus, Datadog, New Relic, CloudWatch or Graphite and for alerting it uses Slack, MS Teams, Discord and Rocket. 
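
To illustrate how a Prometheus-backed analysis looks in practice, a custom check can be expressed as a `MetricTemplate`; the sketch below assumes OSM's bundled Prometheus and its `osm_request_duration_ms` histogram (the metric name, address and namespace are examples, not part of this change):

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: latency
  namespace: osm-system
spec:
  provider:
    type: prometheus
    address: http://osm-prometheus.osm-system.svc:7070
  query: |
    histogram_quantile(0.99,
      sum(
        rate(
          osm_request_duration_ms_bucket{
            destination_namespace="{{ namespace }}",
            destination_name="{{ target }}"
          }[{{ interval }}]
        )
      ) by (le)
    )
```
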
@@ -36,6 +36,7 @@ After installing Flagger, you can follow one of these tutorials to get started: * [Istio](tutorials/istio-progressive-delivery.md) * [Linkerd](tutorials/linkerd-progressive-delivery.md) * [AWS App Mesh](tutorials/appmesh-progressive-delivery.md) +* [Open Service Mesh](tutorials/osm-progressive-delivery.md) **Ingress controller tutorials** diff --git a/docs/gitbook/SUMMARY.md b/docs/gitbook/SUMMARY.md index d7e05aee..5439a16d 100644 --- a/docs/gitbook/SUMMARY.md +++ b/docs/gitbook/SUMMARY.md @@ -30,6 +30,7 @@ * [NGINX Canary Deployments](tutorials/nginx-progressive-delivery.md) * [Skipper Canary Deployments](tutorials/skipper-progressive-delivery.md) * [Traefik Canary Deployments](tutorials/traefik-progressive-delivery.md) +* [Open Service Mesh Deployments](tutorials/osm-progressive-delivery.md) * [Blue/Green Deployments](tutorials/kubernetes-blue-green.md) * [Canary analysis with Prometheus Operator](tutorials/prometheus-operator.md) * [Zero downtime deployments](tutorials/zero-downtime-deployments.md) diff --git a/docs/gitbook/install/flagger-install-on-kubernetes.md b/docs/gitbook/install/flagger-install-on-kubernetes.md index 01ba5ef3..1729cdcf 100644 --- a/docs/gitbook/install/flagger-install-on-kubernetes.md +++ b/docs/gitbook/install/flagger-install-on-kubernetes.md @@ -71,6 +71,16 @@ helm upgrade -i flagger flagger/flagger \ --set metricsServer=http://appmesh-prometheus:9090 ``` +Deploy Flagger for **Open Service Mesh (OSM)** (requires OSM to have been installed with Prometheus): + +```console +$ helm upgrade -i flagger flagger/flagger \ +--namespace=osm-system \ +--set crd.create=false \ +--set meshProvider=osm \ +--set metricsServer=http://osm-prometheus.osm-system.svc:7070 +``` + You can install Flagger in any namespace as long as it can talk to the Prometheus service on port 9090. For ingress controllers, the install instructions are: @@ -173,6 +183,14 @@ kustomize build https://github.com/fluxcd/flagger/kustomize/linkerd?ref=main | k This deploys Flagger in the `linkerd` namespace and sets the metrics server URL to Linkerd's Prometheus instance. +Install Flagger for Open Service Mesh: + +```bash +kustomize build https://github.com/fluxcd/flagger/kustomize/osm?ref=main | kubectl apply -f - +``` + +This deploys Flagger in the `osm-system` namespace and sets the metrics server URL to OSM's Prometheus instance. + If you want to install a specific Flagger release, add the version number to the URL: ```bash @@ -202,7 +220,7 @@ metadata: name: app namespace: test spec: - # can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, gloo, traefik + # can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, gloo, traefik, osm # use the kubernetes provider for Blue/Green style deployments provider: nginx ``` diff --git a/docs/gitbook/tutorials/osm-progressive-delivery.md b/docs/gitbook/tutorials/osm-progressive-delivery.md new file mode 100644 index 00000000..caf13400 --- /dev/null +++ b/docs/gitbook/tutorials/osm-progressive-delivery.md @@ -0,0 +1,355 @@ +# Open Service Mesh Canary Deployments + +This guide shows you how to use Open Service Mesh (OSM) and Flagger to automate canary deployments. + +![Flagger OSM Traffic Split](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-osm-traffic-split.png) + +## Prerequisites + +Flagger requires a Kubernetes cluster **v1.16** or newer and Open Service Mesh **0.9.1** or newer. + +Install Open Service Mesh with Prometheus and permissive traffic policy enabled. 
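+
+If you want to verify the prerequisites first, the CLIs can report the versions in use (a quick sanity check; output will vary with your environment):
+
+```bash
+kubectl version --short
+osm version
+```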
+ +```bash +osm install \ +--set=OpenServiceMesh.deployPrometheus=true \ +--set=OpenServiceMesh.enablePermissiveTrafficPolicy=true +``` + +Install Flagger in the `osm-system` namespace using `kubectl`. + +```bash +kubectl apply -k https://github.com/fluxcd/flagger//kustomize/osm?ref=main +``` + +Alternatively, Flagger can be installed in the `osm-system` namespace using `helm`. + +```bash +helm upgrade -i flagger flagger/flagger \ +--namespace=osm-system \ +--set meshProvider=osm \ +--set metricsServer=http://osm-prometheus.osm-system.svc:7070 +``` + +## Bootstrap + +Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), +then creates a series of objects (Kubernetes deployments, ClusterIP services and SMI traffic split). +These objects expose the application inside the mesh and drive the canary analysis and promotion. + +Create a `test` namespace and enable osm namespace monitoring and metrics scraping for the namespace. + +```bash +kubectl create namespace test +osm namespace add test +osm metrics enable --namespace test +``` + +Create a `podinfo` deployment and a horizontal pod autoscaler: + +```bash +kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main +``` + +Install the load testing service to generate traffic during the canary analysis: + +```bash +kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main +``` + +Create a canary custom resource for the `podinfo` deployment. +The following `podinfo` canary custom resource instructs Flagger to: +1. monitor any changes to the `podinfo` deployment created earlier, +2. detect `podinfo` deployment revision changes, and +3. start a Flagger canary analysis, rollout, and promotion if there were deployment revision changes. + +```yaml +apiVersion: flagger.app/v1beta1 +kind: Canary +metadata: + name: podinfo + namespace: test +spec: + provider: osm + # deployment reference + targetRef: + apiVersion: apps/v1 + kind: Deployment + name: podinfo + # HPA reference (optional) + autoscalerRef: + apiVersion: autoscaling/v2beta2 + kind: HorizontalPodAutoscaler + name: podinfo + # the maximum time in seconds for the canary deployment + # to make progress before it is rolled back (default 600s) + progressDeadlineSeconds: 60 + service: + # ClusterIP port number + port: 9898 + # container port number or name (optional) + targetPort: 9898 + analysis: + # schedule interval (default 60s) + interval: 30s + # max number of failed metric checks before rollback + threshold: 5 + # max traffic percentage routed to canary + # percentage (0-100) + maxWeight: 50 + # canary increment step + # percentage (0-100) + stepWeight: 5 + # OSM Prometheus checks + metrics: + - name: request-success-rate + # minimum req success rate (non 5xx responses) + # percentage (0-100) + thresholdRange: + min: 99 + interval: 1m + - name: request-duration + # maximum req duration P99 + # milliseconds + thresholdRange: + max: 500 + interval: 30s + # testing (optional) + webhooks: + - name: acceptance-test + type: pre-rollout + url: http://flagger-loadtester.test/ + timeout: 30s + metadata: + type: bash + cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token" + - name: load-test + type: rollout + url: http://flagger-loadtester.test/ + timeout: 5s + metadata: + cmd: "hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/" +``` + +Save the above resource as podinfo-canary.yaml and then apply it: + +```bash +kubectl apply -f ./podinfo-canary.yaml +``` + +When the canary analysis starts, Flagger will 
call the pre-rollout webhooks before routing traffic to the canary.
+The canary analysis will run for five minutes while validating the HTTP metrics and rollout hooks every half a minute.
+
+After a couple of seconds Flagger will create the canary objects.
+
+```bash
+# applied
+deployment.apps/podinfo
+horizontalpodautoscaler.autoscaling/podinfo
+ingresses.extensions/podinfo
+canary.flagger.app/podinfo
+
+# generated
+deployment.apps/podinfo-primary
+horizontalpodautoscaler.autoscaling/podinfo-primary
+service/podinfo
+service/podinfo-canary
+service/podinfo-primary
+trafficsplits.split.smi-spec.io/podinfo
+```
+
+After the bootstrap, the `podinfo` deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods.
+During the canary analysis, the `podinfo-canary.test` address can be used to target the canary pods directly.
+
+## Automated Canary Promotion
+
+Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP request success rate, average request duration and pod health.
+Based on analysis of the KPIs, a canary is promoted or aborted.
+
+![Flagger Canary Stages](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-canary-steps.png)
+
+Trigger a canary deployment by updating the container image:
+
+```bash
+kubectl -n test set image deployment/podinfo \
+podinfod=stefanprodan/podinfo:3.1.1
+```
+
+Flagger detects that the deployment revision changed and starts a new rollout.
+
+```text
+kubectl -n test describe canary/podinfo
+
+Status:
+  Canary Weight:  0
+  Failed Checks:  0
+  Phase:          Succeeded
+Events:
+  New revision detected! Scaling up podinfo.test
+  Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
+  Pre-rollout check acceptance-test passed
+  Advance podinfo.test canary weight 5
+  Advance podinfo.test canary weight 10
+  Advance podinfo.test canary weight 15
+  Advance podinfo.test canary weight 20
+  Advance podinfo.test canary weight 25
+  Waiting for podinfo.test rollout to finish: 1 of 2 updated replicas are available
+  Advance podinfo.test canary weight 30
+  Advance podinfo.test canary weight 35
+  Advance podinfo.test canary weight 40
+  Advance podinfo.test canary weight 45
+  Advance podinfo.test canary weight 50
+  Copying podinfo.test template spec to podinfo-primary.test
+  Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
+  Promotion completed! Scaling down podinfo.test
+```
+
+**Note** that if you apply any new changes to the `podinfo` deployment during the canary analysis, Flagger will restart the analysis.
+
+A canary deployment is triggered by changes in any of the following objects:
+
+* Deployment PodSpec \(container image, command, ports, env, resources, etc\)
+* ConfigMaps mounted as volumes or mapped to environment variables
+* Secrets mounted as volumes or mapped to environment variables
+
+You can monitor all canaries with:
+
+```bash
+watch kubectl get canaries --all-namespaces
+
+NAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
+test        podinfo   Progressing   15       2019-06-30T14:05:07Z
+prod        frontend  Succeeded     0        2019-06-30T16:15:07Z
+prod        backend   Failed        0        2019-06-30T17:05:07Z
+```
+
+## Automated Rollback
+
+During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses and rolls back the faulted version.
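+
+While an analysis is in progress, you can also watch the traffic shifting by inspecting the SMI TrafficSplit object that Flagger manages. Mid-rollout it looks roughly like the sketch below (the exact `apiVersion` and weight encoding depend on the OSM and Flagger versions in use):
+
+```yaml
+# kubectl -n test get trafficsplit podinfo -o yaml (illustrative output, trimmed)
+apiVersion: split.smi-spec.io/v1alpha2
+kind: TrafficSplit
+metadata:
+  name: podinfo
+  namespace: test
+spec:
+  service: podinfo
+  backends:
+    - service: podinfo-primary
+      weight: 95
+    - service: podinfo-canary
+      weight: 5
+```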
+
+Trigger another canary deployment:
+
+```bash
+kubectl -n test set image deployment/podinfo \
+podinfod=stefanprodan/podinfo:3.1.2
+```
+
+Exec into the load tester pod with:
+
+```bash
+kubectl -n test exec -it flagger-loadtester-xx-xx sh
+```
+
+Repeatedly generate HTTP 500 errors:
+
+```bash
+watch -n 1 curl http://podinfo-canary.test:9898/status/500
+```
+
+Repeatedly generate latency:
+
+```bash
+watch -n 1 curl http://podinfo-canary.test:9898/delay/1
+```
+
+When the number of failed checks reaches the canary analysis threshold defined earlier in the `podinfo` canary custom resource, traffic is routed back to the primary, the canary is scaled to zero, and the rollout is marked as failed.
+
+```text
+kubectl -n test describe canary/podinfo
+
+Status:
+  Canary Weight:  0
+  Failed Checks:  10
+  Phase:          Failed
+Events:
+  Starting canary analysis for podinfo.test
+  Pre-rollout check acceptance-test passed
+  Advance podinfo.test canary weight 5
+  Advance podinfo.test canary weight 10
+  Advance podinfo.test canary weight 15
+  Halt podinfo.test advancement success rate 69.17% < 99%
+  Halt podinfo.test advancement success rate 61.39% < 99%
+  Halt podinfo.test advancement success rate 55.06% < 99%
+  Halt podinfo.test advancement request duration 1.20s > 0.5s
+  Halt podinfo.test advancement request duration 1.45s > 0.5s
+  Rolling back podinfo.test failed checks threshold reached 5
+  Canary failed! Scaling down podinfo.test
+```
+
+## Custom Metrics
+
+The canary analysis can be extended with Prometheus queries.
+
+Let's define a check for HTTP 404 not found errors.
+Edit the canary analysis (`podinfo-canary.yaml` file) and add the following metric.
+For more information on creating additional custom metrics using OSM metrics, please check the [metrics available in OSM](https://docs.openservicemesh.io/docs/guides/observability/metrics/#available-metrics).
+
+```yaml
+  analysis:
+    metrics:
+    - name: "404s percentage"
+      threshold: 3
+      query: |
+        100 - (
+          sum(
+            rate(
+              osm_request_total{
+                destination_namespace="test",
+                destination_kind="Deployment",
+                destination_name="podinfo",
+                response_code!="404"
+              }[1m]
+            )
+          )
+          /
+          sum(
+            rate(
+              osm_request_total{
+                destination_namespace="test",
+                destination_kind="Deployment",
+                destination_name="podinfo"
+              }[1m]
+            )
+          ) * 100
+        )
+```
+
+The above configuration validates the canary version by checking if the HTTP 404 req/sec percentage is below three percent of the total traffic.
+If the 404s rate reaches the 3% threshold, then the analysis is aborted and the canary is marked as failed.
+
+Trigger a canary deployment by updating the container image:
+
+```bash
+kubectl -n test set image deployment/podinfo \
+podinfod=stefanprodan/podinfo:3.1.3
+```
+
+Exec into the load tester pod with:
+
+```bash
+kubectl -n test exec -it flagger-loadtester-xx-xx sh
+```
+
+Repeatedly generate 404s:
+
+```bash
+watch -n 1 curl http://podinfo-canary.test:9898/status/404
+```
+
+Watch Flagger logs to confirm successful canary rollback.
+ +```text +kubectl -n osm-system logs deployment/flagger -f | jq .msg + +Starting canary deployment for podinfo.test +Pre-rollout check acceptance-test passed +Advance podinfo.test canary weight 5 +Halt podinfo.test advancement 404s percentage 6.20 > 3 +Halt podinfo.test advancement 404s percentage 6.45 > 3 +Halt podinfo.test advancement 404s percentage 7.22 > 3 +Halt podinfo.test advancement 404s percentage 6.50 > 3 +Halt podinfo.test advancement 404s percentage 6.34 > 3 +Rolling back podinfo.test failed checks threshold reached 5 +Canary failed! Scaling down podinfo.test +``` diff --git a/docs/gitbook/usage/deployment-strategies.md b/docs/gitbook/usage/deployment-strategies.md index 29a33186..09011868 100644 --- a/docs/gitbook/usage/deployment-strategies.md +++ b/docs/gitbook/usage/deployment-strategies.md @@ -3,11 +3,11 @@ Flagger can run automated application analysis, promotion and rollback for the following deployment strategies: * **Canary Release** \(progressive traffic shifting\) - * Istio, Linkerd, App Mesh, NGINX, Skipper, Contour, Gloo Edge, Traefik + * Istio, Linkerd, App Mesh, NGINX, Skipper, Contour, Gloo Edge, Traefik, Open Service Mesh * **A/B Testing** \(HTTP headers and cookies traffic routing\) * Istio, App Mesh, NGINX, Contour, Gloo Edge * **Blue/Green** \(traffic switching\) - * Kubernetes CNI, Istio, Linkerd, App Mesh, NGINX, Contour, Gloo Edge + * Kubernetes CNI, Istio, Linkerd, App Mesh, NGINX, Contour, Gloo Edge, Open Service Mesh * **Blue/Green Mirroring** \(traffic shadowing\) * Istio diff --git a/kustomize/README.md b/kustomize/README.md index e719ca5a..17c23b14 100644 --- a/kustomize/README.md +++ b/kustomize/README.md @@ -34,6 +34,14 @@ kustomize build https://github.com/fluxcd/flagger/kustomize/linkerd?ref=main | k This deploys Flagger in the `linkerd` namespace and sets the metrics server URL to linkerd-viz extension's Prometheus instance which lives under `linkerd-viz` namespace by default. +Install Flagger for Open Service Mesh: + +```bash +kustomize build https://github.com/fluxcd/flagger/kustomize/osm?ref=main | kubectl apply -f - +``` + +This deploys Flagger in the `osm-system` namespace and sets the metrics server URL to OSM's Prometheus instance. + If you want to install a specific Flagger release, add the version number to the URL: ```bash @@ -68,7 +76,7 @@ metadata: name: app namespace: test spec: - # can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, gloo + # can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, gloo, osm # use the kubernetes provider for Blue/Green style deployments provider: nginx ```
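
For a workload that runs inside the OSM mesh, the same field selects the OSM provider and overrides the global `meshProvider` set at install time, for example (a minimal sketch, names are illustrative):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: app
  namespace: test
spec:
  provider: osm
```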