Compare commits

...

1146 Commits

Author SHA1 Message Date
Stefan Prodan
1cca5a455b Merge pull request #422 from weaveworks/prep-0.23.0
Release v0.23.0
2020-02-06 15:06:23 +02:00
stefanprodan
1b651500a1 Release v0.23.0 2020-02-06 14:49:04 +02:00
Stefan Prodan
e457b6d35c Merge pull request #420 from ta924/manualrollback
Add support for gated rollback
2020-02-06 13:48:32 +02:00
Tanner Altares
402dda71e6 manual push to trigger build 2020-02-05 19:17:45 -06:00
Tanner Altares
69e969ac51 modify the hook name 2020-02-05 14:49:35 -06:00
Tanner Altares
edbc373109 add docs for manual rollback 2020-02-05 14:14:13 -06:00
Tanner Altares
1d23c0f0a2 update CRD manifest to add rollback enum to webhook validation 2020-02-05 10:29:32 -06:00
Tanner Altares
fa950e1a48 support gated rollback 2020-01-30 15:11:59 -06:00
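For reference, the gated rollback added above is driven by a webhook of type `rollback` in the canary analysis. A minimal sketch (the loadtester host and gate URL are illustrative, not taken from this changeset):

```yaml
  canaryAnalysis:
    webhooks:
      # rollback proceeds only while this endpoint returns HTTP 200
      - name: "rollback gate"
        type: rollback
        url: http://flagger-loadtester.test/gate/halt
```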
Stefan Prodan
e31ecbedf0 Merge pull request #416 from weaveworks/service-name
Implement service name override
2020-01-28 21:22:41 +02:00
stefanprodan
b982c9e2ae Fix service pod selector 2020-01-26 18:52:15 +02:00
stefanprodan
3766c843fe Add service name field to docs 2020-01-26 13:00:07 +02:00
stefanprodan
e00d9962d6 Use service name override in Kubernetes e2e tests 2020-01-26 12:59:51 +02:00
stefanprodan
940e547e88 Implement service name override
Use targetRef.name as the Kubernetes service name prefix only if service name is not specified
Warn about routing conflicts when service name changes
2020-01-26 12:48:49 +02:00
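A sketch of how the override might look in a Canary spec, assuming the field is `service.name` (per the CRD commit below); values are illustrative:

```yaml
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    # if omitted, targetRef.name is used as the service name prefix
    name: podinfo-svc
    port: 9898
```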
stefanprodan
e3ecebc9ae Add service name field to Canary CRD 2020-01-26 12:46:08 +02:00
stefanprodan
c38bd144e4 Update Kubernetes packages to v1.17.1 2020-01-25 12:51:44 +02:00
Stefan Prodan
2be6f3d678 Merge pull request #412 from weaveworks/prep-release-0.22.0
Release v0.22.0
2020-01-16 19:50:25 +02:00
stefanprodan
3d7091a56b Use Kubernetes v1.17.0 in e2e tests 2020-01-16 19:33:17 +02:00
stefanprodan
1f0305949e Update Prometheus to v2.15.2 2020-01-16 14:48:06 +02:00
stefanprodan
1332db85c5 Add selector-labels example to docs
Fix: #403
2020-01-16 14:38:50 +02:00
stefanprodan
1f06ec838d Release Flagger v0.22.0 2020-01-16 14:32:33 +02:00
Stefan Prodan
308351918c Merge pull request #411 from weaveworks/contour-up
Update Contour to v1.1 and add Linkerd header
2020-01-16 14:22:51 +02:00
stefanprodan
558a1fc6e6 Add Linkerd l5d-dst-override header to Contour routes 2020-01-16 11:26:02 +02:00
stefanprodan
bc3256e1c5 Update Contour to v1.1 2020-01-16 11:08:55 +02:00
Stefan Prodan
6eaf421f98 Merge pull request #409 from weaveworks/event-webhook
Implement event dispatching webhook
2020-01-16 11:02:32 +02:00
stefanprodan
1271f12d3f Add the event webhook type to docs 2020-01-15 14:29:51 +02:00
stefanprodan
4776b1d285 Implement events dispatching for the event webhook type 2020-01-15 14:12:22 +02:00
stefanprodan
e4dc923299 Add event webhook type to CRD 2020-01-15 14:10:38 +02:00
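A hedged sketch of the new webhook type, with an illustrative receiver URL; per the commits below, Flagger posts the canary name, namespace, phase and event metadata (timestamps in Unix milliseconds) to this endpoint:

```yaml
  canaryAnalysis:
    webhooks:
      # receives every canary event, regardless of analysis outcome
      - name: "event dispatcher"
        type: event
        url: http://event-recorder.default/
```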
Stefan Prodan
98ba38d436 Merge pull request #408 from weaveworks/e2e-updates
e2e: Update Kubernetes Kind to v0.7.0
2020-01-15 13:27:14 +02:00
stefanprodan
9d765feb38 Remove deprecated Kind command from e2e 2020-01-14 13:12:54 +02:00
stefanprodan
7e6a70bdbf Update Kubernetes Kind to v0.7.0 2020-01-14 12:55:20 +02:00
Stefan Prodan
455ec1b6e7 Merge pull request #407 from weaveworks/istio-1.4
Update Istio e2e to v1.4.3
2020-01-14 12:48:12 +02:00
Stefan Prodan
3b152a370f Merge pull request #406 from weaveworks/kube-1.17
Update Kubernetes packages to 1.17
2020-01-13 16:03:40 +02:00
stefanprodan
8d7d5e6810 Update Istio e2e to v1.4.3 2020-01-11 20:59:00 +02:00
stefanprodan
8dc4c03258 Update Kubernetes packages to 1.17 2020-01-11 18:24:31 +02:00
Stefan Prodan
0082b3307b Merge pull request #401 from mrparkers/event-webhook
adds general purpose event webhook
2020-01-11 17:54:32 +02:00
Michael Parker
b1a9c33d36 add docs 2020-01-09 16:11:03 -06:00
Michael Parker
6e06cf1074 use unix timestamp ms 2020-01-09 16:10:56 -06:00
Michael Parker
8d61e6f893 rename 2020-01-09 14:26:53 -06:00
Michael Parker
9c71e70a0a webhook tests 2020-01-09 14:25:43 -06:00
Michael Parker
91395ea1ab deepcopy canary for failed notification 2020-01-09 11:05:22 -06:00
Michael Parker
0894304fce use canary copy for new revision notification 2020-01-09 10:45:13 -06:00
Michael Parker
9cfa0ac43f update event payload schema 2020-01-07 11:11:52 -06:00
Michael Parker
1d5029d607 Merge branch 'event-webhook' of github.com:mrparkers/flagger into event-webhook 2020-01-07 09:39:13 -06:00
Michael Parker
e6d1880c93 use correct event type 2020-01-07 09:38:14 -06:00
Michael Parker
6da533090a Update controller.go 2020-01-06 19:12:39 -06:00
Michael Parker
17efcaa6d1 update helm chart 2020-01-06 16:35:52 -06:00
Michael Parker
38dfda9d8f add event-webhook command line flag 2020-01-06 16:35:42 -06:00
stefanprodan
0abc254ef2 Add Contour TLS guide to docs 2020-01-06 16:29:04 +02:00
Stefan Prodan
db427b5e54 Merge pull request #400 from weaveworks/release-0.21.0
Release 0.21.0
2020-01-06 10:23:46 +00:00
stefanprodan
b49d63bdfe Update e2e tests to Linkerd 2.6.1 2020-01-06 12:02:53 +02:00
stefanprodan
c84f7addff Release 0.21.0 2020-01-06 11:43:48 +02:00
Stefan Prodan
5d72398925 Merge pull request #397 from weaveworks/contour
Add support for Contour ingress controller
2020-01-06 08:08:47 +00:00
stefanprodan
11d16468c9 Add Contour TLS guide link to docs 2019-12-29 13:36:55 +02:00
Stefan Prodan
82b61d69b7 Merge pull request #399 from int128/pod-monitor
Add PodMonitor template to flagger chart
2019-12-24 14:35:39 +02:00
Hidetake Iwata
824391321f Add PodMonitor template to flagger chart 2019-12-24 12:55:40 +09:00
stefanprodan
a7c242e437 Add user agent match examples to Contour docs 2019-12-20 18:26:18 +02:00
stefanprodan
1544610203 Add Contour e2e test for canary rollback 2019-12-20 14:38:06 +02:00
stefanprodan
14ca775ed9 Set Contour namespace in kustomization 2019-12-20 14:33:03 +02:00
stefanprodan
f1d29f5951 Set Contour idle timeout to 5m 2019-12-20 14:32:24 +02:00
stefanprodan
ad0a66ffcc Add Contour usage docs and diagrams 2019-12-20 11:47:44 +02:00
stefanprodan
4288fa261c Add Contour reference to docs 2019-12-20 11:47:00 +02:00
stefanprodan
a537637dc9 Add Flagger Kustomize installer for Contour 2019-12-20 11:46:23 +02:00
stefanprodan
851c6701b3 Add unit tests for Contour prefix, timeout and retries 2019-12-19 19:06:47 +02:00
stefanprodan
bb4591106a Add Contour URL prefix 2019-12-19 18:48:31 +02:00
stefanprodan
7641190ecb Add Contour timeout and retry policies 2019-12-19 18:27:35 +02:00
stefanprodan
02b579f128 Add unit tests for Contour routes 2019-12-19 15:30:53 +02:00
stefanprodan
9cf6b407f1 Add unit tests for Contour router reconciliation 2019-12-19 15:15:02 +02:00
stefanprodan
c3564176f8 Add unit tests for Contour observer 2019-12-19 12:41:39 +02:00
stefanprodan
ae9cf57fd5 Add e2e tests for Contour header routing 2019-12-19 12:22:57 +02:00
stefanprodan
ae63b01373 Implement Contour A/B testing 2019-12-19 12:02:20 +02:00
stefanprodan
c066a9163b Set HTTPProxy status on init 2019-12-19 09:58:32 +02:00
stefanprodan
38b04f2690 Add Contour canary e2e tests 2019-12-19 09:38:23 +02:00
stefanprodan
ee0e7b091a Implement Contour router for traffic shifting 2019-12-18 19:29:17 +02:00
stefanprodan
e922c3e9d9 Add Contour metrics 2019-12-18 19:29:17 +02:00
stefanprodan
2c31a4bf90 Add Contour CRD to Flagger RBAC 2019-12-18 19:29:17 +02:00
stefanprodan
7332e6b173 Add Contour HTTPProxy CRD and clientset 2019-12-18 19:29:17 +02:00
Stefan Prodan
968d67a7c3 Merge pull request #386 from mumoshu/envoy-canary-analysis
feat: Support for canary analysis on deployments and services behind Envoy
2019-12-18 19:22:18 +02:00
Yusuke Kuoka
266b957fc6 Fix CrossoverServiceObserver's ID 2019-12-18 22:11:21 +09:00
Yusuke Kuoka
357ef86c8b Differentiate AppMesh observer vs Crossover observer
To not break AppMesh integration.
2019-12-18 22:03:30 +09:00
Yusuke Kuoka
d75ade5e8c Fix envoy dashboard, scheduler, and envoy metrics provider to correctly pass canary analysis and show graphs 2019-12-18 10:55:49 +09:00
Yusuke Kuoka
806b95c8ce Do send http requests only to canary for canary analysis 2019-12-18 09:06:22 +09:00
Yusuke Kuoka
bf58cd763f Do use correct envoy metrics for canary analysis 2019-12-18 09:05:37 +09:00
Yusuke Kuoka
52856177e3 Fix trafficsplits api version for envoy+crossover 2019-12-18 09:03:41 +09:00
Yusuke Kuoka
58c3cebaac Fix the dashboard and the steps to browse it 2019-12-17 20:18:33 +09:00
Yusuke Kuoka
1e5d05c3fc Improve Envoy/Crossover installation experience with the chart registry 2019-12-17 17:02:50 +09:00
Yusuke Kuoka
020129bf5c Fix misconfiguration 2019-12-17 15:45:16 +09:00
Stefan Prodan
3ff0786e1f Merge pull request #394 from weaveworks/helm-tester-v3.0.1
Update Helm tester to Helm v3.0.1
2019-12-17 08:21:57 +02:00
stefanprodan
a60dc55dad Update Helm tester to Helm v3.0.1 2019-12-17 00:10:11 +02:00
Stefan Prodan
ff6acae544 Merge pull request #391 from weaveworks/appmesh-docs-fix
App Mesh docs fixes
2019-12-06 00:13:34 +07:00
stefanprodan
09b5295c85 Fix App Mesh gateway namespace 2019-12-05 23:39:13 +07:00
stefanprodan
9e423a6f71 Fix metrics-server install for EKS 2019-12-05 23:36:58 +07:00
Stefan Prodan
0ef05edf1e Merge pull request #390 from weaveworks/e2e-kube-1.16
Update e2e tests to Kubernetes v1.16
2019-12-05 18:06:39 +07:00
stefanprodan
a59901aaa9 Update e2e tests to Kubernetes 1.16 2019-12-04 15:35:36 +07:00
Stefan Prodan
53be3e07d2 Merge pull request #389 from weaveworks/release-0.20.4
Release 0.20.4
2019-12-03 14:56:40 +07:00
stefanprodan
2eb2ae52cd Release v0.20.4 2019-12-03 14:31:07 +07:00
stefanprodan
7bcc76eca0 Update Grafana to 6.5.1 2019-12-03 14:30:03 +07:00
Yusuke Kuoka
0d531e7bd1 Fix loadtester config in the envoy doc 2019-12-01 23:29:21 +09:00
Yusuke Kuoka
08851f83c7 Make envoy + crossover installation a bit more understandable 2019-12-01 23:25:29 +09:00
Stefan Prodan
295f5d7b39 Merge pull request #384 from weaveworks/svc-init
Add initialization phase to Kubernetes router
2019-12-01 10:08:18 +07:00
Yusuke Kuoka
a828524957 Add the guide for using Envoy and Crossover for Deployment targets
Ref #385
2019-11-30 13:03:01 +09:00
Yusuke Kuoka
6661406b75 Metrics provider for deployments and services behind Envoy
Assumes `envoy:smi` as the mesh provider name as I've successfully tested the progressive delivery for Envoy + Crossover with it.

This enhances Flagger to translate it to the metrics provider name of `envoy` for deployment targets, or `envoy:service` for service targets.

The `envoy` metrics provider is equivalent to `appmesh`, as both rely on the same set of standard metrics exposed by Envoy itself.

The `envoy:service` provider is almost the same as the `envoy` provider, but removes the condition on the pod name, as we only need to filter on the backing service name = envoy_cluster_name. We don't consider other Envoy xDS implementations that use anything different from the original service names as `envoy_cluster_name`, for now.

Ref #385
2019-11-30 13:03:01 +09:00
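Based on the commit message above, the provider selection would look like this (sketch; only the provider name is taken from the commit):

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
spec:
  # translated internally to the `envoy` metrics provider for Deployment
  # targets and to `envoy:service` for Service targets
  provider: envoy:smi
```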
stefanprodan
8766523279 Add initialization phase to Kubernetes router
Create Kubernetes services before deployments because Envoy's readiness depends on existing ClusterIPs
2019-11-27 22:15:04 +02:00
Stefan Prodan
b02a6da614 Merge pull request #383 from weaveworks/e2e-ups
Update nginx-ingress to 1.26.0
2019-11-27 18:51:27 +02:00
stefanprodan
89d7cb1b04 Update nginx-ingress to 1.26.0 2019-11-27 17:48:37 +02:00
Stefan Prodan
59d18de753 Merge pull request #372 from mumoshu/svc-support
feat: Canary-release anything behind K8s service
2019-11-27 16:44:56 +02:00
Yusuke Kuoka
e1d8703a15 Refactor to merge KubernetesServiceRouter into ServiceController
The current design is that everything related to managing the targeted resource should go into the respective implementation of `canary.Controller`. In the service-canary use-case our target is a Service, so rather than splitting and scattering the logic over Controller and Router, everything should naturally go into `ServiceController`. Maybe at the time of writing the first implementation, I was confusing the target service with the router.
2019-11-27 22:40:40 +09:00
Yusuke Kuoka
1ba595bc6f feat: Canary-release anything behind K8s service
Resolves #371

---

This adds the support for `corev1.Service` as the `targetRef.kind`, so that we can use Flagger just for canary analysis and traffic-shifting on existing and pre-created services. Flagger doesn't touch deployments and HPAs in this mode.

This is useful for keeping full control over the resources backing the service to be canary-released, including pods (behind a ClusterIP service) and external services (behind an ExternalName service).

Major use-cases in my mind are:

- Canary-release a K8s cluster. You create two clusters and a master cluster. In the master cluster, you create two `ExternalName` services pointing to (the hostname of the load balancer of the targeted app instance in) each cluster. Flagger runs on the master cluster and helps safely roll out a new K8s cluster by doing a canary release on the `ExternalName` service.
- You want annotations and labels added to the service for integrating with things like external LBs (without extending Flagger to support customizing every aspect of the K8s service it manages).

**Design**:

A canary release on a K8s service is almost the same as one on a K8s deployment. The only fundamental difference is that it operates only on a set of K8s services.

For example, one may start by creating two Helm releases for `podinfo-blue` and `podinfo-green`, and a K8s service `podinfo`. The `podinfo` service should initially have the same `Spec` as that of `podinfo-blue`.

On a new release, you update `podinfo-green`, then trigger Flagger by updating the K8s service `podinfo` so that it points to pods or `externalName` as declared in `podinfo-green`. Flagger does the rest. The end result is the traffic to `podinfo` is gradually and safely shifted from `podinfo-blue` to `podinfo-green`.

**How it works**:

Under the hood, Flagger maintains two K8s services, `podinfo-primary` and `podinfo-canary`. Compared to canaries on K8s deployments, it doesn't create the service named `podinfo`, as it is already provided by YOU.

Once Flagger detects the change in the `podinfo` service, it updates the `podinfo-canary` service and the routes, then analyzes the canary. On successful analysis, it promotes the canary service to the `podinfo-primary` service. You expose the `podinfo` service via any L7 ingress solution or a service mesh so that the traffic is managed by Flagger for safe deployments.

**Giving it a try**:

To give it a try, create a `Canary` as usual, but its `targetRef` pointed to a K8s service:

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
spec:
  provider: kubernetes
  targetRef:
    apiVersion: core/v1
    kind: Service
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed checks before rollback
    threshold: 2
    # number of checks to run before promotion
    iterations: 2
    # Prometheus checks based on
    # http_request_duration_seconds histogram
    metrics: []
```

Create a K8s service named `podinfo`, and update it. Now watch for the services `podinfo`, `podinfo-primary`, `podinfo-canary`.

Flagger tracks the `podinfo` service for changes. Upon any change, it reconciles the `podinfo-primary` and `podinfo-canary` services. `podinfo-canary` always replicates the latest `podinfo`. In contrast, `podinfo-primary` replicates the latest successful `podinfo-canary`.

**Notes**:

- For the canary cluster use-case, we would need to write a K8s operator that, e.g. for App Mesh, syncs `ExternalName` services to App Mesh `VirtualNode`s. But that's another story!
2019-11-27 09:07:29 +09:00
Stefan Prodan
446a2b976c Merge pull request #380 from weaveworks/skip-primary-check
Skip primary check on skip analysis
2019-11-26 14:25:57 +02:00
stefanprodan
9af6ade54d Skip primary check on skip analysis 2019-11-25 23:48:22 +02:00
Stefan Prodan
3fbe62aa47 Merge pull request #378 from weaveworks/refac-deployer
Refactor canary package
2019-11-25 21:03:16 +02:00
stefanprodan
4454c9b5b5 Add canary factory for Kubernetes targets
- extract Kubernetes operations to controller interface
- implement controller interface for kind Deployment
2019-11-25 18:45:19 +02:00
Stefan Prodan
c2cf9bf4b1 Merge pull request #373 from sfxworks/deployment-fix
Upgrade deployment spec to apps v1
2019-11-23 16:55:14 +00:00
Samuel Walker
3afc7978bd upgrade deployment spec to apps v1 2019-11-18 11:10:15 -05:00
stefanprodan
7a0ba8b477 Update v0.20.3 changelog 2019-11-13 14:06:14 +02:00
Stefan Prodan
0eb21a98a5 Merge pull request #368 from weaveworks/wrk
Add wrk to load tester tools
2019-11-13 13:59:28 +02:00
stefanprodan
2876092912 Update flagger-appmesh-gateway to 1.1.0 2019-11-13 13:07:59 +02:00
stefanprodan
3dbfa34a53 Add wrk to load tester tools
- add wrk v4.0.2
- update Helm v2 to 2.16.1
- update Helm v3 to 3.0.0-rc.3
2019-11-13 12:54:47 +02:00
Stefan Prodan
656f81787c Merge pull request #367 from andrew-demb/patch-1
Fixed readiness/liveness probe example in docs
2019-11-13 12:10:19 +02:00
Andrii Dembitskyi
920d558fde Fixed readiness/liveness probe example in docs 2019-11-13 09:24:12 +02:00
stefanprodan
638a9f1c93 Fix App Mesh gateway deployment 2019-11-12 13:18:45 +02:00
stefanprodan
f1c3ee7a82 Release v0.20.3 2019-11-11 19:14:05 +02:00
Stefan Prodan
878f106573 Merge pull request #365 from weaveworks/appmesh-gateway-chart
Add App Mesh gateway chart
2019-11-08 21:40:21 +02:00
stefanprodan
945eded6bf Add the App Mesh Gateway to docs 2019-11-08 21:02:51 +02:00
stefanprodan
f94f9c23d6 Patch cluster role bindings in kustomization 2019-11-08 12:40:14 +02:00
stefanprodan
527b73e8ef Use App Mesh Prometheus in kustomization 2019-11-08 12:39:45 +02:00
stefanprodan
d4555c5919 Use weaveworks logo in Helm charts 2019-11-08 12:38:47 +02:00
stefanprodan
560bb93e3d Add App Mesh gateway Helm chart 2019-11-08 12:38:06 +02:00
Stefan Prodan
e7fc72e6b5 Merge pull request #364 from weaveworks/release-0.20.2
Release v0.20.2
2019-11-07 12:08:18 +02:00
stefanprodan
4203232b05 Release v0.20.2 2019-11-07 11:34:25 +02:00
stefanprodan
a06aa05201 Add canary namespace to Linkerd webhooks example 2019-11-07 11:34:00 +02:00
Stefan Prodan
8e582e9b73 Merge pull request #363 from weaveworks/no-hpa
Use the specified replicas when scaling up the canary
2019-11-07 10:44:31 +02:00
stefanprodan
0e9fe8a446 Remove the traffic mention from the custom metrics error log
Fix: #361
2019-11-07 09:36:38 +02:00
stefanprodan
27b4bcc648 Use the specified replicas when scaling up the canary 2019-11-07 09:34:53 +02:00
Stefan Prodan
614b7c74c4 Merge pull request #358 from weaveworks/appmesh-gateway
Expose canaries on public domains with App Mesh Gateway
2019-11-06 13:21:20 +02:00
Stefan Prodan
5901129ec6 Merge pull request #359 from KeisukeYamashita/fix-typo-in-how-it-works
Fix typo in section "Webhook" of how-it-works.md
2019-11-06 13:20:53 +02:00
KeisukeYamashita
ded14345b4 doc(how-it-works): fix typo ca to can in how it works doc 2019-11-05 17:39:45 +09:00
stefanprodan
dd272c6870 Expose canaries on public domains with App Mesh Gateway
- map canary service hosts to domain gateway annotation
- map canary retries and timeout to gateway annotations
2019-11-04 18:26:28 +02:00
Stefan Prodan
b31c7c6230 Merge pull request #356 from weaveworks/docs-cleanup
Docs cleanup
2019-11-04 00:52:47 +02:00
stefanprodan
b0297213c3 Use kustomize in Istio docs 2019-11-04 00:35:28 +02:00
stefanprodan
d0fba2d111 Update Istio SMI tutorial 2019-11-04 00:13:19 +02:00
stefanprodan
9924cc2152 Update NGINX usage docs 2019-11-04 00:12:51 +02:00
Stefan Prodan
008a74f86c Merge pull request #354 from weaveworks/prep-0.20.1
Release v0.20.1
2019-11-03 12:29:14 +02:00
stefanprodan
4ca110292f Add v0.20.1 changelog 2019-11-03 11:57:58 +02:00
stefanprodan
55b4c19670 Release v0.20.1 2019-11-03 11:47:16 +02:00
stefanprodan
8349dd1cda Release load tester v0.11.0
- tool updates: Helm v2.15.1, Helm v3.0.0-rc.2, rimusz helm-tiller v0.9.3, gRPC probe v0.3.1
- add hey test during build
2019-11-03 11:46:18 +02:00
Stefan Prodan
402fb66b2a Merge pull request #353 from weaveworks/fix-promql
Fix Prometheus query escape
2019-11-03 11:04:43 +02:00
stefanprodan
f991274b97 Fix Prometheus query escape
Removing whitespace without trimming spaces
2019-11-03 00:01:32 +02:00
Stefan Prodan
0d94a49b6a Merge pull request #350 from laszlocph/update-hey-link
Updating hey release link
2019-10-30 09:01:56 +02:00
Laszlo Fogas
7c14225442 Updating hey release link 2019-10-30 06:40:57 +01:00
stefanprodan
2af0a050bc Fix Prometheus URL in EKS install docs 2019-10-29 18:32:15 +02:00
Stefan Prodan
582f8d6abd Merge pull request #346 from weaveworks/e2e-up
e2e testing: update providers
2019-10-28 16:26:06 +02:00
stefanprodan
eeea3123ac Update e2e NGINX ingress to v1.24.4 2019-10-28 16:08:00 +02:00
stefanprodan
51fe43e169 Update e2e Helm to v2.15.1 2019-10-28 15:32:02 +02:00
stefanprodan
6e6b127092 Update loadtester Helm to v3.0.0-beta.5 2019-10-28 15:31:17 +02:00
stefanprodan
c9bacdfe05 Update Istio to v1.3.3 2019-10-28 15:19:17 +02:00
stefanprodan
f56a69770c Update Linkerd to v2.6.0 2019-10-28 14:42:16 +02:00
Stefan Prodan
0196124c9f Merge pull request #343 from weaveworks/prep-0.20.0
Release v0.20.0
2019-10-22 19:11:59 +03:00
stefanprodan
63756d9d5f Add changelog for v0.20.0 2019-10-22 17:54:18 +03:00
stefanprodan
8e346960ac Add blue/green service mesh docs 2019-10-22 16:57:49 +03:00
stefanprodan
1b485b3459 Release v0.20.0 2019-10-22 09:39:14 +03:00
Stefan Prodan
ee05108279 Merge pull request #344 from weaveworks/gloo-refactoring
Gloo integration refactoring
2019-10-22 09:38:19 +03:00
stefanprodan
dfaa039c9c Update Gloo docs 2019-10-22 00:48:15 +03:00
stefanprodan
46579d2ee6 Refactor Gloo integration
- build Gloo UpstreamGroup clientset
- drop solo-io, envoyproxy, hcl, consul, opencensus, apiextensions deps
- use the native routers with supergloo
2019-10-21 16:33:47 +03:00
Stefan Prodan
f372523fb8 Merge pull request #342 from weaveworks/prom-config
Implement metrics server override
2019-10-17 17:24:24 +03:00
stefanprodan
5e434df6ea Exclude high cardinality cAdvisor metrics 2019-10-17 13:02:18 +03:00
stefanprodan
d6c5bdd241 Implement metrics server override 2019-10-17 11:37:54 +03:00
stefanprodan
cdcd97244c Add the metrics server field to CRD 2019-10-17 11:36:25 +03:00
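A sketch of the per-canary override added here, assuming the CRD field is `metricsServer` and that it takes a Prometheus URL:

```yaml
spec:
  # overrides the global Prometheus endpoint for this canary only
  metricsServer: http://prometheus.monitoring:9090
```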
Stefan Prodan
60c4bba263 Merge pull request #340 from weaveworks/appmesh-ab-testing
Implement App Mesh A/B testing
2019-10-17 10:54:31 +03:00
stefanprodan
2b73bc5e38 Fix A/B testing examples 2019-10-17 09:12:39 +03:00
stefanprodan
03652dc631 Add App Mesh http match headers tests 2019-10-16 15:43:26 +03:00
stefanprodan
00155aff37 Add App Mesh A/B testing example to docs 2019-10-16 10:49:33 +03:00
stefanprodan
206c3e6d7a Implement App Mesh A/B testing 2019-10-15 16:39:54 +03:00
Stefan Prodan
8345fea812 Merge pull request #338 from weaveworks/appmesh-up
Implement App Mesh HTTP retry policy
2019-10-15 08:45:49 +03:00
stefanprodan
c11dba1e05 Add retry policy to docs and examples 2019-10-14 21:03:57 +03:00
stefanprodan
7d4c3c5814 Implement App Mesh HTTP retry policy 2019-10-14 20:27:48 +03:00
stefanprodan
9b36794c9d Update App Mesh CRD 2019-10-14 20:26:46 +03:00
Stefan Prodan
1f34c656e9 Merge pull request #336 from weaveworks/appmesh-router-fix
Generate unique names for App Mesh virtual routers and routes
2019-10-14 19:25:08 +03:00
stefanprodan
9982dc9c83 Generate unique names for App Mesh virtual routers and routes 2019-10-14 19:07:10 +03:00
Stefan Prodan
780f3d2ab9 Merge pull request #334 from weaveworks/env-vars
Allow setting Slack and Teams URLs with env vars
2019-10-10 09:05:04 +03:00
stefanprodan
1cb09890fb Add env to chart options to be used for Slack and Teams URLs 2019-10-09 16:53:34 +03:00
stefanprodan
faae6a7c3b Add env vars for Slack and Teams URLs 2019-10-09 16:03:30 +03:00
Stefan Prodan
d4250f3248 Merge pull request #333 from weaveworks/default-labels
Add the app/name label to services and primary deployment
2019-10-09 13:45:14 +03:00
stefanprodan
a8ee477b62 Add selector labels option to Helm chart 2019-10-09 13:22:10 +03:00
stefanprodan
673b6102a7 Add the name label to ClusterIP services and primary deployment 2019-10-09 13:01:15 +03:00
Stefan Prodan
316de42a2c Merge pull request #331 from weaveworks/prep-v0.19.0
Release v0.19.0
2019-10-08 13:22:16 +03:00
stefanprodan
dfb4b35e6c Release v0.19.0 2019-10-08 12:02:37 +03:00
Stefan Prodan
61ab596d1b Merge pull request #327 from weaveworks/target-port
Implement canary service target port
2019-10-08 11:10:04 +03:00
stefanprodan
3345692751 Add service target port to docs 2019-10-07 11:56:03 +03:00
stefanprodan
dff9287c75 Add target port to NGINX e2e tests 2019-10-07 10:01:28 +03:00
stefanprodan
b5fb7cdae5 Add target port number to Gloo e2e tests
Update Gloo to v0.20.2
Enable Gloo discovery. Fix: #328
2019-10-07 09:34:23 +03:00
stefanprodan
2e79817437 Add target port number e2e test for Linkerd 2019-10-06 13:35:58 +03:00
stefanprodan
5f439adc36 Use kustomize in Linkerd e2e tests 2019-10-06 12:58:26 +03:00
stefanprodan
45df96ff3c Format imports 2019-10-06 12:54:01 +03:00
stefanprodan
98ee150364 Add target port and gRPC e2e tests for Linkerd 2019-10-06 12:26:03 +03:00
stefanprodan
d328a2146a Fix loadtester image tag 2019-10-06 11:43:25 +03:00
stefanprodan
4513f2e8be Use Docker Hub in e2e tests 2019-10-06 11:42:49 +03:00
stefanprodan
095fef1de6 Release loadtester v0.9.0 with gRPC health check 2019-10-06 11:26:42 +03:00
stefanprodan
754f02a30f Add gRPC acceptance test to Istio e2e tests 2019-10-06 11:03:00 +03:00
stefanprodan
01a4e7f6a8 Add service target port to Istio e2e tests 2019-10-06 11:02:05 +03:00
stefanprodan
6bba84422d Add service target port to Kubernetes e2e tests 2019-10-06 10:44:42 +03:00
stefanprodan
26190d0c6a Use podinfo v3.1.0 for e2e tests 2019-10-06 10:42:30 +03:00
stefanprodan
2d9098e43c Add target port number and name tests 2019-10-06 10:31:50 +03:00
stefanprodan
7581b396b2 Implement service target port 2019-10-06 10:21:34 +03:00
stefanprodan
67a6366906 Add service.targetPort field to Canary CRD 2019-10-06 10:04:21 +03:00
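The `service.targetPort` field named in the commit above would be used like this (sketch; values are illustrative):

```yaml
  service:
    # port exposed by the generated ClusterIP services
    port: 9898
    # container port number or name the traffic is routed to
    targetPort: http
```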
Stefan Prodan
5605fab740 Merge pull request #326 from weaveworks/force-bg
Enforce blue/green when using kubernetes networking
2019-10-05 18:55:13 +03:00
stefanprodan
b76d0001ed Move Istio routing docs to FAQ 2019-10-05 18:13:40 +03:00
stefanprodan
625eed0840 Enforce blue/green when using kubernetes networking
Use blue/green with ten iterations and warn that progressive traffic shifting and HTTP headers routing are not compatible with Kubernetes L4 networking.
2019-10-05 17:59:34 +03:00
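Per the commit message above, a Canary with the `kubernetes` provider falls back to an iteration-based blue/green analysis; a sketch using the ten iterations mentioned (other values illustrative):

```yaml
spec:
  provider: kubernetes
  canaryAnalysis:
    interval: 30s
    threshold: 2
    # with L4 networking there is no progressive traffic shifting or
    # HTTP header routing, only iteration-based blue/green
    iterations: 10
```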
stefanprodan
37f9151de3 Add traffic mirroring documentation 2019-10-05 16:23:43 +03:00
Stefan Prodan
20af98e4dc Merge pull request #325 from weaveworks/appmesh-grcp
Allow gRPC protocol for App Mesh
2019-10-05 12:49:07 +03:00
stefanprodan
76800d0ed0 Update canary spec in docs 2019-10-05 12:15:54 +03:00
stefanprodan
3103bde7f7 Use the App Mesh Prometheus chart in docs 2019-10-05 11:52:41 +03:00
stefanprodan
298d8c2d65 Allow gRPC protocol for App Mesh
Use the canary service port name to set http or grpc protocol on App Mesh virtual nodes and virtual routers
2019-10-05 11:21:43 +03:00
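A sketch of the protocol selection described above, assuming the port name field is `service.portName`:

```yaml
  service:
    port: 9898
    # a name of `grpc` sets the gRPC protocol on the App Mesh virtual
    # nodes and virtual routers; anything else defaults to http
    portName: grpc
```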
Stefan Prodan
5cdacf81e3 Merge pull request #324 from weaveworks/fix-ports-order
Fix port discovery diff
2019-10-05 11:03:35 +03:00
stefanprodan
2141d88ce1 Enable Prometheus scraping of Flagger metrics 2019-10-05 10:45:35 +03:00
stefanprodan
e8a2d4be2e Fix port discovery diff
Sort service ports by port number before comparing slices
2019-10-05 10:42:01 +03:00
Stefan Prodan
9a9baadf0e Merge pull request #311 from andrewjjenkins/mirror
Add traffic mirroring for Istio service mesh
2019-10-05 10:34:25 +03:00
Andrew Jenkins
a21e53fa31 Document traffic mirroring in the FAQ 2019-10-03 14:33:49 -06:00
Andrew Jenkins
61f8aea7d8 add Traffic Mirroring to Blue/Green deployments
Traffic mirroring for blue/green will mirror traffic for the entire
canary analysis phase of the blue/green deployment.
2019-10-03 14:33:49 -06:00
Andrew Jenkins
e384b03d49 Add Traffic Mirroring for Istio Service Mesh
Traffic mirroring is a pre-stage for canary deployments.  When mirroring
is enabled, at the beginning of a canary deployment traffic is mirrored
to the canary instead of shifted for one canary period.  The service
mesh should mirror by copying the request and sending one copy to the
primary and one copy to the canary; only the response from the primary
is sent to the user.  The response from the canary is only used for
collecting metrics.

Once the mirror period is over, the canary proceeds as usual, shifting
traffic from primary to canary until complete.

Added TestScheduler_Mirroring unit test.
2019-10-03 14:33:49 -06:00
Stefan Prodan
0c60cf39f8 Merge pull request #323 from weaveworks/prep-0.18.6
Release v0.18.6
2019-10-03 15:19:51 +03:00
stefanprodan
268fa9999f Release v0.18.6 2019-10-03 15:00:12 +03:00
stefanprodan
ff7d4e747c Update Linkerd to v2.5.0 2019-10-03 14:48:26 +03:00
stefanprodan
121fc57aa6 Update Prometheus to v2.12.0 2019-10-03 14:46:34 +03:00
Stefan Prodan
991fa1cfc8 Merge pull request #322 from weaveworks/appmesh-acceptance-testing
Add support for acceptance testing when using App Mesh
2019-10-03 14:31:51 +03:00
stefanprodan
fb2961715d Add App Mesh acceptance tests example to docs 2019-10-03 12:11:11 +03:00
stefanprodan
74c1c2f1ef Add App Mesh request duration metric check to docs
Fix: #143 (depends on App Mesh Envoy >1.11)
2019-10-03 11:52:56 +03:00
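The request duration check referenced above maps to Flagger's built-in latency metric; a sketch, with threshold and interval values chosen for illustration:

```yaml
    metrics:
      - name: request-duration
        # maximum request duration in milliseconds
        threshold: 500
        interval: 1m
```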
stefanprodan
4da6c1b6e4 Create canary virtual service during App Mesh reconciliation
Allows the canary pods to be accessed from inside the mesh during the canary analysis for conformance and load testing
2019-10-03 11:43:47 +03:00
Stefan Prodan
fff03b170f Merge pull request #320 from bvwells/json-tag
Fix JSON tag on virtual node condition
2019-10-03 11:07:05 +03:00
Stefan Prodan
434acbb71b Merge pull request #319 from weaveworks/appmesh-docs
Update App Mesh install docs
2019-10-03 10:55:45 +03:00
Ben Wells
01962c32cd Fix JSON tag on virtual node condition 2019-10-03 08:46:39 +01:00
stefanprodan
6b0856a054 Update App Mesh Envoy ingress to v1.11.1 2019-10-03 10:02:58 +03:00
stefanprodan
708dbd6bbc Use official App Mesh Helm charts in docs 2019-10-03 09:52:42 +03:00
Stefan Prodan
e3801cbff6 Merge pull request #318 from bvwells/notifier-fields
Fix slack/teams notification fields
2019-10-03 09:50:25 +03:00
Ben Wells
fc68635098 Fix slack/teams notification of fields 2019-10-02 22:35:16 +01:00
Stefan Prodan
6706ca5d65 Merge pull request #317 from weaveworks/appmesh-kustomize
Add Kustomize installer for App Mesh
2019-10-02 21:40:04 +03:00
stefanprodan
44c2fd57c5 Add App Mesh Kustomize installer to docs 2019-10-02 20:12:04 +03:00
stefanprodan
a9aab3e3ac Add Kustomize installer for App Mesh 2019-10-02 20:05:52 +03:00
Stefan Prodan
6478d0b6cf Merge pull request #316 from weaveworks/prep-0.18.5
Release v0.18.5
2019-10-02 18:10:01 +03:00
stefanprodan
958af18dc0 Add changelog for v0.18.5 2019-10-02 17:51:06 +03:00
stefanprodan
54b8257c60 Release v0.18.5 2019-10-02 16:51:08 +03:00
Stefan Prodan
e86f62744e Merge pull request #315 from nilscan/appmesh-init
Skip primary check for appmesh
2019-10-02 09:17:08 +03:00
nilscan
0734773993 Skip primary check for appmesh 2019-10-02 14:29:48 +13:00
Stefan Prodan
888cc667f1 Merge pull request #314 from weaveworks/podinfo-updates
Update podinfo to v3.1.0 and go to v1.13
2019-09-27 17:20:52 +03:00
stefanprodan
053d0da617 Remove thrift replace from go.mod 2019-09-27 16:59:15 +03:00
stefanprodan
7a4e0bc80c Update go mod to 1.13 2019-09-27 16:53:55 +03:00
stefanprodan
7b7306584f Update alpine to 3.10 2019-09-27 16:33:56 +03:00
stefanprodan
d6027af632 Update go to 1.13 in CI 2019-09-27 16:33:06 +03:00
stefanprodan
761746af21 Update podinfo to v3.1.0 2019-09-27 15:52:30 +03:00
stefanprodan
510a6eaaed Add JWT token issuing test to podinfo chart 2019-09-27 15:19:03 +03:00
Andrew Jenkins
655df36913 Extend test SetupMocks() to take arbitrary Canary resources
SetupMocks() currently takes a bool switch that tells it to configure
against either a shifting canary or an A-B canary.  I'll need a third
canary that has mirroring turned on so I changed this to an interface
that just takes the canary to configure (and configs the default
shifting canary if you pass nil).
2019-09-24 16:15:45 -06:00
Andrew Jenkins
2e079ba7a1 Add mirror to router interface and implement for istio router
The mirror option will be used to tell routers to configure traffic
mirroring.  Implement mirror for GetRoutes and SetRoutes for Istio.  For
other routers, GetRoutes always returns mirror == false, and SetRoutes
ignores mirror.

After this change there is no behavior change because no code sets
mirror true (yet).

Enhanced TestIstioRouter_SetRoutes and TestIstioRouter_GetRoutes.
2019-09-24 16:15:45 -06:00
Stefan Prodan
9df6bfbb5e Merge pull request #310 from weaveworks/canary-promotion
Canary promotion improvements
2019-09-24 14:19:43 +03:00
stefanprodan
2ff86fa56e Fix canary weight max value 2019-09-24 10:16:22 +03:00
stefanprodan
1b2e0481b9 Add promoting phase to status condition 2019-09-24 09:57:42 +03:00
stefanprodan
fe96af64e9 Add canary phases tests 2019-09-23 22:24:40 +03:00
stefanprodan
77d8e4e4d3 Use the promotion phase in A/B testing and Blue/Green 2019-09-23 22:14:44 +03:00
stefanprodan
800b0475ee Run the canary promotion on a separate stage
After the analysis finishes, Flagger will do the promotion and wait for the primary rollout to finish before routing all the traffic back to it. This ensures a smooth transition to the new version, avoiding dropped in-flight requests.
2019-09-23 21:57:24 +03:00
stefanprodan
b58e13809c Add promoting phase to canary status conditions 2019-09-23 21:48:09 +03:00
Stefan Prodan
9845578cdd Merge pull request #307 from weaveworks/confirm-promotion
Implement confirm-promotion hook
2019-09-23 12:32:52 +03:00
stefanprodan
96ccfa54fb Add confirm-promotion hook example to docs 2019-09-22 14:14:35 +03:00
stefanprodan
b8a64c79be Add confirm-promotion webhook to e2e tests 2019-09-22 13:44:55 +03:00
stefanprodan
4a4c261a88 Add confirm-promotion webhook type to CRD 2019-09-22 13:36:07 +03:00
stefanprodan
8282f86d9c Implement confirm-promotion hook
The confirm promotion hooks are executed right before the promotion step. The canary promotion is paused until the hooks return HTTP 200. While the promotion is paused, Flagger will continue to run the metrics checks and load tests.
2019-09-22 13:23:19 +03:00
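A sketch of the hook described above; the URL points at the load tester's gating API (see the `/gate/check` endpoint introduced in #251 further down), and the host is illustrative:

```yaml
  canaryAnalysis:
    webhooks:
      # promotion is paused until this endpoint returns HTTP 200
      - name: "promotion gate"
        type: confirm-promotion
        url: http://flagger-loadtester.test/gate/check
```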
Stefan Prodan
2b6966d8e3 Merge pull request #306 from weaveworks/e2e-updates
Update end-to-end tests to Istio 1.3.0
2019-09-22 12:37:05 +03:00
stefanprodan
c667c947ad Istio e2e: update job names 2019-09-22 12:12:06 +03:00
stefanprodan
105b28bf42 Update e2e to Kind 0.5.1 and Istio to 1.3.0 2019-09-22 12:05:35 +03:00
Stefan Prodan
37a1ff5c99 Merge pull request #305 from weaveworks/service-mesh-blue-green
Implement B/G for service mesh providers
2019-09-22 12:01:10 +03:00
stefanprodan
d19a070faf Add canary status checks to Istio e2e tests 2019-09-22 11:45:07 +03:00
stefanprodan
d908355ab3 Add Blue/Green e2e tests 2019-09-22 09:32:25 +03:00
stefanprodan
a6d86f2e81 Skip mesh routers for B/G when provider is kubernetes 2019-09-22 00:48:42 +03:00
stefanprodan
9d856a4f96 Implement B/G for service mesh providers
Blue/Green steps:
- scale up green
- run conformance tests on green
- run load tests and metric checks on green
- route traffic to green
- promote green spec over blue
- wait for blue rollout
- route traffic to blue
2019-09-21 21:21:33 +03:00
Stefan Prodan
a7112fafb0 Merge pull request #304 from nilscan/pod-annotations
Add pod annotations on all deployments
2019-09-19 02:11:30 +01:00
nilscan
93f9e51280 Add pod annotations on all deployments 2019-09-19 12:42:22 +12:00
Stefan Prodan
65e9a402cf Merge pull request #297 from weaveworks/prep-0.18.4
Release v0.18.4
2019-09-08 11:37:47 +03:00
stefanprodan
f7513b33a6 Release v0.18.4 2019-09-08 11:21:16 +03:00
Stefan Prodan
0b3fa517d3 Merge pull request #296 from weaveworks/helmv3-tester
Implement Helm v3 tester
2019-09-08 09:49:52 +03:00
stefanprodan
507075920c Implement Helm v3 tester 2019-09-08 09:33:34 +03:00
Stefan Prodan
a212f032a6 Merge pull request #295 from weaveworks/grpc-hc
Add gRPC health check to load tester
2019-09-06 17:00:22 +03:00
stefanprodan
eb8755249f Update cert-manager to v0.10 2019-09-06 16:44:39 +03:00
stefanprodan
73bb2a9fa2 Release loadtester 0.7.1 2019-09-06 16:21:22 +03:00
stefanprodan
5d3ffa8c90 Add grpc_health_probe to load tester image 2019-09-06 16:19:23 +03:00
Stefan Prodan
87f143f5fd Merge pull request #293 from kislitsyn/nginx-annotations-prefix
Add annotations prefix for ingresses
2019-09-06 13:22:42 +03:00
Anton Kislitcyn
f56b6dd6a7 Add annotations prefix for ingresses 2019-09-06 11:36:06 +02:00
Stefan Prodan
5e40340f9c Merge pull request #289 from nilscan/owide
Add Wide columns in CRD
2019-09-04 14:59:17 +03:00
nilscan
2456737df7 Add Wide columns in CRD 2019-09-03 12:54:14 +12:00
stefanprodan
1191d708de Fix Prometheus GKE install docs 2019-08-30 13:13:36 +03:00
Stefan Prodan
4d26971fc7 Merge pull request #286 from jwenz723/patch-1
Enhanced error logging
2019-08-29 09:14:16 +03:00
Jeff Wenzbauer
0421b32834 Enhanced error logging
Updated the formatting of the `out` to be logged as a string rather than a bunch of bytes.
2019-08-28 12:43:08 -06:00
Stefan Prodan
360dd63e49 Merge pull request #282 from weaveworks/prep-0.18.3
Release 0.18.3
2019-08-22 18:53:15 +03:00
stefanprodan
f1670dbe6a Add 0.18.3 changelog 2019-08-22 18:39:47 +03:00
stefanprodan
e7ad5c0381 Release load tester v0.7.0 2019-08-22 18:31:05 +03:00
stefanprodan
2cfe2a105a Release Flagger v0.18.3 2019-08-22 18:30:46 +03:00
Stefan Prodan
bc83cee503 Merge pull request #278 from mjallday/patch-1
Embedding Health Check Protobuf
2019-08-22 18:19:58 +03:00
Stefan Prodan
5091d3573c Merge pull request #281 from weaveworks/fix-appmesh-crd
Fix App Mesh backends validation in CRD
2019-08-22 10:02:38 +03:00
Marshall Jones
ffe5dd91c5 Add an example and fix path to downloaded proto file 2019-08-21 15:15:01 -07:00
stefanprodan
d76b560967 Bump podinfo version in the App Mesh demo 2019-08-21 21:52:36 +03:00
stefanprodan
f062ef3a57 Fix App Mesh backends validation in CRD 2019-08-21 21:45:36 +03:00
Stefan Prodan
5fc1baf4df Merge pull request #280 from vbehar/loadtester-helm-tillerless
loadtester: add support for tillerless helm
2019-08-21 17:25:44 +03:00
Vincent Behar
777b77b69e loadtester: add support for tillerless helm
- upgrade Helm to 2.14 and install the [helm-tiller](https://github.com/rimusz/helm-tiller) plugin to run in "tillerless" mode, with a local Tiller instance
- also add support for creating RBAC resources in the loadtester chart, because when running in tillerless mode the pod service account is used instead of Tiller's, so we need to give it specific permissions

this allows the loadtester to run `helm test` in tillerless mode, for example with `helm tiller run -- helm test`
2019-08-21 15:54:49 +02:00
Marshall Jones
5d221e781a Propose Embedding Health Check Proto
Copy this file https://github.com/grpc/grpc-proto/blob/master/grpc/health/v1/health.proto into the ghz folder for use when promoting a canary running a gRPC service.

This repo describes the file:

> This repository contains common protocol definitions for peripheral services around gRPC such as health checking, load balancing etc..

Any app that chooses to implement this interface (which IMO should be any gRPC service exposing a health check) will then be able to use this without providing reflection.

I'm not a gRPC expert, so I'm not sure what the best practices are around allowing reflection on the server, but this seems like a simple solution for those who choose not to enable it.

Slack discussion on the weave users slack is here - https://weave-community.slack.com/archives/CGLQLLH9Q/p1566358441123400

You can utilize this file like so 

`/ghz --proto=/tmp/health.proto --call=grpc.health.v1.Health/Check ...`
2019-08-20 20:47:30 -07:00
Stefan Prodan
ddab72cd59 Merge pull request #276 from weaveworks/podinfo
Update podinfo to v2.0
2019-08-14 10:46:06 +03:00
stefanprodan
87d0b33327 Add provider field to nginx and gloo docs 2019-08-14 10:14:00 +03:00
stefanprodan
225a9015bb Update podinfo to v2.0 2019-08-14 09:28:36 +03:00
Stefan Prodan
c0b60b1497 Merge pull request #272 from weaveworks/appmesh
Set HTTP listeners for AppMesh virtual routers
2019-08-13 09:48:49 +03:00
Stefan Prodan
0463c19825 Merge pull request #275 from hiddeco/build/codegen
Support non `$GOPATH/src` location for codegen
2019-08-13 09:48:27 +03:00
Hidde Beydals
8e70aa90c1 Support non $GOPATH/src location for codegen
This commit fixes two things:

- it ensures the code generation works no matter the location of the
  project directory
- as a side effect; fixes the `hack/verify-codegen.sh` ran during CI

The previous script (or more specific: the `code-generator` library)
made the assumption during execution that the project was placed
inside `$GOPATH/src` and made the modifications there.

The idea of Go Modules is however that a project and/or package can
be placed anywhere, and this is also what the CI did, resulting in a
comparison of two identical `cp -r`-copied directories, giving a
false green light on every CI run.

To work around this limitation in `code-generator`: create a
temporary directory, use this as an output base and copy
everything back once generated.
2019-08-12 22:41:10 +02:00
stefanprodan
0a418eb88a Add notifier tests 2019-08-12 09:47:11 +03:00
stefanprodan
040dbb8d03 Add http listener to virtual router reconciliation 2019-08-10 11:04:15 +03:00
stefanprodan
64f2288bdd Add listeners to AppMesh virtual router 2019-08-10 10:58:20 +03:00
Stefan Prodan
8008562a33 Merge pull request #271 from weaveworks/crd
Add missing fields to CRD validation spec
2019-08-07 11:09:07 +03:00
stefanprodan
a39652724d Add confirm and pre rollout hooks to e2e tests 2019-08-07 10:55:15 +03:00
stefanprodan
691c3c4f36 Add missing fields to CRD validation spec 2019-08-07 10:22:07 +03:00
Stefan Prodan
f6fa5e3891 Merge pull request #270 from weaveworks/prep-0.18.2
Release v0.18.2
2019-08-05 18:57:54 +03:00
stefanprodan
a305a0b705 Release v0.18.2 2019-08-05 18:43:57 +03:00
Stefan Prodan
dfe619e2ea Merge pull request #269 from weaveworks/helm-circleci
Publish Helm chart from CircleCI
2019-08-05 17:57:21 +03:00
stefanprodan
2b3d425b70 Publish Helm chart from CircleCI 2019-08-05 17:08:33 +03:00
Stefan Prodan
6e55fea413 Merge pull request #268 from weaveworks/istio-1.2.3
Update e2e backends
2019-08-03 15:44:53 +03:00
stefanprodan
b6a08b6615 Fix AppMesh mesh name in docs 2019-08-03 15:24:31 +03:00
stefanprodan
eaa6906516 Update e2e NGINX ingress to v1.12.1 2019-08-03 13:42:27 +03:00
stefanprodan
62a7a92f2a Update e2e Gloo to v0.18.8 2019-08-03 13:01:57 +03:00
stefanprodan
3aeb0945c5 Update e2e Istio to v1.2.3 2019-08-03 12:05:21 +03:00
Stefan Prodan
e8c85efeae Merge pull request #267 from fcantournet/fix_virtualservices_multipleports
Fix Port discovery with multiple port services
2019-08-03 12:04:04 +03:00
Félix Cantournet
6651f6452b Multiple port canary: fix FAQ and add e2e tests 2019-08-02 14:23:58 +02:00
Félix Cantournet
0ca48d77be Fix Port discovery with multiple port services
This fixes issue https://github.com/weaveworks/flagger/issues/263

We actually don't need to specify any ports in the VirtualService
and DestinationRules.
Istio will create clusters/listeners for each named port we have declared in
the Kubernetes services, and the router can be shared as it operates only on L7 criteria.

Also contains a tiny clean-up of imports
2019-08-02 10:07:00 +02:00
Stefan Prodan
a9e0e018e3 Merge pull request #266 from ExpediaInc/master
Parameterize image pull secrets for private docker repos
2019-08-01 11:07:53 +03:00
Sky Moon
122d11f445 Merge pull request #1 from ExpediaInc/parameterizeImagePullSecrets
parameterize image pull secrets for private docker repos.
2019-08-01 00:50:15 -07:00
cmoon
b03555858c parameterize image pull secrets for private docker repos. 2019-08-01 00:47:07 -07:00
Stefan Prodan
dcc5a40441 Merge pull request #262 from weaveworks/prep-0.18.1
Release v0.18.1
2019-07-30 13:52:25 +03:00
stefanprodan
8c949f59de Package helm charts locally 2019-07-30 13:35:09 +03:00
stefanprodan
e8d91a0375 Release v0.18.1 2019-07-30 13:22:51 +03:00
Stefan Prodan
fae9aa664d Merge pull request #261 from weaveworks/blue-green-e2e
Fix Blue/Green metrics provider and add e2e tests
2019-07-30 13:16:20 +03:00
stefanprodan
c31e9e5a96 Use Linkerd metrics for ingress and kubernetes routers 2019-07-30 13:00:28 +03:00
stefanprodan
99fff98274 Kustomize: set Flagger log level to info 2019-07-30 12:43:02 +03:00
stefanprodan
11d84bf35d Enable kubernetes metric provider 2019-07-30 12:27:53 +03:00
stefanprodan
e56ba480c7 Add Blue/Green e2e tests 2019-07-30 12:02:15 +03:00
Stefan Prodan
b9f0517c5d Merge pull request #255 from weaveworks/prep-0.18.0
Release v0.18.0
2019-07-29 16:06:23 +03:00
stefanprodan
6e66f02585 Update changelog 2019-07-29 15:52:50 +03:00
stefanprodan
5922e96044 Merge branch 'prep-0.18.0' of https://github.com/weaveworks/flagger into prep-0.18.0 2019-07-29 15:06:43 +03:00
stefanprodan
f36e7e414a Add manual gating link to readme 2019-07-29 15:06:31 +03:00
stefanprodan
606754d4a5 Disable supergloo e2e 2019-07-29 15:06:31 +03:00
stefanprodan
a3847e64df Add Kustomize download link to docs 2019-07-29 15:06:31 +03:00
stefanprodan
7a3f9f2e73 Use Kustomize for Istio e2e testing 2019-07-29 15:06:31 +03:00
stefanprodan
2e4e8b0bf9 Make installer work with Kustomize v3 2019-07-29 15:06:31 +03:00
stefanprodan
951fe80115 Use crd.create=false in docs 2019-07-29 15:06:30 +03:00
stefanprodan
c0a8149acb Add kubectl min version to Kustomize docs 2019-07-29 15:06:30 +03:00
stefanprodan
80b75b227d Add CRD install step to chart 2019-07-29 15:06:30 +03:00
stefanprodan
dff7de09f2 Use kubectl for CRD install 2019-07-29 15:06:30 +03:00
stefanprodan
b3bbadfccf Add v0.18.0 to changelog 2019-07-29 15:06:30 +03:00
stefanprodan
fc676e3cb7 Release v0.18.0 2019-07-29 15:06:30 +03:00
stefanprodan
860c82dff9 Remove test artifacts 2019-07-29 15:06:30 +03:00
Stefan Prodan
4829f5af7f Merge pull request #257 from weaveworks/promotion
Implement promotion finalising state
2019-07-29 15:03:05 +03:00
stefanprodan
c463b6b231 Add finalising state tests 2019-07-29 14:02:16 +03:00
stefanprodan
b2ca0c4c16 Implement finalising state
Set the canary status to finalising after routing the traffic back to the primary. Run one final loop before scaling the canary to zero so that the canary has a chance to process all in-flight requests.
2019-07-29 13:52:11 +03:00
stefanprodan
69875cb3dc Add finalising status phase to CRD 2019-07-29 13:43:30 +03:00
stefanprodan
9e33a116d4 Add manual gating link to readme 2019-07-29 11:33:28 +03:00
stefanprodan
dab3d53b65 Disable supergloo e2e 2019-07-28 11:28:00 +03:00
stefanprodan
e3f8bff6fc Add Kustomize download link to docs 2019-07-27 15:51:22 +03:00
stefanprodan
0648d81d34 Use Kustomize for Istio e2e testing 2019-07-27 14:49:57 +03:00
stefanprodan
ece5c4401e Make installer work with Kustomize v3 2019-07-27 14:45:49 +03:00
stefanprodan
bfc64c7cf1 Use crd.create=false in docs 2019-07-27 13:20:55 +03:00
stefanprodan
0a2c134ece Add kubectl min version to Kustomize docs 2019-07-27 13:07:47 +03:00
stefanprodan
8bea9253c3 Add CRD install step to chart 2019-07-27 13:06:27 +03:00
stefanprodan
e1dacc3983 Use kubectl for CRD install 2019-07-26 15:52:00 +03:00
stefanprodan
0c6a7355e7 Add v0.18.0 to changelog 2019-07-26 14:05:19 +03:00
stefanprodan
83046282c3 Release v0.18.0 2019-07-26 13:53:40 +03:00
stefanprodan
65c9817295 Remove test artifacts 2019-07-26 13:51:15 +03:00
Stefan Prodan
e4905d3d35 Merge pull request #254 from weaveworks/podinfo
Use Kustomize installer in Linkerd docs
2019-07-26 13:44:51 +03:00
stefanprodan
6bc0670a7a Use Kustomize installer in Linkerd docs 2019-07-26 13:30:28 +03:00
stefanprodan
95ff6adc19 Use podinfo 1.7 in GitOps demo 2019-07-26 13:20:06 +03:00
stefanprodan
7ee51c7def Add podinfo to Kustomize installer 2019-07-26 13:19:36 +03:00
Stefan Prodan
dfa065b745 Merge pull request #251 from weaveworks/gates
Implement confirm rollout gate, hook and API
2019-07-26 01:40:35 +03:00
stefanprodan
e3b03debde Use podinfo v1.7 2019-07-26 01:25:44 +03:00
Stefan Prodan
ef759305cb Merge pull request #253 from grampelberg/master
Update Linkerd to use correct canaries directory.
2019-07-26 00:24:52 +03:00
grampelberg
ad65497d4e Update Linkerd to use correct canaries directory. 2019-07-25 11:10:52 -07:00
stefanprodan
163f5292b0 Push a notification when a canary is waiting for approval 2019-07-25 19:13:22 +03:00
stefanprodan
e07a82d024 Add manual gating to docs 2019-07-25 13:32:58 +03:00
stefanprodan
046245a8b5 Use Gloo 0.17.6 in e2e tests 2019-07-24 19:54:33 +03:00
stefanprodan
aa6a180bcc Remove Gloo NodePort from e2e tests 2019-07-24 19:44:06 +03:00
stefanprodan
c4d28e14fc Upgrade Gloo e2e to v0.17.5 2019-07-24 19:35:02 +03:00
stefanprodan
bc4bdcdc1c Upgrade Gloo e2e to v0.17.6 2019-07-24 19:21:41 +03:00
stefanprodan
be22ff9951 Bump load tester version 2019-07-24 16:28:46 +03:00
stefanprodan
f204fe53f4 Implement canary gating API with in-memory storage
POST /gate/[check|open|close]
2019-07-24 16:14:22 +03:00
stefanprodan
28e7e89047 Pause or resume analysis on confirmation gate toggle 2019-07-24 16:09:13 +03:00
stefanprodan
75d49304f3 Add confirm-rollout hook to docs 2019-07-24 12:17:11 +03:00
stefanprodan
04cbacb6e0 Implement confirm rollout gate and hook
The confirm-rollout hooks are executed before the pre-rollout hooks. Flagger will halt the canary rollout until the confirm webhook returns HTTP status 200.
2019-07-24 12:09:39 +03:00
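A sketch of the confirm-rollout gate, wired to the in-memory gating API (`POST /gate/[check|open|close]`) from the commits above; the loadtester host is illustrative:

```yaml
  canaryAnalysis:
    webhooks:
      # the rollout is halted until the gate is opened via POST /gate/open
      - name: "manual gate"
        type: confirm-rollout
        url: http://flagger-loadtester.test/gate/check
```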
stefanprodan
c46c7b9e21 Add canary status conditions to docs 2019-07-24 12:04:05 +03:00
stefanprodan
919dafa567 Add gate halt and approve endpoints 2019-07-24 12:02:44 +03:00
stefanprodan
dfdcfed26e Add Waiting canary status phase
The Waiting phase means the canary rollout is paused (waiting for confirmation to proceed).
2019-07-24 12:00:04 +03:00
Stefan Prodan
a0a4d4cfc5 Merge pull request #248 from weaveworks/ghz
Add gRPC load testing tool
2019-07-23 12:44:04 +03:00
stefanprodan
970a589fd3 Add load tester to kustomize installer 2019-07-23 12:30:38 +03:00
stefanprodan
56d2c0952a Add gRPC load test example to docs 2019-07-22 15:16:13 +03:00
stefanprodan
4871be0345 Release loadtester v0.5.0 2019-07-22 14:57:14 +03:00
stefanprodan
e3e112e279 Add gRPC load testing tool
https://ghz.sh
2019-07-22 14:55:19 +03:00
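A sketch of driving ghz through the load tester's command webhook; the proto path reuses the health-check proto mentioned in #278 above, and all values are illustrative:

```yaml
  canaryAnalysis:
    webhooks:
      - name: grpc-load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          # run ghz for 1 minute at 10 queries per second against the canary
          cmd: "ghz -z 1m -q 10 --insecure --proto=/tmp/ghz/health.proto --call=grpc.health.v1.Health/Check podinfo.test:9898"
```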
Stefan Prodan
d2cbd40d89 Merge pull request #240 from weaveworks/refactor
Refactor canary change detection and status
2019-07-22 14:33:02 +03:00
stefanprodan
3786a49f00 Update Linkerd e2e to v2.4.0 2019-07-16 11:20:42 +02:00
stefanprodan
ff4aa62061 Retry canary status update on conflict 2019-07-10 11:31:20 +03:00
stefanprodan
9b6cfdeef7 Update Canary CRD helm chart and Kustomize 2019-07-10 09:55:46 +03:00
stefanprodan
9d89e0c83f Log status update error 2019-07-10 09:55:20 +03:00
stefanprodan
559cbd0d36 Pin NGINX helm chart to v1.8.2 2019-07-10 09:49:39 +03:00
stefanprodan
caea00e47f Pin NGINX helm chart to version 1.8.2 2019-07-10 09:42:49 +03:00
stefanprodan
b26542f38d Do not trigger a canary deployment on manual rollback
Save the primary spec hash and check if it matches the canary spec. If the canary hash is identical to the primary one, skip promotion.
2019-07-10 09:08:33 +03:00
Stefan Prodan
bbab7ce855 Merge pull request #238 from weaveworks/prep-0.17.0
Release v0.17.0
2019-07-10 08:46:10 +03:00
stefanprodan
afa2d079f6 Add status conditions and descriptions to CRD 2019-07-09 17:11:13 +03:00
stefanprodan
108bf9ca65 Add initializing canary phase/status condition reason
Fix HPA reconciliation min replicas diff
2019-07-09 17:10:43 +03:00
stefanprodan
438f952128 Implement status conditions
Add Promoted status condition with the following reasons: Initialized, Progressing, Succeeded, Failed
Usage: `kubectl wait canary/app --for=condition=promoted`
Fix: #184
2019-07-09 15:22:56 +03:00
stefanprodan
3e84799644 Detect changes in pod template metadata
Use the pod template spec hash to track changes (breaking)
2019-07-09 08:52:31 +03:00
stefanprodan
d6e80bac7f Update webhook mTLS FAQ
Fix: #239
2019-07-08 17:21:59 +03:00
stefanprodan
9b3b24bddf Add v0.17.0 changelog 2019-07-08 15:02:09 +03:00
stefanprodan
5c831ae482 Add Linkerd to docs 2019-07-08 13:43:01 +03:00
stefanprodan
78233fafd3 Release v0.17.0 2019-07-08 13:35:36 +03:00
Stefan Prodan
73c3e07859 Merge pull request #236 from weaveworks/leader-election
Implement leader election
2019-07-08 11:32:06 +03:00
stefanprodan
10c61daee4 Exit when losing leadership 2019-07-07 13:09:05 +03:00
stefanprodan
b1bb9fa114 Enable leader election for e2e testing 2019-07-07 12:19:09 +03:00
stefanprodan
a7f4b6d2ae Add leader election and pod anti affinity to chart 2019-07-07 12:08:08 +03:00
stefanprodan
b937c4ea8d Implement leader election
Add enable-leader-election and leader-election-namespace flags
2019-07-07 11:41:27 +03:00
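The two flags named in the commit above would be wired into the Flagger Deployment like this (sketch; the namespace value is illustrative):

```yaml
# excerpt of the Flagger Deployment container args
spec:
  template:
    spec:
      containers:
        - name: flagger
          args:
            - -enable-leader-election=true
            - -leader-election-namespace=flagger-system
```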
Stefan Prodan
e577311b64 Merge pull request #235 from weaveworks/msteams
Implement MS Teams notifications
2019-07-07 11:40:00 +03:00
stefanprodan
b847345308 Add 5 seconds timeout to notifier 2019-07-06 18:02:45 +03:00
stefanprodan
85e683446f Add MS Teams to docs 2019-07-06 17:22:39 +03:00
stefanprodan
4f49aa5760 Add MS Teams webhook field to chart 2019-07-06 17:14:50 +03:00
stefanprodan
8ca9cf24bb Implement MS Teams notifier 2019-07-06 17:14:21 +03:00
stefanprodan
61d0216c21 Add traffic routing to notifications 2019-07-06 17:13:09 +03:00
stefanprodan
ba4a2406ba Refactor notifier to allow more implementations 2019-07-06 15:47:12 +03:00
Stefan Prodan
c2974416b4 Merge pull request #234 from weaveworks/psp
Add pod security policy to Helm chart
2019-07-06 11:52:58 +03:00
stefanprodan
48fac4e876 Disable privilege escalation 2019-07-06 10:38:17 +03:00
stefanprodan
f0add9a67c Use a role binding for the PSP rbac 2019-07-05 17:10:05 +03:00
stefanprodan
20f9df01c2 Add pod security policy to Helm chart
- disable privileged, hostIPC, hostNetwork and hostPID
- add psp flag to chart readme
2019-07-05 16:47:41 +03:00
Stefan Prodan
514e850072 Merge pull request #232 from weaveworks/kustomize
Add Kustomize installer
2019-07-05 16:23:44 +03:00
stefanprodan
61fe78a982 Mention Prometheus data retention in docs 2019-07-04 15:07:05 +03:00
stefanprodan
c4b066c845 Add Kustomize installer to docs 2019-07-04 13:45:56 +03:00
stefanprodan
d24a23f3bd Kustomize installer: add installer readme 2019-07-04 13:29:02 +03:00
stefanprodan
22045982e2 Kustomize installer: add Linkerd overlay 2019-07-04 13:27:23 +03:00
stefanprodan
f496f1e18f Kustomize installer: add Istio overlay 2019-07-04 13:27:03 +03:00
stefanprodan
2e802432c4 Kustomize installer: add Kubernetes overlay 2019-07-04 13:26:52 +03:00
stefanprodan
a2f747e16f Kustomize installer: add Prometheus base manifests 2019-07-04 13:26:22 +03:00
stefanprodan
982338e162 Kustomize installer: add Flagger base manifests 2019-07-04 13:26:08 +03:00
Stefan Prodan
03fe4775dd Merge pull request #231 from weaveworks/gloo-0.14.2
Update Gloo and Prometheus
2019-07-02 09:24:14 +03:00
stefanprodan
def7d9bde0 Update Prometheus to v2.10.0 and set retention to 2h 2019-07-02 09:10:56 +03:00
stefanprodan
a58a7cbeeb Update Gloo to 0.14.2 2019-07-01 16:07:38 +03:00
Stefan Prodan
82ca66c23b Merge pull request #230 from weaveworks/linkerd
Add Linkerd support and e2e testing
2019-07-01 10:28:38 +03:00
stefanprodan
92c971c0d7 Add ingress and A/B testing example to Linkerd docs 2019-07-01 10:02:50 +03:00
stefanprodan
30c4faf72b Add Linkerd canary deployments docs 2019-07-01 09:23:38 +03:00
stefanprodan
85ee7d17cf Set min analysis interval to 10s 2019-07-01 09:23:05 +03:00
stefanprodan
a6d278ae91 Add Linkerd traffic split diagram 2019-06-30 13:52:59 +03:00
stefanprodan
ad8d02f701 Use Linkerd metrics when NGINX is the mesh ingress
Set the metrics provider to Linkerd Prometheus when using NGINX as Linkerd Ingress. This mitigates the lack of canary metrics in the NGINX controller exporter.
2019-06-30 13:03:27 +03:00
stefanprodan
00fa5542f7 Add linkerd as mesh provider 2019-06-30 12:46:23 +03:00
stefanprodan
9ed2719d19 Add canary rollback test to Linkerd e2e 2019-06-30 10:36:37 +03:00
stefanprodan
8a809baf35 Linkerd e2e testing: set canary max weight to 50% 2019-06-29 16:06:16 +03:00
stefanprodan
ff90c42fa7 Fix Linkerd CLI install 2019-06-29 15:53:36 +03:00
stefanprodan
d651e8fe48 Fix Linkerd metrics test 2019-06-29 15:23:40 +03:00
stefanprodan
bc613905e9 Add Linkerd edge-19.6.4 e2e testing 2019-06-29 15:20:35 +03:00
stefanprodan
e3321118e5 Fix linkerd success rate query 2019-06-29 15:03:36 +03:00
Stefan Prodan
31f526cbd6 Merge pull request #229 from weaveworks/istio-1.2.2
Update Istio e2e to v1.2.2
2019-06-29 10:53:31 +03:00
stefanprodan
493554178f Update Istio e2e to v1.2.2
Disable galley and MCP
2019-06-29 10:27:40 +03:00
Stefan Prodan
004b1cc7dd Merge pull request #228 from weaveworks/updates
Update Grafana and Kubernetes Kind
2019-06-27 10:31:20 +03:00
stefanprodan
767602592c Bump podinfo chart version 2019-06-27 10:19:34 +03:00
stefanprodan
34676acaf5 Add Istio TLS mode to podinfo chart 2019-06-27 10:14:06 +03:00
stefanprodan
491ab7affa Update Grafana to v6.2.5 2019-06-27 10:13:25 +03:00
stefanprodan
b522bbd903 Update Kubernetes Kind to v0.4.0 2019-06-27 09:58:56 +03:00
Stefan Prodan
dd3bc28806 Merge pull request #227 from dcherman/validate-k8s-version
Validate the minimum supported k8s version
2019-06-27 09:56:25 +03:00
Daniel Herman
764e7e275d Validate the minimum supported k8s version
Fixes #103
2019-06-27 00:18:08 -04:00
stefanprodan
931c051153 Fix tag composition in release script 2019-06-24 17:46:12 +03:00
Stefan Prodan
3da86fe118 Merge pull request #224 from weaveworks/hpa
Update primary HPA only when canary changed
2019-06-24 17:14:26 +03:00
stefanprodan
93f37a3022 Update primary HPA only when canary changed 2019-06-24 16:21:22 +03:00
stefanprodan
77b3d861e6 Add release workflow to CI 2019-06-24 16:08:56 +03:00
Stefan Prodan
ce0e16ffe8 Merge pull request #222 from weaveworks/release-v0.16.0
Release v0.16.0
2019-06-24 13:53:59 +03:00
stefanprodan
fb9709ae78 Add Blue/Green to FAQ 2019-06-23 15:48:13 +03:00
stefanprodan
191c3868ab Update changelog for v0.16.0 2019-06-23 13:52:01 +03:00
stefanprodan
d076f0859e Release v0.16.0 2019-06-23 13:48:39 +03:00
stefanprodan
df24ba86d0 Add Blue/Green tutorial to docs 2019-06-23 13:40:12 +03:00
stefanprodan
3996bcfa67 Add canary provider field to docs 2019-06-23 13:35:36 +03:00
Stefan Prodan
9e8a4ad384 Merge pull request #221 from weaveworks/gloo-v0.14.0
Update Gloo e2e testing to v0.14.0
2019-06-23 11:35:31 +03:00
stefanprodan
26ee668612 Use Kind 0.2.1 for Gloo e2e 2019-06-23 11:22:21 +03:00
stefanprodan
e3c102e7f8 Use test ns for Gloo virtual service in e2e 2019-06-22 17:49:13 +03:00
stefanprodan
ba60b127ea Use Kind 0.3.0 for Gloo e2e 2019-06-22 16:50:50 +03:00
stefanprodan
74c69dc07e Update Gloo e2e to v0.14.0 2019-06-22 16:48:38 +03:00
Stefan Prodan
0687d89178 Merge pull request #220 from weaveworks/update-e2e
Update e2e tests to Kubernetes 1.14
2019-06-22 16:12:20 +03:00
stefanprodan
7a454c005f Use Kind 0.2.1 for supergloo e2e 2019-06-22 15:55:48 +03:00
stefanprodan
2ce4f3a93e Revert supergloo upgrade (Istio 1.1 not supported in v0.3.23) 2019-06-22 15:41:17 +03:00
stefanprodan
7baaaebdd4 Use Kind 0.2.1 for Gloo e2e 2019-06-22 15:28:45 +03:00
stefanprodan
608c7f7a31 Use Istio 1.1.3 for supergloo e2e testing 2019-06-22 15:15:38 +03:00
stefanprodan
1a0daa8678 Use http probes with Kind 0.3.0 2019-06-22 14:58:01 +03:00
stefanprodan
ed0d25af97 Revert to Kind 0.2.1 2019-06-22 14:33:06 +03:00
stefanprodan
720d04aba1 Update Supergloo to v0.3.23 2019-06-22 14:16:30 +03:00
stefanprodan
901648393a Update Kubernetes Kind to v0.3.0 2019-06-22 14:02:33 +03:00
Stefan Prodan
b5acd817fc Merge pull request #219 from weaveworks/istio-1.2.0
Fix Istio 1.2.0 e2e testing
2019-06-22 13:58:48 +03:00
stefanprodan
2586fc6ef0 Update Kubernetes Kind to v0.3.0 2019-06-22 13:58:18 +03:00
stefanprodan
62e0eb6395 Update changelog 2019-06-22 13:44:47 +03:00
stefanprodan
768b0490e2 Show CircleCI build status 2019-06-22 13:43:39 +03:00
stefanprodan
852454fa2c Fix Istio v1.2.0 e2e testing by enabling galley 2019-06-22 13:42:57 +03:00
Stefan Prodan
970b67d6f6 Merge pull request #212 from marcoferrer/bump-e2e-istio-version
Upgrade e2e tests to Istio v1.2.0
2019-06-22 13:41:11 +03:00
Stefan Prodan
ea0eddff82 Merge pull request #218 from weaveworks/ci
Refactor CI and e2e testing
2019-06-22 13:11:29 +03:00
stefanprodan
0d4d2ac37b CircleCI: Build and push load tester 2019-06-22 09:51:37 +03:00
stefanprodan
d0591916a4 Update k8s packages 2019-06-21 23:42:59 +03:00
stefanprodan
6a8aef8675 CircleCI: workaround for code gen 2019-06-21 21:47:09 +03:00
stefanprodan
a894a7a0ce CircleCI: update code gen package 2019-06-21 20:36:18 +03:00
stefanprodan
0bbe724b8c CircleCI: chmod k8s code gen 2019-06-21 16:43:16 +03:00
stefanprodan
bea22c0259 CircleCI: run go mod download 2019-06-21 16:40:25 +03:00
stefanprodan
6363580120 Fix k8s code gen 2019-06-21 16:34:19 +03:00
stefanprodan
cbdc7ef2d3 Build and run k8s code gen with go modules 2019-06-21 16:31:51 +03:00
stefanprodan
0959406609 Remove vendor dir 2019-06-21 16:31:19 +03:00
stefanprodan
cf41f9a478 CircleCI - fix deprecated goreleaser config 2019-06-21 16:15:51 +03:00
stefanprodan
6fe6a41e3e CircleCI - cleanup branch filters 2019-06-21 15:58:23 +03:00
stefanprodan
91cd2648d9 CircleCI - run goreleaser for git tags 2019-06-21 15:39:56 +03:00
stefanprodan
240591a6b8 CircleCI - run goreleaser 2019-06-21 15:29:38 +03:00
stefanprodan
2973822113 CircleCI - test goreleaser job 2019-06-21 15:17:39 +03:00
stefanprodan
a6b2b1246c CircleCI - add goreleaser job 2019-06-21 15:07:00 +03:00
stefanprodan
c74456411d CircleCI - install Tiller after Kind create 2019-06-21 14:54:17 +03:00
stefanprodan
31b3fcf906 CircleCI - refactor e2e tests 2019-06-21 14:43:00 +03:00
stefanprodan
767be5b6a8 CircleCI - reset go mod cache 2019-06-21 14:28:29 +03:00
stefanprodan
48834cd8d1 CircleCI - refactor Istio e2e testing 2019-06-21 14:26:12 +03:00
stefanprodan
f4bb0ea9c2 CircleCI - add codecov 2019-06-21 13:58:36 +03:00
stefanprodan
cf25a9a8a5 CircleCI - fix config 2019-06-21 13:47:06 +03:00
stefanprodan
4f0ad7a067 CircleCI - run Istio e2e 2019-06-21 13:45:16 +03:00
stefanprodan
c0fe461a9f CircleCI - push to Docker Hub 2019-06-21 13:33:26 +03:00
stefanprodan
1911143514 CircleCI - run go test 2019-06-21 13:29:32 +03:00
stefanprodan
9b67b360d0 CircleCI - build and push container 2019-06-21 13:26:12 +03:00
stefanprodan
991e01efd2 CircleCI - fix container build 2019-06-21 13:10:27 +03:00
stefanprodan
83b8ae46c9 CircleCI - copy bin to workspace dir 2019-06-21 13:07:59 +03:00
stefanprodan
c3b7aee063 CircleCI - make workspace dir 2019-06-21 13:05:16 +03:00
stefanprodan
66d662c085 CircleCI - fix working_directory 2019-06-21 13:01:47 +03:00
stefanprodan
4d5876fb76 CircleCI - fix job name 2019-06-21 13:00:26 +03:00
stefanprodan
7ca2558a81 CircleCI - fix config 2019-06-21 12:59:26 +03:00
stefanprodan
8957994c1a CircleCI - set job deps 2019-06-21 12:58:17 +03:00
stefanprodan
0147aea69b Build binary and container in CircleCI
Cache go modules
2019-06-21 12:55:27 +03:00
stefanprodan
b5f73d66ec Add version command 2019-06-21 12:54:43 +03:00
Stefan Prodan
6800181594 Merge pull request #217 from weaveworks/provider
Add the service mesh provider to the canary spec
2019-06-21 11:13:17 +03:00
Stefan Prodan
6f5f80a085 Merge pull request #216 from weaveworks/hpa-promotion
Reconcile the primary HPA on canary promotion
2019-06-21 11:13:02 +03:00
stefanprodan
fd23a2f98f Add kubernetes provider type
Synonym for provider `none`, to be used for blue/green deployments
2019-06-20 15:18:48 +03:00
stefanprodan
63cb8a5ba5 Lookup the canary provider field during reconciliation
Override the global provider if one is specified in the canary spec
2019-06-20 14:52:43 +03:00
stefanprodan
4a9e3182c6 Add the mesh provider field to canary CRD 2019-06-20 14:50:21 +03:00
stefanprodan
5cbc3df7b5 Use the internal load tester address in canary example 2019-06-20 13:32:06 +03:00
stefanprodan
dcadc2303f Add HPA promotion tests 2019-06-20 13:31:34 +03:00
stefanprodan
cf5f364ed2 Update the primary HPA on canary promotion 2019-06-20 13:30:55 +03:00
Stefan Prodan
e45ace5d9b Merge pull request #211 from weaveworks/noprouter
Allow blue/green deployments without a service mesh provider
2019-06-20 13:02:02 +03:00
Marco Ferrer
6e7421b0d8 Upgrade e2e tests to Istio v1.2.0 2019-06-19 13:26:22 -04:00
stefanprodan
647d02890f Add HTTP metrics when no mesh provider is specified
Implement request-success-rate and request-duration checks using http_request_duration_seconds histogram
2019-06-19 13:15:17 +03:00
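A minimal sketch of what such a check could look like expressed as a custom metric, assuming the conventional http_request_duration_seconds histogram with a `status` label; the builtin implementation referenced by the commit may differ.

```yaml
metrics:
  - name: request-success-rate
    threshold: 99      # minimum percentage of non-5xx requests
    interval: 1m
    # Hypothetical PromQL approximating the builtin check:
    query: |
      sum(rate(http_request_duration_seconds_count{status!~"5.."}[1m]))
      /
      sum(rate(http_request_duration_seconds_count[1m]))
      * 100
```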
stefanprodan
7e72d23b60 Bump load tester version to 0.4.0 2019-06-19 13:12:04 +03:00
stefanprodan
9fada306f0 Add a service mesh provider of type none
To be used for Kubernetes blue/green deployments with the no-operations router
2019-06-19 12:02:40 +03:00
stefanprodan
8d1cc83405 Add a no-operation router
To be used for Kubernetes blue/green deployments (no service mesh or ingress controller)
2019-06-19 12:01:02 +03:00
Stefan Prodan
1979bc59d0 Merge pull request #210 from weaveworks/nop-router
Kubernetes service reconciliation improvements
2019-06-19 11:49:10 +03:00
stefanprodan
bf7ebc9708 Skip readiness check on init for Istio SMI 2019-06-19 11:16:11 +03:00
stefanprodan
dc3cde88d2 Use Helm to install Flagger for Istio e2e tests 2019-06-19 11:03:44 +03:00
stefanprodan
98beb1011e Skip primary check on init when using Istio
The deployment will become ready after the ClusterIP services are created
2019-06-19 10:50:55 +03:00
stefanprodan
8c59e9d2b4 Fix metrics URL getter 2019-06-19 10:30:19 +03:00
stefanprodan
9a87d47f45 Check primary readiness on initialisation
Wait for the primary to become ready before scaling down the canary in the init phase
2019-06-19 09:49:25 +03:00
stefanprodan
f25023ed1b Include selector in service reconciliation
- detect changes in the Kubernetes service selectors and ports
- preserve the immutable fields when updating the ClusterIP services
2019-06-18 17:57:00 +03:00
stefanprodan
806b233d58 Fix typo in ClusterIP FAQ 2019-06-18 13:35:14 +03:00
Stefan Prodan
677ee8d639 Merge pull request #207 from weaveworks/port-discovery
Implement port discovery
2019-06-18 13:09:35 +03:00
stefanprodan
61ac8d7a8c Add port discovery to canary example 2019-06-18 12:46:21 +03:00
stefanprodan
278680b248 Add port discovery to changelog 2019-06-18 12:43:50 +03:00
stefanprodan
5e4a58a1c1 Upgrade e2e tests to Istio v1.1.9 2019-06-18 11:35:21 +03:00
stefanprodan
757b5ca22e Add missing config params to chart readme 2019-06-18 11:35:07 +03:00
stefanprodan
6d1da5bb45 Use container name in port discovery
If the port name is missing, append the container name to the TCP port name
2019-06-17 20:50:21 +03:00
stefanprodan
9ca79d147d Add Istio virtual service merging to FAQ 2019-06-17 12:06:53 +03:00
stefanprodan
37fcfe15bb Merge feature comparison table 2019-06-16 11:48:21 +03:00
stefanprodan
a9c7466359 Add pod affinity and label selectors to FAQ 2019-06-16 11:18:51 +03:00
stefanprodan
91a3f2c9a7 Add NGINX A/B testing convention to FAQ 2019-06-16 10:52:33 +03:00
stefanprodan
9aa341d088 Add load tester mTLS to FAQ
Ref: #186
2019-06-16 10:38:07 +03:00
stefanprodan
c9e09fa8eb Add Istio mTLS to FAQ
Fix: #205
2019-06-16 10:36:25 +03:00
stefanprodan
e6257b7531 Add port discovery to FAQ 2019-06-16 10:33:03 +03:00
stefanprodan
aee027c91c Add Kubernetes services to FAQ 2019-06-16 10:32:25 +03:00
stefanprodan
c106796751 Add A/B testing to FAQ 2019-06-16 10:31:25 +03:00
stefanprodan
42bd600482 Update GKE Prometheus config 2019-06-15 16:55:27 +03:00
stefanprodan
47ad81be5b Remove unused go modules 2019-06-15 16:54:52 +03:00
stefanprodan
88c450e3bd Implement port discovery
If port discovery is enabled, Flagger scans the deployment pod template and extracts the container ports, excluding the port specified in the canary service spec and the Istio proxy ports. All the extra ports will be used when generating the ClusterIP services.
2019-06-15 16:34:32 +03:00
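A minimal sketch of the feature described above, assuming the `portDiscovery` field name added to the service spec in the commit that follows; the CRD apiVersion is an assumption for this era.

```yaml
apiVersion: flagger.app/v1alpha3   # assumed CRD version at the time
kind: Canary
metadata:
  name: podinfo
spec:
  service:
    port: 9898            # kept out of the discovery scan
    portDiscovery: true   # expose the remaining container ports on the generated ClusterIP services
```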
stefanprodan
2ebedd185c Add port discovery field to canary service spec 2019-06-15 16:18:54 +03:00
Stefan Prodan
0fdbef4cda Merge pull request #203 from weaveworks/prep-v0.15.0
Release v0.15.0
2019-06-12 16:50:58 +03:00
stefanprodan
68500dc579 Fix e2e helm install 2019-06-12 16:33:09 +03:00
stefanprodan
12a29f1939 Release v0.15.0 2019-06-12 15:38:56 +03:00
stefanprodan
9974968dee Update Istio e2e to 1.1.8 2019-06-12 14:46:29 +03:00
Stefan Prodan
f2eaa91c9c Merge pull request #202 from weaveworks/gomod
Switch to go mod from dep
2019-06-12 11:15:44 +03:00
Stefan Prodan
f117f72901 Merge pull request #200 from weaveworks/traffic-policy
Add support for Istio traffic policy
2019-06-12 11:15:23 +03:00
stefanprodan
5424126d3c Remove go mod from code gen script 2019-06-11 19:48:11 +03:00
stefanprodan
028933b635 Switch to go mod from dep 2019-06-11 19:37:36 +03:00
stefanprodan
678f79fc61 Revendor with go mod 2019-06-11 19:35:26 +03:00
stefanprodan
933c19fdf4 Add generated destination rules to docs 2019-06-11 11:08:05 +03:00
stefanprodan
d678c59285 Add traffic policy to docs 2019-06-07 14:17:29 +03:00
stefanprodan
2285bd210e Add traffic policy to canary service spec
Attach traffic policy to canary and primary destination rules
2019-06-07 13:58:59 +03:00
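A minimal sketch of a traffic policy in the canary service spec, assuming it mirrors Istio's DestinationRule `trafficPolicy` shape; per the commit above, Flagger attaches it to both the primary and canary destination rules.

```yaml
  service:
    port: 9898
    trafficPolicy:           # assumed field name, mirroring Istio's DestinationRule trafficPolicy
      tls:
        mode: ISTIO_MUTUAL   # enable mTLS on both the primary and canary destination rules
```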
stefanprodan
cba6e5f811 Add Istio destination rule to RBAC 2019-06-07 13:32:34 +03:00
stefanprodan
3fa9f37192 Reconcile Istio destination rule
Remove port selector from virtual service destinations
Ignore the destination weight field when diffing the virtual service spec
2019-06-07 13:31:07 +03:00
stefanprodan
c243756802 Make Istio port selector optional 2019-06-07 13:22:39 +03:00
stefanprodan
27b1b882ea Add destination rule to Istio clientset 2019-06-07 11:52:51 +03:00
Stefan Prodan
2505cbfe15 Merge pull request #198 from weaveworks/release-v0.14.1
Release v0.14.1
2019-06-05 10:36:18 +03:00
stefanprodan
396452b7b6 Add changelog for v0.14.1 2019-06-05 10:28:40 +03:00
stefanprodan
76c82f48a4 Release v0.14.1 2019-06-05 10:28:13 +03:00
Stefan Prodan
948226dd4e Merge pull request #196 from weaveworks/helm-test-hook
Implement Helm and Bash pre-rollout hooks
2019-06-05 10:11:04 +03:00
stefanprodan
1c97fc86c9 Restrict Helm task to a single command 2019-06-05 09:40:18 +03:00
Stefan Prodan
00de7abfde Merge pull request #197 from Laci21/set-url-custom-path
Add ability to set Prometheus url with custom path without trailing "/"
2019-06-04 19:04:56 +03:00
László Bence Nagy
631d93b8d9 Add ability to set Prometheus url with custom path without trailing '/' 2019-06-04 17:31:27 +02:00
stefanprodan
2e38dbc565 Release test runner v0.4.0 2019-06-04 17:27:58 +03:00
stefanprodan
b122f7f71a Add integration tests to docs 2019-06-04 17:27:19 +03:00
stefanprodan
6101557000 Use the canary service as load testing target 2019-06-04 17:26:35 +03:00
stefanprodan
cdc66128a9 Add helm test pre-rollout example to docs 2019-06-04 16:15:48 +03:00
stefanprodan
eace3713ce Add helm test pre-rollout hook example to podinfo chart 2019-06-04 16:15:25 +03:00
stefanprodan
fd50c4b4b7 Add service account option to tester chart 2019-06-04 15:31:02 +03:00
stefanprodan
62a5f8c5d6 Log helm command before running it 2019-06-04 14:56:58 +03:00
stefanprodan
093cb24602 Run tester locally with docker 2019-06-04 14:02:28 +03:00
stefanprodan
4f63f7f9e4 Bump tester version to 0.4.0-beta.5 2019-06-04 14:01:53 +03:00
stefanprodan
9f359327f0 Add generic bash blocking command 2019-06-04 14:01:25 +03:00
stefanprodan
2bc8194d96 Prepend helm to command 2019-06-04 14:00:00 +03:00
stefanprodan
181d50b7b6 Add Helm tester deployment spec
To be deployed in the kube-system namespace; uses the tiller service account
2019-06-03 15:57:08 +03:00
stefanprodan
3ae995f55c Bump load tester version to v0.4.0-beta.2 2019-06-03 15:56:14 +03:00
stefanprodan
fbb37ad5e4 Add helm command type (blocking) to tester API
To be used as pre-rollout hook
2019-06-03 15:55:40 +03:00
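A minimal sketch of a Helm pre-rollout hook as described above; the tester URL and command are hypothetical, and per the "Restrict Helm task to a single command" commit above, the task runs exactly one helm command.

```yaml
  canaryAnalysis:
    webhooks:
      - name: smoke-test
        type: pre-rollout                     # blocks the canary from receiving traffic until it succeeds
        url: http://flagger-loadtester.test/  # hypothetical tester address
        timeout: 3m
        metadata:
          type: "helm"                        # the blocking helm command type added here
          cmd: "test podinfo --cleanup"       # illustrative single command
```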
stefanprodan
5cc3b905b4 Add Helm binary to load tester image 2019-06-03 15:54:06 +03:00
Stefan Prodan
abb8d946cc Merge pull request #194 from christian-posta/ceposta-fix-readme
Fix link to Gloo progressive delivery
2019-05-30 22:34:16 +02:00
Christian Posta
797316fc4d Fix link to Gloo progressive delivery 2019-05-30 12:37:59 -07:00
Stefan Prodan
beed6369a0 Merge pull request #190 from olga-mir/fix-promotion-usecase
Fix promoting canary when max weight is not a multiple of step
2019-05-23 15:13:50 +02:00
Olga Mirensky
9618d2ea0d Fix promoting canary when max weight is not a multiple of step 2019-05-23 10:18:19 +10:00
Stefan Prodan
94e5bfc031 Merge pull request #188 from weaveworks/release-v0.14.0
Release v0.14.0
2019-05-21 14:05:04 +02:00
stefanprodan
bb620ad94a Release v0.14.0 changelog 2019-05-21 13:54:54 +02:00
stefanprodan
7c6d1c48a3 Release v0.14.0 2019-05-21 13:54:15 +02:00
Stefan Prodan
bd5d884c8b Merge pull request #187 from weaveworks/docs-smi
Flagger docs SMI
2019-05-21 13:34:19 +02:00
Stefan Prodan
1c06721c9a Merge pull request #185 from weaveworks/docs-gloo
Add Gloo ingress controller docs
2019-05-21 13:24:19 +02:00
stefanprodan
1e29e2c4eb Fix Grafana Prometheus URL 2019-05-19 10:34:36 +03:00
stefanprodan
88c39d7379 Add Gloo canary deployment docs and diagram 2019-05-17 15:07:43 +03:00
stefanprodan
da43a152ba Add Gloo canary deployment example 2019-05-17 13:15:53 +03:00
stefanprodan
ec63aa9999 Add Gloo custom resources to RBAC 2019-05-17 11:55:15 +03:00
Stefan Prodan
7b9df746ad Merge pull request #179 from yuval-k/gloo2
Add support for Gloo
2019-05-17 11:17:27 +03:00
Yuval Kohavi
52d93ddda2 fix router tests 2019-05-16 13:08:53 -04:00
Yuval Kohavi
eb0331f2bf fix tests 2019-05-16 12:48:03 -04:00
Yuval Kohavi
6a66a87a44 PR updates 2019-05-16 07:28:22 -04:00
stefanprodan
f3cc810948 Update Flagger image tag (fix latency check) 2019-05-15 20:31:25 +03:00
Stefan Prodan
12d84b2e24 Merge pull request #183 from weaveworks/metrics-fix
Fix Istio latency check
2019-05-15 20:24:15 +03:00
stefanprodan
58bde24ece Fix Istio request duration test 2019-05-15 20:10:27 +03:00
stefanprodan
5b3fd0efca Set Istio request duration to milliseconds 2019-05-15 20:01:27 +03:00
stefanprodan
ee6e39afa6 Add SMI tutorial 2019-05-15 17:37:29 +03:00
Yuval Kohavi
677b9d9197 gloo metrics 2019-05-14 17:48:13 -04:00
Yuval Kohavi
786c5aa93a Merge remote-tracking branch 'upstream/master' into gloo2 2019-05-14 10:26:57 -04:00
Stefan Prodan
fd44f1fabf Merge pull request #182 from weaveworks/linkerd-metrics
Fix Linkerd promql queries
2019-05-14 15:23:37 +03:00
Stefan Prodan
b20e0178e1 Merge pull request #180 from weaveworks/smi
Add support for SMI
2019-05-14 13:24:29 +03:00
stefanprodan
5a490abfdd Remove the mesh gateway from docs examples 2019-05-14 13:06:52 +03:00
stefanprodan
674c79da94 Fix Linkerd promql queries
- include all inbound traffic stats
2019-05-14 12:14:47 +03:00
stefanprodan
23ebb4235d merge metrics-v2 into smi 2019-05-14 09:53:42 +03:00
Stefan Prodan
b2500d0ccb Merge pull request #181 from weaveworks/metrics-v2
Refactor the metrics package
2019-05-14 09:49:24 +03:00
stefanprodan
ee500d83ac Add Linkerd observer implementation 2019-05-13 17:51:39 +03:00
stefanprodan
0032c14a78 Refactor metrics
- add observer interface with builtin metrics functions
- add metrics observer factory
- add prometheus client
- implement the observer interface for istio, envoy and nginx
- remove deprecated istio and app mesh metric aliases (istio_requests_total, istio_request_duration_seconds_bucket, envoy_cluster_upstream_rq, envoy_cluster_upstream_rq_time_bucket)
2019-05-13 17:34:08 +03:00
stefanprodan
8fd3e927b8 Merge branch 'master' into smi 2019-05-12 13:58:37 +03:00
stefanprodan
1902884b56 Release v0.13.2 2019-05-11 15:16:31 +03:00
Stefan Prodan
98d2805267 Merge pull request #178 from carlossg/issue-177
Fix #177 Do not copy labels from canary to primary deployment
2019-05-11 14:56:22 +03:00
Carlos Sanchez
24a74d3589 Fix #177 Do not copy labels from canary to primary deployment 2019-05-11 13:42:08 +02:00
stefanprodan
7fe273a21d Fix SMI cluster role binding 2019-05-11 14:08:58 +03:00
stefanprodan
bd817cc520 Run SMI Istio e2e tests 2019-05-11 14:00:53 +03:00
stefanprodan
eb856fda13 Add SMI Istio e2e tests 2019-05-11 13:46:24 +03:00
stefanprodan
d63f05c92e Add SMI group to RBAC 2019-05-11 13:45:32 +03:00
stefanprodan
8fde6bdb8a Add SMI Istio adapter deployment 2019-05-11 13:35:36 +03:00
stefanprodan
8148120421 Enable Istio checks for SMI-Istio adapter 2019-05-11 13:06:06 +03:00
stefanprodan
95b8840bf2 Add SMI traffic split to router 2019-05-11 13:05:19 +03:00
stefanprodan
0e8b1ef20f Generate the SMI TrafficSplit clientset 2019-05-11 12:49:23 +03:00
Yuval Kohavi
0fbf4dcdb2 add canary promotion 2019-05-10 20:16:21 -04:00
Yuval Kohavi
7aca9468ac re-enable helm 2019-05-10 19:48:22 -04:00
Yuval Kohavi
a6c0f08fcc add gloo to circle 2019-05-10 19:44:46 -04:00
Yuval Kohavi
9c1bcc08bb float -> percent 2019-05-10 19:21:08 -04:00
Yuval Kohavi
87e9dfe3d3 e2e test 2019-05-10 19:16:16 -04:00
Yuval Kohavi
d7be66743e Merge remote-tracking branch 'upstream/master' into gloo2 2019-05-10 10:38:14 -04:00
Stefan Prodan
15463456ec Merge pull request #176 from weaveworks/nginx-tests
Add nginx e2e and unit tests
2019-05-10 12:09:40 +03:00
stefanprodan
752eceed4b Add tests for ingress weight changes 2019-05-10 11:53:12 +03:00
stefanprodan
eadce34d6f Add ingress router unit tests 2019-05-10 11:39:52 +03:00
stefanprodan
11ccf34bbc Document the nginx e2e tests 2019-05-10 10:50:24 +03:00
stefanprodan
e308678ed5 Deploy ingress for nginx e2e tests 2019-05-10 10:40:38 +03:00
stefanprodan
cbe72f0aa2 Add ingress target to nginx e2e tests 2019-05-10 10:29:09 +03:00
stefanprodan
bc84e1c154 Fix typos 2019-05-10 10:24:47 +03:00
stefanprodan
344bd45a0e Add nginx e2e tests 2019-05-10 10:24:35 +03:00
stefanprodan
72014f736f Release v0.13.1 2019-05-09 14:29:42 +03:00
Stefan Prodan
0a2949b6ad Merge pull request #174 from weaveworks/fix-metrics
Fix NGINX promql and custom metrics checks
2019-05-09 14:22:30 +03:00
stefanprodan
2ff695ecfe Fix nginx metrics tests 2019-05-09 14:00:15 +03:00
stefanprodan
8d0b54e059 Add custom metrics to nginx docs 2019-05-09 13:51:37 +03:00
stefanprodan
121a65fad0 Fix nginx promql namespace selector 2019-05-09 13:50:47 +03:00
stefanprodan
ecaa203091 Fix custom metric checks
- escape the prom query before encoding it
2019-05-09 13:49:48 +03:00
Stefan Prodan
6d0e3c6468 Merge pull request #173 from weaveworks/release-v0.13.0
Prepare release v0.13.0
2019-05-08 20:52:18 +03:00
stefanprodan
c933476fff Bump Grafana chart version 2019-05-08 20:26:40 +03:00
stefanprodan
1335210cf5 Add the Prometheus add-on to App Mesh docs 2019-05-08 19:03:53 +03:00
stefanprodan
9d12794600 Add NGINX to readme 2019-05-08 18:30:00 +03:00
stefanprodan
d57fc7d03e Add v0.13.0 change log 2019-05-08 18:05:58 +03:00
stefanprodan
1f9f6fb55a Release v0.13.0 2019-05-08 18:05:47 +03:00
Stefan Prodan
948df55de3 Merge pull request #170 from weaveworks/nginx
Add support for nginx ingress controller
2019-05-08 17:44:29 +03:00
stefanprodan
8914f26754 Add nginx docs 2019-05-08 17:03:36 +03:00
stefanprodan
79b3370892 Add Prometheus add-on to Flagger chart 2019-05-08 15:44:28 +03:00
stefanprodan
a233b99f0b Add HPA to nginx demo 2019-05-07 11:12:36 +03:00
stefanprodan
0d94c01678 Toggle canary annotation based on weight 2019-05-07 11:10:19 +03:00
stefanprodan
00151e92fe Implement A/B testing for nginx ingress 2019-05-07 10:33:40 +03:00
stefanprodan
f7db0210ea Add nginx ingress controller checks 2019-05-06 18:43:02 +03:00
stefanprodan
cf3ba35fb9 Add nginx ingress controller metrics 2019-05-06 18:42:31 +03:00
stefanprodan
177dc824e3 Implement nginx ingress router 2019-05-06 18:42:02 +03:00
stefanprodan
5f544b90d6 Log mesh provider at startup 2019-05-06 18:41:04 +03:00
stefanprodan
921ac00383 Add ingress ref to CRD and RBAC 2019-05-06 18:33:00 +03:00
Stefan Prodan
7df7218978 Merge pull request #168 from scranton/supergloo
Fix and clarify SuperGloo installation docs
2019-05-06 11:33:40 +03:00
Scott Cranton
e4c6903a01 Fix and clarify SuperGloo installation docs
Added missing `=` for --version, and added brew and helm install options
2019-05-05 15:42:06 -04:00
Stefan Prodan
027342dc72 Merge pull request #167 from weaveworks/grafana-fix
Change dashboard selector to destination workload
2019-05-04 09:03:57 +03:00
stefanprodan
e17a747785 Change dashboard selector to destination workload 2019-05-03 19:32:29 +03:00
Stefan Prodan
e477b37bd0 Merge pull request #162 from weaveworks/fix-vs
Fix duplicate hosts error when using wildcard
2019-05-02 19:17:52 +03:00
Stefan Prodan
ad25068375 Merge pull request #160 from aackerman/patch-1
Update default image repo in flagger chart readme to be weaveworks
2019-05-02 19:17:38 +03:00
stefanprodan
c92230c109 Fix duplicate hosts error when using wildcard 2019-05-02 19:05:54 +03:00
Stefan Prodan
9e082d9ee3 Update charts/flagger/README.md
Co-Authored-By: aackerman <theron17@gmail.com>
2019-05-02 11:05:43 -05:00
Aaron Ackerman
cfd610ac55 Update default image repo in flagger chart readme to be weaveworks 2019-05-02 07:18:00 -05:00
stefanprodan
82067f13bf Add GitOps diagram 2019-05-01 13:09:18 +03:00
Stefan Prodan
242d79e49d Merge pull request #159 from weaveworks/release-v0.12.0
Prepare release v0.12.0
2019-04-29 17:08:16 +03:00
stefanprodan
4f01ecde5a Update changelog 2019-04-29 16:41:26 +03:00
stefanprodan
61141c7479 Release v0.12.0 2019-04-29 16:37:48 +03:00
Stefan Prodan
62429ff710 Merge pull request #158 from weaveworks/docs-supergloo
Add SuperGloo install docs
2019-04-29 16:35:58 +03:00
stefanprodan
82a1f45cc1 Fix load tester image repo 2019-04-29 11:17:19 +03:00
stefanprodan
1a95fc2a9c Add SuperGloo install docs 2019-04-26 19:51:09 +03:00
Stefan Prodan
13816eeafa Merge pull request #151 from yuval-k/supergloo
Supergloo Support
2019-04-25 23:18:22 +03:00
Yuval Kohavi
5279f73c17 use name.namespace instead of namespace.name 2019-04-25 11:10:23 -04:00
Yuval Kohavi
d196bb2856 e2e test 2019-04-24 16:00:55 -04:00
Yuval Kohavi
3f8f634a1b add e2e tests 2019-04-23 18:06:46 -04:00
Yuval Kohavi
350efb2bfe gloo upstream group support 2019-04-23 07:47:50 -04:00
Yuval Kohavi
5ba27c898e remove todo 2019-04-23 07:42:52 -04:00
Stefan Prodan
57f1b63fa1 Merge pull request #156 from weaveworks/docs-fix
Fix Tiller-less install command
2019-04-22 20:15:11 +03:00
stefanprodan
d69e203479 Fix Tiller-less install command 2019-04-22 20:08:56 +03:00
Yuval Kohavi
4d7fae39a8 add retries and cors 2019-04-19 14:41:50 -04:00
Yuval Kohavi
2dc554c92a dep ensure twice 2019-04-19 11:23:32 -04:00
Yuval Kohavi
21c394ef7f pin supergloo\solo-kit 2019-04-19 11:20:28 -04:00
Yuval Kohavi
2173bfc1a0 Merge remote-tracking branch 'origin/master' into supergloo-updated 2019-04-19 11:17:37 -04:00
Yuval Kohavi
a19d016e14 more rules 2019-04-19 10:59:04 -04:00
Stefan Prodan
8f1b5df9e2 Merge pull request #154 from weaveworks/dep-update
Disable bats in load tester artifacts
2019-04-19 13:03:39 +03:00
stefanprodan
2d6b8ecfdf Disable bats in load tester artifacts 2019-04-19 13:02:20 +03:00
Stefan Prodan
8093612011 Merge pull request #153 from weaveworks/dep-update
Update Kubernetes packages to 1.13.1
2019-04-18 20:07:33 +03:00
stefanprodan
39dc761e32 Make codegen work with the klog shim 2019-04-18 19:56:48 +03:00
stefanprodan
0c68983c62 Update deps to Kubernetes 1.13.1 2019-04-18 19:30:55 +03:00
Stefan Prodan
c7539f6e4b Merge pull request #152 from weaveworks/release-v0.11.1
Prepare release v0.11.1
2019-04-18 16:14:55 +03:00
stefanprodan
8cebc0acee Update changelog for v0.11.1 2019-04-18 15:40:48 +03:00
stefanprodan
f60c4d60cf Release v0.11.1 2019-04-18 14:50:26 +03:00
stefanprodan
662f9cba2e Add bats tests to load tester artifacts 2019-04-18 14:34:25 +03:00
stefanprodan
4a82e1e223 Use the builtin metrics in docs 2019-04-18 14:25:55 +03:00
stefanprodan
b60b912bf8 Use the builtin metrics in artifacts 2019-04-18 13:53:13 +03:00
stefanprodan
093348bc60 Release loadtester 0.3.0 with bats support 2019-04-18 13:45:32 +03:00
Yuval Kohavi
37ebbf14f9 fix compile 2019-04-17 18:44:33 -04:00
Yuval Kohavi
156488c8d5 Merge remote-tracking branch 'origin/master' into supergloo-updated 2019-04-17 18:24:41 -04:00
Yuval Kohavi
68d1f583cc more tests 2019-04-17 13:04:02 -04:00
Stefan Prodan
3492b07d9a Merge pull request #150 from weaveworks/release-v0.11.0
Release v0.11.0
2019-04-17 11:25:16 +03:00
stefanprodan
d0b582048f Add change log for v0.11.0 2019-04-17 11:22:02 +03:00
stefanprodan
a82eb7b01f Release v0.11.0 2019-04-17 11:13:31 +03:00
stefanprodan
cd08afcbeb Add bash bats task runner
- run bats tests (blocking requests)
2019-04-17 11:05:10 +03:00
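A minimal sketch of the bats runner, with a hypothetical URL and test path; the metadata keys are assumptions in line with the tester API described above.

```yaml
    webhooks:
      - name: acceptance
        type: pre-rollout
        url: http://flagger-loadtester.test/
        metadata:
          type: "bash"                         # assumed task type for the blocking bats runner
          cmd: "bats /tests/acceptance.bats"   # hypothetical test file
```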
stefanprodan
331942a4ed Switch to Docker Hub from Quay 2019-04-17 11:03:38 +03:00
Yuval Kohavi
aa24d6ff7e minor change 2019-04-16 19:16:49 -04:00
Yuval Kohavi
58c2c19f1e re generate code 2019-04-16 19:16:37 -04:00
Yuval Kohavi
2a91149211 starting adding tests 2019-04-16 19:16:15 -04:00
Yuval Kohavi
868482c240 basics seem working! 2019-04-16 15:10:08 -04:00
Stefan Prodan
4e387fa943 Merge pull request #149 from weaveworks/e2e
Add end-to-end tests for A/B testing and pre/post hooks
2019-04-16 11:11:01 +03:00
stefanprodan
15484363d6 Add A/B testing and hooks to e2e readme 2019-04-16 11:00:03 +03:00
stefanprodan
50b7b74480 Speed up e2e tests by reducing the number of iterations 2019-04-16 10:41:35 +03:00
stefanprodan
adb53c63dd Add e2e tests for A/B testing and pre/post hooks 2019-04-16 10:26:00 +03:00
Stefan Prodan
bdc3a32e96 Merge pull request #148 from weaveworks/selectors
Make the pod selector label configurable
2019-04-16 09:51:05 +03:00
stefanprodan
65f716182b Add default selectors to docs 2019-04-15 13:30:58 +03:00
stefanprodan
6ef72e2550 Make the pod selector configurable
- default labels: app, name and app.kubernetes.io/name
2019-04-15 12:57:25 +03:00
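A minimal sketch of a target deployment using one of the default selector labels listed above (`app`, `name`, `app.kubernetes.io/name`); the image is illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: podinfo   # any of the default labels works
  template:
    metadata:
      labels:
        app.kubernetes.io/name: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo   # illustrative image
```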
stefanprodan
60f51ad7d5 Move deployer and config tracker to canary package 2019-04-15 11:27:08 +03:00
stefanprodan
a09dc2cbd8 Rename logging package 2019-04-15 11:25:45 +03:00
Stefan Prodan
825d07aa54 Merge pull request #147 from weaveworks/pre-post-rollout-hooks
Add pre/post-rollout webhooks
2019-04-14 19:52:19 +03:00
stefanprodan
f46882c778 Update cert-manager to v0.7 in GKE docs 2019-04-14 12:24:35 +03:00
stefanprodan
663fa08cc1 Add hook type and status to CRD schema validation 2019-04-13 21:21:51 +03:00
stefanprodan
19e625d38e Add pre/post rollout webhooks to docs 2019-04-13 20:30:19 +03:00
stefanprodan
edcff9cd15 Execute pre/post rollout webhooks
- halt the canary advancement if pre-rollout hooks are failing
- include the canary status (Succeeded/Failed) in the post-rollout webhook payload
- ignore post-rollout webhook failures
- log pre/post rollout webhook response result
2019-04-13 15:43:23 +03:00
stefanprodan
e0fc5ecb39 Add hook type to CRD
- pre-rollout execute webhook before routing traffic to canary
- rollout execute webhook during the canary analysis on each iteration
- post-rollout execute webhook after the canary has been promoted or rolled back
Add canary phase to webhook payload
2019-04-13 15:37:41 +03:00
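A minimal sketch combining the three hook types with their semantics from the two commits above; the URLs are hypothetical.

```yaml
  canaryAnalysis:
    webhooks:
      - name: gate
        type: pre-rollout    # runs before traffic is routed to the canary; a failure halts the advancement
        url: http://flagger-loadtester.test/gate/check
      - name: load-test
        type: rollout        # runs on each iteration during the canary analysis
        url: http://flagger-loadtester.test/
      - name: notify
        type: post-rollout   # runs after promotion or rollback; failures are ignored
        url: http://event-receiver.test/   # payload includes the canary phase (Succeeded/Failed)
```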
stefanprodan
4ac6629969 Exclude docs branches from CI 2019-04-13 15:33:03 +03:00
Stefan Prodan
68d8dad7c8 Merge pull request #146 from weaveworks/metrics
Unify App Mesh and Istio builtin metric checks
2019-04-13 00:47:35 +03:00
stefanprodan
4ab9ceafc1 Use metrics alias in e2e tests 2019-04-12 17:43:47 +03:00
stefanprodan
352ed898d4 Add request success rate and duration metrics alias 2019-04-12 17:00:04 +03:00
stefanprodan
e091d6a50d Set Envoy request duration to ms 2019-04-12 16:25:34 +03:00
stefanprodan
c651ef00c9 Add Envoy request duration test 2019-04-12 16:09:00 +03:00
stefanprodan
4b17788a77 Add Envoy request duration P99 query 2019-04-12 16:02:23 +03:00
Yuval Kohavi
e5612bca50 dep ensure 2019-04-10 20:20:10 -04:00
Yuval Kohavi
d21fb1afe8 initial supergloo code 2019-04-10 20:19:49 -04:00
Yuval Kohavi
89d0a533e2 dep + vendor 2019-04-10 20:03:53 -04:00
Stefan Prodan
db673dddd9 Merge pull request #141 from weaveworks/default-mesh
Set default mesh gateway only if no gateway is specified
2019-04-08 10:26:33 +03:00
stefanprodan
88ad457e87 Add default mesh gateway to docs and examples 2019-04-06 12:26:11 +03:00
stefanprodan
126b68559e Set default mesh gateway if no gateway is specified 2019-04-06 12:21:30 +03:00
stefanprodan
2cd3fe47e6 Remove FAQ from docs site index 2019-04-03 19:33:37 +03:00
Stefan Prodan
15eb7cce55 Merge pull request #139 from dholbach/add-faq-stub
Add faq stub
2019-04-03 18:33:16 +03:00
Daniel Holbach
13f923aabf remove FAQ from main page for now, as requested by Stefan 2019-04-03 17:25:45 +02:00
Daniel Holbach
0ffb112063 move FAQ into separate file, add links 2019-04-03 14:49:21 +02:00
Daniel Holbach
b4ea6af110 add FAQ stub
closes: #132
2019-04-03 11:10:55 +02:00
Daniel Holbach
611c8f7374 make markdownlint happy
Signed-off-by: Daniel Holbach <daniel@weave.works>
2019-04-03 10:56:04 +02:00
Daniel Holbach
1cc73f37e7 go OCD on the feature table
Signed-off-by: Daniel Holbach <daniel@weave.works>
2019-04-03 10:52:40 +02:00
Stefan Prodan
ca37fc0eb5 Merge pull request #136 from weaveworks/feature-list
Add Istio vs App Mesh feature list
2019-04-02 13:37:20 +03:00
stefanprodan
5380624da9 Add CORS, retries and timeouts to feature list 2019-04-02 13:27:32 +03:00
stefanprodan
aaece0bd44 Add acceptance tests to feature list 2019-04-02 13:05:01 +03:00
stefanprodan
de7cc17f5d Add Istio vs App Mesh feature list 2019-04-02 12:57:15 +03:00
Stefan Prodan
66efa39d27 Merge pull request #134 from dholbach/add-features-section
Seed a features section
2019-04-02 12:44:17 +03:00
Stefan Prodan
ff7c0a105d Merge pull request #130 from weaveworks/metrics-refactor
Refactor the metrics observer
2019-04-02 12:41:59 +03:00
Daniel Holbach
7b29253df4 use white-check-mark instead - hope this works this time 2019-04-01 18:51:44 +02:00
Daniel Holbach
7ef63b341e use white-heavy-check-mark instead 2019-04-01 18:47:10 +02:00
Daniel Holbach
e7bfaa4f1a factor in Stefan's feedback - use a table instead 2019-04-01 18:44:51 +02:00
Daniel Holbach
3a9a408941 Seed a features section
Closes: #133

Signed-off-by: Daniel Holbach <daniel@weave.works>
2019-04-01 18:09:57 +02:00
stefanprodan
3e43963daa Wait for Istio pods to be ready 2019-04-01 14:52:03 +03:00
stefanprodan
69a6e260f5 Bring down the Istio e2e CPU requests 2019-04-01 14:40:11 +03:00
stefanprodan
664e7ad555 Debug e2e load test 2019-04-01 14:33:47 +03:00
stefanprodan
ee4a009a06 Print Istio e2e status 2019-04-01 14:09:19 +03:00
stefanprodan
36dfd4dd35 Bring down Istio Pilot memory requests 2019-04-01 13:09:52 +03:00
stefanprodan
dbf36082b2 Revert Istio e2e to 1.1.0 2019-04-01 12:55:00 +03:00
stefanprodan
3a1018cff6 Change Istio e2e limits 2019-03-31 18:00:03 +03:00
stefanprodan
fc10745a1a Upgrade Istio e2e to v1.1.1 2019-03-31 14:43:08 +03:00
stefanprodan
347cfd06de Upgrade Kubernetes Kind to v0.2.1 2019-03-31 14:30:27 +03:00
stefanprodan
ec759ce467 Add Envoy success rate test 2019-03-31 14:17:39 +03:00
stefanprodan
f211e0fe31 Use go templates to render the builtin promql queries 2019-03-31 13:55:14 +03:00
stefanprodan
c91a128b65 Fix observer mock init 2019-03-30 11:55:41 +02:00
stefanprodan
6a080f3032 Rename observer and recorder 2019-03-30 11:49:43 +02:00
stefanprodan
b2c12c1131 Move observer to metrics package 2019-03-30 11:45:39 +02:00
Stefan Prodan
b945b37089 Merge pull request #127 from weaveworks/metrics
Refactor Prometheus recorder
2019-03-28 15:49:08 +02:00
stefanprodan
9a5529a0aa Add flagger_info metric to docs 2019-03-28 12:11:25 +02:00
stefanprodan
025785389d Refactor Prometheus recorder
- add flagger_info gauge metric
- expose the version and mesh provider as labels
- move the recorder to the metrics package
2019-03-28 11:58:19 +02:00
stefanprodan
48d9a0dede Ensure the status metric is set after a restart 2019-03-28 11:52:13 +02:00
Stefan Prodan
fbdf38e990 Merge pull request #124 from weaveworks/release-v0.10.0
Release v0.10.0 (AWS App Mesh edition)
2019-03-27 14:20:50 +02:00
stefanprodan
ef5bf70386 Update go to 1.12 2019-03-27 13:02:30 +02:00
stefanprodan
274c1469b4 Update changelog v0.10.0 2019-03-27 09:46:06 +02:00
stefanprodan
960d506360 Upgrade mesh definition to v1beta1 2019-03-27 09:40:11 +02:00
stefanprodan
79a6421178 Move load tester to Weaveworks Quay 2019-03-27 09:35:34 +02:00
Stefan Prodan
8b5c004860 Merge pull request #123 from weaveworks/appmesh-v1beta1
Update App Mesh to v1beta1
2019-03-26 21:53:34 +02:00
stefanprodan
f54768772e Fix App Mesh success rate graph 2019-03-26 21:30:18 +02:00
stefanprodan
b9075dc6f9 Update App Mesh to v1beta1 2019-03-26 20:29:40 +02:00
Stefan Prodan
107596ad54 Merge pull request #122 from weaveworks/prep-0.10.0
Reconcile ClusterIP services and prep v0.10.0
2019-03-26 17:25:57 +02:00
stefanprodan
3c6a2b1508 Update changelog for v0.10.0 2019-03-26 17:12:46 +02:00
stefanprodan
f996cba354 Set Grafana dashboards readable IDs 2019-03-26 17:12:46 +02:00
stefanprodan
bdd864fbdd Add port and mesh name to CRD validation 2019-03-26 17:12:46 +02:00
stefanprodan
ca074ef13f Rename router sync to reconcile 2019-03-26 17:12:46 +02:00
stefanprodan
ddd3a8251e Reconcile ClusterIP services
- add svc update tests (fix: #114)
2019-03-26 17:12:46 +02:00
stefanprodan
f5b862dc1b Add App Mesh docs to readme 2019-03-26 17:12:45 +02:00
stefanprodan
d45d475f61 Add Grafana dashboard link and update screen 2019-03-26 17:12:45 +02:00
stefanprodan
d0f72ea3fa Bump Flagger version to 0.10.0 2019-03-26 17:12:45 +02:00
stefanprodan
5ed5d1e5b6 Fix load tester install command for zsh 2019-03-26 11:57:01 +02:00
stefanprodan
311b14026e Release loadtester chart v0.2.0 2019-03-26 11:33:07 +02:00
stefanprodan
67cd722b54 Release Grafana chart v1.1.0 2019-03-26 11:16:42 +02:00
stefanprodan
7f6247eb7b Add jq requirement to App Mesh installer 2019-03-26 10:14:44 +02:00
Stefan Prodan
f3fd515521 Merge pull request #121 from weaveworks/stats
Fix canary status prom metric
2019-03-26 09:58:54 +02:00
stefanprodan
9db5dd0d7f Update changelog 2019-03-26 09:58:40 +02:00
stefanprodan
d07925d79d Fix canary status prom metrics 2019-03-25 17:26:22 +02:00
Stefan Prodan
662d0f31ff Merge pull request #119 from weaveworks/appmesh-ref
App Mesh docs
2019-03-25 15:13:35 +02:00
stefanprodan
2c5ad0bf8f Disable App Mesh ingress for load tester 2019-03-25 15:00:57 +02:00
stefanprodan
0ea76b986a Prevent the CRD from being removed by Helm 2019-03-25 15:00:23 +02:00
stefanprodan
3c4253c336 Docs fixes 2019-03-25 14:59:55 +02:00
stefanprodan
77ba28e91c Use App Mesh install script 2019-03-25 09:41:56 +02:00
stefanprodan
6399e7586c Update overview diagram 2019-03-24 15:10:13 +02:00
Stefan Prodan
1caa62adc8 Merge pull request #118 from weaveworks/appmesh
App Mesh refactoring + docs
2019-03-24 13:53:40 +02:00
stefanprodan
8fa558f124 Add intro to App Mesh tutorial 2019-03-24 13:38:00 +02:00
stefanprodan
191228633b Add App Mesh backend example 2019-03-24 13:11:03 +02:00
stefanprodan
8a981f935a Add App Mesh GitOps diagram 2019-03-24 12:26:35 +02:00
stefanprodan
a8ea9adbcc Add the Slack notifications to App Mesh tutorial 2019-03-23 17:27:53 +02:00
stefanprodan
685d94c44b Add Grafana screen to App Mesh docs 2019-03-23 16:26:24 +02:00
stefanprodan
153ed1b044 Add the App Mesh ingress to docs 2019-03-23 16:09:30 +02:00
stefanprodan
7788f3a1ba Add virtual node for App Mesh ingress 2019-03-23 15:56:52 +02:00
stefanprodan
cd99225f9b Add App Mesh canary deployments tutorial 2019-03-23 15:41:04 +02:00
stefanprodan
33ba3b8d4a Add App Mesh canary demo definitions 2019-03-23 15:40:27 +02:00
stefanprodan
d222dd1069 Change GitHub raw URLs to Weaveworks org 2019-03-23 13:38:57 +02:00
stefanprodan
39fd3d46ba Add App Mesh ingress gateway deployment 2019-03-23 13:22:33 +02:00
stefanprodan
419c1804b6 Add App Mesh telemetry deployment 2019-03-23 13:21:52 +02:00
stefanprodan
ae0351ddad Exclude the namespace from AppMesh object names
ref: https://github.com/aws/aws-app-mesh-controller-for-k8s/issues/14
2019-03-23 11:25:39 +02:00
stefanprodan
941be15762 Fix typo in comments 2019-03-23 11:25:31 +02:00
stefanprodan
578ebcf6ed Use pod name filter in Envoy metrics query
Add support for Weave Cloud Prometheus agent
2019-03-23 11:20:51 +02:00
Stefan Prodan
27ab4b08f9 Merge pull request #112 from weaveworks/appmesh-grafana
Add AppMesh Grafana dashboard
2019-03-21 16:19:05 +02:00
stefanprodan
428b2208ba Add Flagger logo svg format (CNCF landscape) 2019-03-21 16:05:51 +02:00
Stefan Prodan
438c553d60 Merge pull request #113 from tanordheim/service-port-name
Allow setting name of ports in generated services
2019-03-21 15:38:21 +02:00
Trond Nordheim
90cb293182 Add GRPC canary analysis custom query example 2019-03-21 14:27:11 +01:00
Trond Nordheim
1f9f93ebe4 Add portName information to how it works-guide 2019-03-21 12:41:44 +01:00
Trond Nordheim
f5b97fbb74 Add support for naming generated service ports 2019-03-21 12:37:02 +01:00
stefanprodan
ce79244126 Add maintainers 2019-03-21 12:34:28 +02:00
stefanprodan
3af5d767d8 Add AppMesh dashboard screen 2019-03-21 12:19:04 +02:00
stefanprodan
3ce3efd2f2 Fix Grafana port forward command 2019-03-21 12:11:11 +02:00
stefanprodan
8108edea31 Add AppMesh Grafana dashboard 2019-03-21 12:06:50 +02:00
Stefan Prodan
5d80087ab3 Merge pull request #109 from weaveworks/appmesh-docs
Add EKS App Mesh install docs
2019-03-21 11:02:35 +02:00
stefanprodan
6593be584d Remove namespace from Prometheus URL 2019-03-21 10:52:21 +02:00
stefanprodan
a0f63f858f Update changelog and roadmap 2019-03-21 09:57:49 +02:00
stefanprodan
49914f3bd5 Add EKS App Mesh install link to readme 2019-03-20 23:46:14 +02:00
stefanprodan
71988a8b98 Add EKS App Mesh install guide 2019-03-20 23:40:09 +02:00
stefanprodan
d65be6ef58 Add AppMesh virtual node to load tester chart 2019-03-20 23:39:34 +02:00
Stefan Prodan
38c40d02e7 Merge pull request #108 from weaveworks/ww
Move to weaveworks org
2019-03-20 19:10:56 +02:00
stefanprodan
9e071b9d60 Move to weaveworks Quay 2019-03-20 18:52:27 +02:00
stefanprodan
b4ae060122 Move to weaveworks org 2019-03-20 18:26:04 +02:00
Stefan Prodan
436656e81b Merge pull request #107 from stefanprodan/appmesh
AWS App Mesh integration
2019-03-20 17:57:25 +02:00
stefanprodan
d7e111b7d4 Add mesh provider option to Helm chart 2019-03-20 12:59:35 +02:00
stefanprodan
4b6126dd1a Add Envoy HTTP success rate metric check 2019-03-19 15:52:26 +02:00
stefanprodan
89faa70196 Fix canary virtual node DNS discovery 2019-03-19 15:51:17 +02:00
stefanprodan
6ed9d4a1db Add AppMesh CRDs to Flagger's RBAC 2019-03-18 10:13:46 +02:00
stefanprodan
9d0e38c2e1 Disable primary readiness check in tests 2019-03-17 12:50:48 +02:00
stefanprodan
8b758fd616 Set primary min replicas 2019-03-17 12:30:11 +02:00
stefanprodan
14369d8be3 Fix virtual node backends 2019-03-17 12:29:04 +02:00
stefanprodan
7b4153113e Fix router tests 2019-03-17 11:14:47 +02:00
stefanprodan
7d340c5e61 Change mesh providers based on cmd flag 2019-03-17 10:52:52 +02:00
stefanprodan
337c94376d Add AppMesh routing tests 2019-03-17 10:29:40 +02:00
stefanprodan
0ef1d0b2f1 Implement AppMesh routing ops 2019-03-17 10:29:21 +02:00
stefanprodan
5cf67bd4e0 Add AppMesh router sync tests 2019-03-16 14:14:24 +02:00
stefanprodan
f22be17852 Add AppMesh router sync implementation
Sync virtual nodes and virtual services
2019-03-16 14:13:58 +02:00
stefanprodan
48e79d5dd4 Add mesh provider flag 2019-03-16 14:12:23 +02:00
stefanprodan
59f5a0654a Add AppMesh fields to Canary CRD 2019-03-16 14:11:24 +02:00
stefanprodan
6da2e11683 Add AppMesh CRDs and Kubernetes client 2019-03-16 14:10:09 +02:00
stefanprodan
802c087a4b Fix Istio Gateway certificate
fix: #102
2019-03-14 17:52:58 +02:00
Stefan Prodan
ed2048e9f3 Merge pull request #105 from stefanprodan/svc
Copy pod labels from canary to primary
2019-03-14 01:59:20 +02:00
stefanprodan
437b1d30c0 Copy labels from canary to primary 2019-03-14 01:49:07 +02:00
stefanprodan
ba1788cbc5 Change default ClusterIP to point to primary
- ensures that the routing works without a service mesh
2019-03-14 01:48:10 +02:00
Stefan Prodan
773094a20d Merge pull request #99 from stefanprodan/loadtester-v0.2.0
Release loadtester v0.2.0
2019-03-12 12:07:47 +02:00
stefanprodan
5aa39106a0 Update loadtester cmd for e2e testing 2019-03-12 11:58:16 +02:00
stefanprodan
a9167801ba Release loadtester v0.2.0 2019-03-12 11:55:31 +02:00
Stefan Prodan
62f4a6cb96 Merge pull request #90 from cloudang/ngrinder
Support delegation to external load testing tools
2019-03-12 11:47:30 +02:00
Stefan Prodan
ea2b41e96e Merge pull request #98 from peterj/master
Upgrade Alpine to 3.9
2019-03-12 10:18:16 +02:00
Alex Wong
d28ce650e9 fix typo 2019-03-12 15:40:20 +08:00
Alex Wong
1bfcdba499 update vendor 2019-03-12 15:00:55 +08:00
Alex Wong
e48faa9144 add docs for ngrinder load testing 2019-03-12 14:59:55 +08:00
Alex Wong
33fbe99561 remove logCmdOutput from docs and k8s resources definition 2019-03-12 14:35:39 +08:00
Alex Wong
989925b484 update canary spec example, cmd flag logCmdOutput moved here 2019-03-12 14:33:23 +08:00
Alex Wong
7dd66559e7 add metadata field 'cmd' 2019-03-12 14:31:30 +08:00
Alex Wong
2ef1c5608e remove logCmdOutput flag 2019-03-12 14:31:00 +08:00
Alex Wong
b5932e8905 support time duration literal 2019-03-12 14:29:50 +08:00
Peter Jausovec
37999d3250 Upgrade Alpine to 3.9. Fixes #89 2019-03-11 20:17:15 -07:00
Stefan Prodan
83985ae482 Merge pull request #93 from stefanprodan/release-v0.9.0
Release v0.9.0
2019-03-11 15:26:00 +02:00
Stefan Prodan
3adfcc837e Merge pull request #94 from stefanprodan/fix-abtest-routing
Fix A/B Testing HTTP URI match conditions
2019-03-11 15:15:42 +02:00
stefanprodan
c720fee3ab Target the canary header in the load test 2019-03-11 15:04:01 +02:00
stefanprodan
881387e522 Fix HTTP URI match conditions 2019-03-11 14:54:17 +02:00
stefanprodan
d9f3378e29 Add change log for v0.9.0 2019-03-11 14:03:55 +02:00
stefanprodan
ba87620225 Release v0.9.0 2019-03-11 13:57:10 +02:00
Stefan Prodan
1cd0c49872 Merge pull request #88 from stefanprodan/ab-testing
A/B testing - canary with session affinity
2019-03-11 13:55:06 +02:00
stefanprodan
12ac96deeb Document how to enable A/B testing 2019-03-11 12:58:33 +02:00
Alex Wong
17e6f35785 add gock.v1 dependency 2019-03-11 10:07:50 +08:00
Stefan Prodan
bd115633a3 Merge pull request #91 from huydinhle/update-analysis-interval
Sync job when canary's interval changes
fix #86
2019-03-09 19:58:34 +02:00
stefanprodan
86ea172380 Fix weight metric report 2019-03-08 23:28:45 +02:00
stefanprodan
d87bbbbc1e Add A/B testing tutorial 2019-03-08 21:26:52 +02:00
Huy Le
6196f69f4d Create New Job when Canary's Interval changes
- Currently, whenever the canary analysis interval changes, Flagger does
not reflect this in the canary's job.
- This change makes sure the canary analysis job is updated whenever
the Canary object's interval changes
2019-03-08 10:27:34 -08:00
Alex Wong
be31bcf22f mocked test 2019-03-08 22:20:29 +08:00
Alex Wong
cba2135c69 add comments 2019-03-08 22:20:16 +08:00
Alex Wong
2e52573499 add gock dep 2019-03-08 22:20:02 +08:00
Alex Wong
b2ce1ed1fb test for ngrinder task 2019-03-08 21:30:26 +08:00
Alex Wong
77a485af74 poll ngrinder task status 2019-03-08 21:29:58 +08:00
stefanprodan
d8b847a973 Mention session affinity in docs 2019-03-08 15:05:44 +02:00
stefanprodan
e80a3d3232 Add A/B testing scheduling unit tests 2019-03-08 13:06:39 +02:00
stefanprodan
780ba82385 Log namespace restriction if one exists 2019-03-08 13:05:25 +02:00
stefanprodan
6ba69dce0a Add iterations field to CRD validation 2019-03-08 12:31:35 +02:00
stefanprodan
3c7a561db8 Add Istio routes A/B testing unit tests 2019-03-08 12:24:43 +02:00
stefanprodan
49c942bea0 Add A/B testing examples 2019-03-08 11:55:04 +02:00
stefanprodan
bf1ca293dc Implement fixed routing for canary analysis
Allow A/B testing scenarios where, instead of weighted routing, the traffic is split between the primary and canary based on HTTP headers or cookies.
2019-03-08 11:54:41 +02:00
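A minimal sketch of header- and cookie-based routing for A/B testing, assuming the match-condition shape added in the commit below; the header name, value, and regex are illustrative.

```yaml
  canaryAnalysis:
    iterations: 10            # fixed number of checks instead of weight stepping
    match:
      - headers:
          x-canary:
            exact: "insider"  # hypothetical header value routed to the canary
      - headers:
          cookie:
            regex: "^(.*?;)?(canary=always)(;.*)?$"   # cookie-based affinity
```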
stefanprodan
62b906d30b Add canary HTTP match conditions and iterations 2019-03-08 11:49:32 +02:00
Alex Wong
65bf048189 add ngrinder support 2019-03-08 15:50:44 +08:00
Alex Wong
a498ed8200 move original cmd tester to standalone source 2019-03-08 15:50:26 +08:00
Alex Wong
9f12bbcd98 refactoring loadtester to support external testing platform 2019-03-08 15:49:35 +08:00
Stefan Prodan
fcd520787d Merge pull request #84 from stefanprodan/release-v0.8.0
Release v0.8.0
2019-03-06 21:30:09 +02:00
stefanprodan
e2417e4e40 Skip e2e tests for release branches 2019-03-06 21:21:48 +02:00
stefanprodan
70a2cbf1c6 Add change log for v0.8.0 2019-03-06 21:17:37 +02:00
stefanprodan
fa0c6af6aa Release v0.8.0 2019-03-06 21:17:13 +02:00
Stefan Prodan
4f1abd0c8d Merge pull request #83 from stefanprodan/cors-policy
Add CORS policy support
2019-03-06 20:31:37 +02:00
stefanprodan
41e839aa36 Fix virtual service example 2019-03-06 15:56:20 +02:00
stefanprodan
2fd1593ad2 Use service headers to set Envoy timeout 2019-03-06 15:38:14 +02:00
stefanprodan
27b601c5aa Add CORS policy example 2019-03-06 15:37:28 +02:00
stefanprodan
5fc69134e3 Add CORS policy test 2019-03-06 15:34:51 +02:00
stefanprodan
9adc0698bb Add CORS policy to Istio router 2019-03-06 15:34:36 +02:00
stefanprodan
119c2ff464 Add CORS policy to Canary CRD 2019-03-06 15:33:53 +02:00
Stefan Prodan
f3a4201c7d Merge pull request #82 from stefanprodan/headers-ops
Add support for HTTP request header manipulation rules
2019-03-06 14:58:05 +02:00
stefanprodan
8b6aa73df0 Fix request header test 2019-03-06 13:51:04 +02:00
stefanprodan
1d4dfb0883 Add request header add test 2019-03-06 13:46:19 +02:00
stefanprodan
eab7f126a6 Use request.add for header append operation 2019-03-06 13:45:46 +02:00
stefanprodan
fe7547d83e Update Envoy headers example 2019-03-06 12:42:34 +02:00
stefanprodan
7d0df82861 Add header manipulation rules to Canary CRD 2019-03-06 12:41:53 +02:00
stefanprodan
7f0cd27591 Add Header manipulation rules to Istio Virtual Service 2019-03-06 12:17:41 +02:00
Stefan Prodan
e094c2ae14 Merge pull request #80 from stefanprodan/istio
Add Istio k8s client
2019-03-06 11:55:27 +02:00
Stefan Prodan
a5d438257f Merge pull request #78 from huydinhle/namespace-watcher
Add namespace flag
2019-03-06 11:10:17 +02:00
Huy Le
d8cb8f1064 Added Namespace Flag for Flagger
- introduce the namespace flag so Flagger watches a single namespace
for Canary objects
2019-03-05 20:57:00 -08:00
stefanprodan
a8d8bb2d6f Fix go fmt 2019-03-06 01:54:31 +02:00
stefanprodan
a76ea5917c Remove knative pkg
CORS and RetryOn are missing from the knative pkg.
Until Istio has an official k8s client, we'll maintain our own.
2019-03-06 01:47:13 +02:00
stefanprodan
b0b6198ec8 Add Istio virtual service and signal packages 2019-03-06 01:43:09 +02:00
Stefan Prodan
eda97f35d2 Merge pull request #73 from huydinhle/fined-grained-rbac
Fine-grained RBAC
2019-03-06 00:06:40 +02:00
Huy Le
2b6507d35a fine-grained rbac for flagger helm 2019-03-05 11:29:34 -08:00
stefanprodan
f7c4d5aa0b Disable PR comments when coverage doesn't change 2019-03-05 16:25:30 +02:00
Stefan Prodan
74f07cffa6 Merge pull request #72 from stefanprodan/router
Refactor routing management
2019-03-05 12:28:11 +02:00
Stefan Prodan
79c8ff0af8 Merge pull request #74 from cloudang/options
Command line options for easier debugging
2019-03-05 12:07:03 +02:00
stefanprodan
ac544eea4b Extend test coverage to all packages 2019-03-05 11:59:40 +02:00
Alex Wong
231a32331b move flags to main packages 2019-03-05 17:48:55 +08:00
Alex Wong
104e8ef050 Add options for customizing threadiness, logger encoding, and global logger level 2019-03-05 14:30:23 +08:00
Alex Wong
296015faff update .gitignore 2019-03-05 12:15:27 +08:00
stefanprodan
9a9964c968 Add ClusterIP host to virtual service 2019-03-05 02:27:56 +02:00
stefanprodan
0d05d86e32 Add Istio routing tests 2019-03-05 02:18:07 +02:00
stefanprodan
9680ca98f2 Rename service router to Kubernetes router 2019-03-05 02:12:52 +02:00
stefanprodan
42b850ca52 Replace controller routing management with router pkg 2019-03-05 02:04:55 +02:00
stefanprodan
3f5c22d863 Extract routing to dedicated package
- split routing management into Kubernetes service router and Istio Virtual service router
2019-03-05 02:02:58 +02:00
Stefan Prodan
535a92e871 Merge pull request #70 from stefanprodan/append-headers
Allow headers to be appended to HTTP requests
2019-03-04 10:39:43 +02:00
stefanprodan
3411a6a981 Add delay Envoy shutdown tip to docs 2019-03-03 14:03:34 +02:00
stefanprodan
b5adee271c Add zero downtime deployments tutorial 2019-03-03 13:24:15 +02:00
stefanprodan
e2abcd1323 Add append headers PR to changelog 2019-03-03 10:33:08 +02:00
Stefan Prodan
25fbe7ecb6 Merge pull request #71 from huydinhle/namepace-typo
Fixed namespace typo in the repo
2019-03-03 10:29:29 +02:00
Huy Le
6befee79c2 Fixed namespace typo in the repo 2019-03-02 13:49:42 -08:00
stefanprodan
f09c5a60f1 Add Envoy headers to e2e tests 2019-03-02 14:26:17 +02:00
stefanprodan
52e89ff509 Add Envoy timeout and retry policy to docs 2019-03-02 13:48:19 +02:00
stefanprodan
35e20406ef Append HTTP headers when configuring routing 2019-03-02 13:35:36 +02:00
stefanprodan
c6e96ff1bb Add append headers field to Canary CRD 2019-03-02 13:33:03 +02:00
Stefan Prodan
793ab524b0 Merge pull request #68 from stefanprodan/fix-docs
Add Getting Help section to readme
2019-03-02 10:36:40 +02:00
stefanprodan
5a479d0187 Add Weaveworks Slack links 2019-03-02 10:26:54 +02:00
stefanprodan
a23e4f1d2a Add timeout and retries example to docs 2019-03-02 10:26:34 +02:00
Stefan Prodan
bd35a3f61c Merge pull request #66 from stefanprodan/fix-mesh
Avoid mesh gateway duplicates
2019-03-02 01:27:00 +02:00
stefanprodan
197e987d5f Avoid mesh gateway duplicates 2019-03-01 13:09:27 +02:00
stefanprodan
7f29beb639 Don't run e2e tests for docs branches 2019-02-28 18:55:58 +02:00
Stefan Prodan
1140af8dc7 Merge pull request #63 from stefanprodan/release-0.7.0
Release v0.7.0
2019-02-28 17:12:27 +02:00
stefanprodan
a2688c3910 Add link to custom metrics docs 2019-02-28 16:58:26 +02:00
stefanprodan
75b27ab3f3 Add change log for v0.7.0 2019-02-28 16:56:49 +02:00
stefanprodan
59d3f55fb2 Release v0.7.0 2019-02-28 16:05:48 +02:00
Stefan Prodan
f34739f334 Merge pull request #62 from stefanprodan/retries
Add timeout and retries
2019-02-28 15:36:46 +02:00
stefanprodan
90c71ec18f Update roadmap with alternatives to Istio 2019-02-28 15:09:24 +02:00
stefanprodan
395234d7c8 Add promql custom check to readme 2019-02-28 00:33:47 +02:00
stefanprodan
e322ba0065 Add timeout and retries to router 2019-02-28 00:05:40 +02:00
stefanprodan
6db8b96f72 Add timeout and retries example to docs 2019-02-28 00:02:48 +02:00
stefanprodan
44d7e96e96 Add timeout and retries fields to Canary CRD 2019-02-28 00:02:01 +02:00
Stefan Prodan
1662479c8d Merge pull request #60 from stefanprodan/custom-metrics
Add support for custom metrics
2019-02-27 23:31:05 +02:00
stefanprodan
2e351fcf0d Add a custom metric example to docs 2019-02-27 16:37:42 +02:00
stefanprodan
5d81876d07 Make the metric interval optional
- set default value to 1m
2019-02-27 16:03:56 +02:00
stefanprodan
c81e6989ec Add e2e tests for custom metrics 2019-02-27 15:49:09 +02:00
stefanprodan
4d61a896c3 Add custom promql queries support 2019-02-27 15:48:31 +02:00
stefanprodan
d148933ab3 Add metric query field to Canary CRD 2019-02-27 15:46:09 +02:00
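A minimal sketch of a custom metric check using the new `query` field; the PromQL and threshold are illustrative, and `interval` defaults to 1m per the commit above.

```yaml
    metrics:
      - name: "404s percentage"
        threshold: 3       # halt the rollout if more than 3% of requests return 404
        interval: 1m       # optional; defaults to 1m
        query: |
          100 - sum(rate(istio_requests_total{destination_workload="podinfo", response_code!="404"}[1m]))
          / sum(rate(istio_requests_total{destination_workload="podinfo"}[1m])) * 100
```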
Stefan Prodan
04a56a3591 Merge pull request #57 from stefanprodan/release-0.6.0
Release v0.6.0
2019-02-26 01:45:10 +02:00
stefanprodan
4a354e74d4 Update roadmap 2019-02-25 23:45:54 +02:00
stefanprodan
1e3e6427d5 Add link to virtual service docs 2019-02-25 23:22:49 +02:00
stefanprodan
38826108c8 Add changelog for v0.6.0 2019-02-25 23:01:35 +02:00
stefanprodan
4c4752f907 Release v0.6.0 2019-02-25 20:10:33 +02:00
Stefan Prodan
94dcd6c94d Merge pull request #55 from stefanprodan/http-match
Add HTTP match and rewrite to Canary service spec
2019-02-25 20:04:12 +02:00
stefanprodan
eabef3db30 Router improvements
- change virtual service route to canary service
- keep the existing destination weights on virtual service updates
- set the match conditions and URI rewrite when changing the traffic weight
2019-02-25 03:14:45 +02:00
stefanprodan
6750f10ffa Add HTTP match and rewrite docs 2019-02-25 03:07:39 +02:00
stefanprodan
56cb888cbf Add HTTP match and rewrite to virtual service 2019-02-25 00:08:06 +02:00
stefanprodan
b3e7fb3417 Add HTTP match and rewrite to Canary service spec 2019-02-25 00:06:14 +02:00
stefanprodan
2c6e1baca2 Update istio client 2019-02-25 00:05:09 +02:00
Stefan Prodan
c8358929d1 Merge pull request #54 from stefanprodan/vsvc
Refactor virtual service sync
2019-02-24 21:18:01 +02:00
stefanprodan
1dc7677dfb Add tests for virtual service sync 2019-02-24 19:58:01 +02:00
stefanprodan
8e699a7543 Detect changes in virtual service
- ignore destination weight when comparing the two specs
2019-02-24 18:25:12 +02:00
Stefan Prodan
cbbabdfac0 Merge pull request #53 from stefanprodan/kind
Add CircleCI workflow for end-to-end testing with Kubernetes Kind
2019-02-24 12:44:20 +02:00
stefanprodan
9d92de234c Increase promotion e2e wait time to 10s 2019-02-24 11:55:37 +02:00
stefanprodan
ba65975fb5 Add e2e testing docs 2019-02-24 11:41:22 +02:00
stefanprodan
ef423b2078 Move Flagger e2e build to a dedicated job 2019-02-24 03:10:50 +02:00
stefanprodan
f451b4e36c Split e2e prerequisites 2019-02-24 02:52:25 +02:00
stefanprodan
0856e13ee6 Use kind kubeconfig 2019-02-24 02:35:36 +02:00
stefanprodan
87b9fa8ca7 Move cluster init to prerequisites 2019-02-24 02:24:23 +02:00
stefanprodan
5b43d3d314 Use local docker image for e2e testing 2019-02-24 02:11:32 +02:00
stefanprodan
ac4972dd8d Fix e2e paths 2019-02-24 02:09:45 +02:00
stefanprodan
8a8f68af5d Test CircleCI 2019-02-24 02:02:37 +02:00
stefanprodan
c669dc0c4b Run e2e tests with CircleCI 2019-02-24 01:58:18 +02:00
stefanprodan
863a5466cc Add e2e prerequisites 2019-02-24 01:58:03 +02:00
stefanprodan
e2347c84e3 Use absolute paths in e2e tests 2019-02-24 01:11:04 +02:00
stefanprodan
e0e673f565 Install e2e deps and run tests 2019-02-24 01:03:39 +02:00
stefanprodan
30cbf2a741 Add e2e tests
- create Kubernetes cluster with Kind
- install Istio and Prometheus
- install Flagger
- test canary init and promotion
2019-02-24 01:02:15 +02:00
stefanprodan
f58de3801c Add Istio install values for e2e testing 2019-02-24 01:00:03 +02:00
Stefan Prodan
7c6b88d4c1 Merge pull request #51 from carlossg/update-virtualservice
Update VirtualService when the Canary service spec changes
2019-02-20 09:07:27 +00:00
Carlos Sanchez
0c0ebaecd5 Compare only hosts and gateways 2019-02-19 19:54:38 +01:00
Carlos Sanchez
1925f99118 If generated VirtualService already exists update it
Only if spec has changed
2019-02-19 19:40:46 +01:00
Stefan Prodan
6f2a22a1cc Merge pull request #47 from stefanprodan/release-0.5.1
Release v0.5.1
2019-02-14 12:12:11 +01:00
stefanprodan
ee04082cd7 Release v0.5.1 2019-02-13 18:59:34 +02:00
Stefan Prodan
efd901ac3a Merge pull request #46 from stefanprodan/skip-canary
Add option to skip the canary analysis
2019-02-13 17:28:07 +01:00
stefanprodan
e565789ae8 Add link to Helm GitOps repo 2019-02-13 18:18:37 +02:00
stefanprodan
d3953004f6 Add docs links and trim down the readme 2019-02-13 16:39:48 +02:00
stefanprodan
df1d9e3011 Add skip analysis test 2019-02-13 15:56:40 +02:00
stefanprodan
631c55fa6e Document how to skip the canary analysis 2019-02-13 15:31:01 +02:00
stefanprodan
29cdd43288 Implement skip analysis
When skip analysis is enabled, Flagger checks if the canary deployment is healthy and promotes it without analysing it. If an analysis is underway, Flagger cancels it and runs the promotion.
2019-02-13 15:30:29 +02:00
stefanprodan
9b79af9fcd Add skipAnalysis field to Canary CRD 2019-02-13 15:27:45 +02:00
stefanprodan
2c9c1adb47 Fix docs summary 2019-02-13 13:05:57 +02:00
Stefan Prodan
5dfb5808c4 Merge pull request #44 from stefanprodan/helm-docs
Add Helm and Weave Flux GitOps article
2019-02-13 11:51:38 +01:00
stefanprodan
bb0175aebf Add canary rollback scenario 2019-02-13 12:48:26 +02:00
stefanprodan
adaf4c99c0 Add GitOps example to Helm guide 2019-02-13 02:14:40 +02:00
stefanprodan
bed6ed09d5 Add tutorial for canaries with Helm 2019-02-13 00:52:49 +02:00
stefanprodan
4ff67a85ce Add configmap demo to podinfo 2019-02-13 00:51:44 +02:00
stefanprodan
702f4fcd14 Add configmap demo to podinfo 2019-02-12 19:12:10 +02:00
Stefan Prodan
8a03ae153d Merge pull request #43 from stefanprodan/app-validation
Add validation for label selectors
2019-02-11 10:55:34 +01:00
stefanprodan
434c6149ab Package all charts 2019-02-11 11:47:46 +02:00
stefanprodan
97fc4a90ae Add validation for label selectors
- Reject deployment if the pod label selector doesn't match 'app: <DEPLOYMENT_NAME>'
2019-02-11 11:46:59 +02:00
Stefan Prodan
217ef06930 Merge pull request #41 from stefanprodan/demo
Add canary deployment demo Helm chart
2019-02-11 10:20:48 +01:00
stefanprodan
71057946e6 Fix podinfo helm tests 2019-02-10 17:38:33 +02:00
stefanprodan
a74ad52c72 Add dashboard screens 2019-02-10 12:07:44 +02:00
stefanprodan
12d26874f8 Add canary deployment demo chart based on podinfo 2019-02-10 11:48:51 +02:00
stefanprodan
27de9ce151 Session affinity is incompatible with destination weights
- consistent hashing does not apply across multiple subsets
2019-02-10 11:47:01 +02:00
stefanprodan
9e7cd5a8c5 Disable Stackdriver monitoring
- Istio add-on v1.0.3 stackdriver adapter is missing the zone label
2019-02-10 11:37:01 +02:00
stefanprodan
38cb487b64 Allow Grafana anonymous access 2019-02-09 23:45:42 +02:00
stefanprodan
05ca266c5e Add HPA add-on to GKE docs 2019-02-04 16:52:03 +02:00
Stefan Prodan
5cc26de645 Merge pull request #40 from stefanprodan/gke
Flagger install docs revamp
2019-02-02 12:43:15 +01:00
stefanprodan
2b9a195fa3 Add cert-manager diagram to docs 2019-02-02 13:36:51 +02:00
stefanprodan
4454749eec Add load tester install instructions to docs 2019-02-02 13:01:48 +02:00
stefanprodan
b435a03fab Document Istio requirements 2019-02-02 12:16:16 +02:00
stefanprodan
7c166e2b40 Restructure the install docs 2019-02-02 02:20:02 +02:00
stefanprodan
f7a7963dcf Add Flagger install guide for GKE 2019-02-02 02:19:25 +02:00
stefanprodan
9c77c0d69c Add GKE Istio diagram 2019-02-02 02:18:31 +02:00
stefanprodan
e8a9555346 Add GKE Istio Gateway and Prometheus definitions 2019-02-02 02:17:55 +02:00
Stefan Prodan
59751dd007 Merge pull request #39 from stefanprodan/changelog
Add changelog
2019-01-31 17:29:47 +01:00
stefanprodan
9c4d4d16b6 Add PR links to changelog 2019-01-31 12:17:52 +02:00
stefanprodan
0e3d1b3e8f Improve changelog formatting 2019-01-31 12:11:47 +02:00
stefanprodan
f119b78940 Add features and fixes to changelog 2019-01-31 12:08:32 +02:00
stefanprodan
456d914c35 Release v0.5.0 2019-01-30 14:54:03 +02:00
Stefan Prodan
737507b0fe Merge pull request #37 from stefanprodan/track-configs
Track changes in ConfigMaps and Secrets
2019-01-30 13:46:56 +01:00
stefanprodan
4bcf82d295 Copy annotations from canary to primary on promotion 2019-01-28 11:02:33 +02:00
stefanprodan
e9cd7afc8a Add configs track changes to docs 2019-01-28 10:50:30 +02:00
stefanprodan
0830abd51d Trigger a rolling update when configs change
- generate a unique pod annotation on promotion
2019-01-28 10:49:43 +02:00
stefanprodan
5b296e01b3 Detect changes in configs and trigger canary analysis
- restart analysis if a ConfigMap or Secret changes during rollout
- add tests for tracked changes
2019-01-26 12:36:27 +02:00
stefanprodan
3fd039afd1 Add tracked configs checksum to canary status 2019-01-26 12:33:15 +02:00
stefanprodan
5904348ba5 Refactor tests
- consolidate fake clients and mock objects
2019-01-26 00:39:33 +02:00
stefanprodan
1a98e93723 Add config and secret volumes tests 2019-01-25 23:47:50 +02:00
stefanprodan
c9685fbd13 Add ConfigMap env from source tests 2019-01-25 18:58:23 +02:00
stefanprodan
dc347e273d Add secrets from env tests 2019-01-25 18:27:05 +02:00
stefanprodan
8170916897 Add ConfigMap tracking tests 2019-01-25 18:03:36 +02:00
stefanprodan
71cd4e0cb7 Include ConfigMaps and Secrets in promotion
- create primary configs and secrets at bootstrap
- copy configs and secrets from canary to primary and update the pod spec on promotion
2019-01-25 16:03:51 +02:00
stefanprodan
0109788ccc Discover config maps and secrets
- scan target deployment volumes and containers for configmaps and secrets
2019-01-25 13:20:46 +02:00
stefanprodan
1649dea468 Add config maps and secrets manifests for testing 2019-01-25 11:19:34 +02:00
Stefan Prodan
b8a7ea8534 Merge pull request #35 from stefanprodan/gh-actions
Publish charts with GitHub Actions
2019-01-24 11:52:54 +01:00
stefanprodan
afe4d59d5a Move Helm repository to gh-pages branch 2019-01-24 12:47:36 +02:00
stefanprodan
0f2697df23 Publish charts with GitHub Actions 2019-01-24 12:38:45 +02:00
stefanprodan
05664fa648 Release v0.4.1 2019-01-24 12:17:37 +02:00
Stefan Prodan
3b2564f34b Merge pull request #33 from stefanprodan/loadtest
Add load testing service
2019-01-24 11:04:31 +01:00
stefanprodan
dd0cf2d588 Add load tester dockerfile to docs 2019-01-23 15:12:23 +02:00
stefanprodan
7c66f23c6a Add load tester Helm chart 2019-01-21 21:02:40 +02:00
stefanprodan
a9f034de1a Add load testing diagram 2019-01-21 18:02:44 +02:00
stefanprodan
6ad2dca57a Add load testing setup to docs 2019-01-21 17:29:04 +02:00
stefanprodan
e8353c110b Release load tester v0.0.2 2019-01-21 13:37:26 +02:00
stefanprodan
dbf26ddf53 Add load tester flag to log the cmd output 2019-01-21 13:36:08 +02:00
stefanprodan
acc72d207f Change container image tag format 2019-01-20 17:27:08 +02:00
stefanprodan
a784f83464 Add loadtester manifests 2019-01-20 15:59:41 +02:00
stefanprodan
07d8355363 Rename load testing service to flagger-loadtester 2019-01-20 14:28:45 +02:00
stefanprodan
f7a439274e Go format API types 2019-01-20 14:10:10 +02:00
stefanprodan
bd6d446cb8 Go format scheduler 2019-01-20 14:04:10 +02:00
stefanprodan
385d0e0549 Add load test runner service
- embed rakyll/hey in the runner container image
2019-01-20 14:00:14 +02:00
stefanprodan
02236374d8 Run the webhooks before the metrics checks
- log warning when no values are found for Istio metric due to lack of traffic
2019-01-20 13:54:44 +02:00
stefanprodan
c46fe55ad0 Release v0.4.0 2019-01-18 12:49:36 +02:00
Stefan Prodan
36a54fbf2a Merge pull request #31 from stefanprodan/reset
Restart analysis if revision changes during validation
2019-01-18 10:25:38 +01:00
stefanprodan
60f6b05397 Refactor scheduler tests 2019-01-18 11:14:27 +02:00
stefanprodan
6d8a7343b7 Add tests for analysis restart and canary promotion 2019-01-18 11:05:40 +02:00
stefanprodan
aff8b117d4 Restart validation if revision changes during analysis 2019-01-17 15:13:59 +02:00
Stefan Prodan
1b3c3b22b3 Merge pull request #29 from stefanprodan/status
Use Kubernetes 1.11 CRD status sub-resource
2019-01-17 13:06:28 +01:00
stefanprodan
1d31b5ed90 Add canary name and namespace to controller logs
- zap key-value: canary=name.namespace
2019-01-17 13:58:10 +02:00
stefanprodan
1ef310f00d Add traffic weight to canary status
- show current weight on kubectl get canaries and kubectl get all
2019-01-16 16:29:59 +02:00
stefanprodan
acdd2c46d5 Refactor Canary status
- add status phases (Initialized, Progressing, Succeeded, Failed)
- rename status revision to LastAppliedSpec
2019-01-16 15:06:38 +02:00
stefanprodan
9872e6bc16 Skip readiness checks if canary analysis finished 2019-01-16 13:18:53 +02:00
stefanprodan
10c2bdec86 Use deep copy when updating the virtual service routes 2019-01-16 13:13:07 +02:00
stefanprodan
4bf3b70048 Use CRD UpdateStatus for Canary status updates
- requires Kubernetes >=1.11
2019-01-16 01:00:39 +02:00
stefanprodan
ada446bbaa Drop compatibility with Kubernetes 1.10 2019-01-16 00:58:51 +02:00
stefanprodan
c4981ef4db Add status and additional printer columns to CRD 2019-01-16 00:57:46 +02:00
Stefan Prodan
d1b84cd31d Merge pull request #28 from stefanprodan/naming
Fix for when canary name is different to the target name
2019-01-15 23:32:41 +01:00
stefanprodan
9232c8647a Check if multiple canaries have the same target
- log an error on target duplication ref #13
2019-01-15 21:43:05 +02:00
stefanprodan
23e8c7d616 Fix for when canary name is different to the target name
- use the target name consistently at bootstrap
2019-01-15 21:18:46 +02:00
Stefan Prodan
42607fbd64 Merge pull request #26 from carlossg/service-name
Fix VirtualService routes
2019-01-15 19:38:38 +01:00
stefanprodan
28781a5f02 Use deep copy when updating the deployment object
- fix canary status update logs
2019-01-15 20:37:14 +02:00
stefanprodan
3589e11244 Bump dev version 2019-01-15 20:36:59 +02:00
Carlos Sanchez
5e880d3942 Wrong VirtualService routes
If the deployment name is different from the canary name,
the virtual service routes are created with the canary name
but the services are created with the deployment name.

Note that the canary name should match the deployment name
2019-01-15 18:44:50 +01:00
stefanprodan
f7e675144d Release v0.3.0 2019-01-11 20:10:41 +02:00
Stefan Prodan
3bff2c339b Merge pull request #20 from stefanprodan/scheduler
Add canary analysis schedule interval to CRD
2019-01-11 19:06:17 +01:00
Stefan Prodan
b035c1e7fb Merge pull request #25 from carlossg/virtualservice-naming
Tries to create VirtualService that already exists
2019-01-11 18:03:57 +01:00
Carlos Sanchez
7ae0d49e80 Tries to create VirtualService that already exists
When the canary name is different from the deployment name

VirtualService croc-hunter-jenkinsx.jx-staging create error virtualservices.networking.istio.io "croc-hunter-jenkinsx" already exists
2019-01-11 17:47:52 +01:00
Stefan Prodan
07f66e849d Merge branch 'master' into scheduler 2019-01-11 15:07:03 +01:00
Stefan Prodan
06c29051eb Merge pull request #24 from carlossg/log-fix
Fix bad error message
2019-01-11 15:05:37 +01:00
stefanprodan
83118faeb3 Fix autoscalerRef tests 2019-01-11 13:51:44 +02:00
stefanprodan
aa2c28c733 Make autoscalerRef optional
- use anyOf as a workaround for the openAPI object validation not accepting empty values
- fix #23
2019-01-11 13:42:32 +02:00
stefanprodan
10185407f6 Use httpbin.org for webhook testing 2019-01-11 13:12:53 +02:00
Carlos Sanchez
c1bde57c17 Fix bad error message
"controller/scheduler.go:217","msg":"deployment . update error Canary.flagger.app \"jx-staging-croc-hunter-jenkinsx\" is invalid: []: Invalid value: map[string]interface {}{\"metadata\":map[string]interface {}{\"name\":\"jx-staging-croc-hunter-jenkinsx\", \"namespace\":\"jx-staging\", \"selfLink\":\"/apis/flagger.app/v1alpha2/namespaces/jx-staging/canaries/jx-staging-croc-hunter-jenkinsx\", \"uid\":\"b248877e-1406-11e9-bf64-42010a8000c6\", \"resourceVersion\":\"30650895\", \"generation\":1, \"creationTimestamp\":\"2019-01-09T12:04:20Z\"}, \"spec\":map[string]interface {}{\"canaryAnalysis\":map[string]interface {}{\"threshold\":5, \"maxWeight\":50, \"stepWeight\":10, \"metrics\":[]interface {}{map[string]interface {}{\"name\":\"istio_requests_total\", \"interval\":\"1m\", \"threshold\":99}, map[string]interface {}{\"name\":\"istio_request_duration_seconds_bucket\", \"interval\":\"30s\"istio-system/flagger-b486d78c8-fkmbr[flagger]: {"level":"info","ts":"2019-01-09T12:14:05.158Z","caller":"controller/deployer.go:228","msg":"Scaling down jx-staging-croc-hunter-jenkinsx.jx-staging"}
2019-01-09 13:17:17 +01:00
stefanprodan
882b4b2d23 Update the control loop interval flag description 2019-01-08 13:15:10 +02:00
Stefan Prodan
cac585157f Merge pull request #21 from carlossg/patch-1
Qualify letsencrypt api version
2019-01-07 15:45:07 +02:00
Carlos Sanchez
cc2860a49f Qualify letsencrypt api version
Otherwise getting

    error: unable to recognize "./letsencrypt-issuer.yaml": no matches for kind "Issuer" in version "v1alpha2"
2019-01-07 14:38:53 +01:00
stefanprodan
bec96356ec Bump CRD version to v1alpha3
- new field canaryAnalysis.interval
2019-01-07 01:03:31 +02:00
stefanprodan
b5c648ea54 Bump version to 0.3.0-beta.1 2019-01-07 00:30:09 +02:00
stefanprodan
e6e3e500be Schedule canary analysis based on interval 2019-01-07 00:26:01 +02:00
stefanprodan
537e8fdaf7 Add canary analysis interval to CRD 2019-01-07 00:24:43 +02:00
stefanprodan
322c83bdad Add docs site link to chart 2019-01-06 18:18:47 +02:00
stefanprodan
41f0ba0247 Document the CRD target ref and control loop interval 2019-01-05 10:22:00 +02:00
stefanprodan
b67b49fde6 Change the default analysis interval to 1m 2019-01-05 01:05:27 +02:00
2410 changed files with 38477 additions and 773084 deletions

272
.circleci/config.yml Normal file

@@ -0,0 +1,272 @@
version: 2.1
jobs:
  build-binary:
    docker:
      - image: circleci/golang:1.13
    working_directory: ~/build
    steps:
      - checkout
      - restore_cache:
          keys:
            - go-mod-v3-{{ checksum "go.sum" }}
      - run:
          name: Run go mod download
          command: go mod download
      - run:
          name: Run go fmt
          command: make test-fmt
      - run:
          name: Build Flagger
          command: |
            CGO_ENABLED=0 GOOS=linux go build \
              -ldflags "-s -w -X github.com/weaveworks/flagger/pkg/version.REVISION=${CIRCLE_SHA1}" \
              -a -installsuffix cgo -o bin/flagger ./cmd/flagger/*.go
      - run:
          name: Build Flagger load tester
          command: |
            CGO_ENABLED=0 GOOS=linux go build \
              -a -installsuffix cgo -o bin/loadtester ./cmd/loadtester/*.go
      - run:
          name: Run unit tests
          command: |
            go test -race -coverprofile=coverage.txt -covermode=atomic $(go list ./pkg/...)
            bash <(curl -s https://codecov.io/bash)
      - run:
          name: Verify code gen
          command: make test-codegen
      - save_cache:
          key: go-mod-v3-{{ checksum "go.sum" }}
          paths:
            - "/go/pkg/mod/"
      - persist_to_workspace:
          root: bin
          paths:
            - flagger
            - loadtester
  push-container:
    docker:
      - image: circleci/golang:1.13
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/container-push.sh
  push-binary:
    docker:
      - image: circleci/golang:1.13
    working_directory: ~/build
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - restore_cache:
          keys:
            - go-mod-v3-{{ checksum "go.sum" }}
      - run: test/goreleaser.sh
  e2e-istio-testing:
    machine: true
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/e2e-kind.sh
      - run: test/e2e-istio.sh
      - run: test/e2e-tests.sh
  e2e-kubernetes-testing:
    machine: true
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/e2e-kind.sh v1.17.0
      - run: test/e2e-kubernetes.sh
      - run: test/e2e-kubernetes-tests.sh
  e2e-kubernetes-svc-testing:
    machine: true
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/e2e-kind.sh
      - run: test/e2e-kubernetes.sh
      - run: test/e2e-kubernetes-svc-tests.sh
  e2e-smi-istio-testing:
    machine: true
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/e2e-kind.sh
      - run: test/e2e-smi-istio.sh
      - run: test/e2e-tests.sh canary
  e2e-gloo-testing:
    machine: true
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/e2e-kind.sh
      - run: test/e2e-gloo.sh
      - run: test/e2e-gloo-tests.sh
  e2e-nginx-testing:
    machine: true
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/e2e-kind.sh
      - run: test/e2e-nginx.sh
      - run: test/e2e-nginx-tests.sh
      - run: test/e2e-nginx-cleanup.sh
      - run: test/e2e-nginx-custom-annotations.sh
      - run: test/e2e-nginx-tests.sh
  e2e-linkerd-testing:
    machine: true
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/e2e-kind.sh
      - run: test/e2e-linkerd.sh
      - run: test/e2e-linkerd-tests.sh
  e2e-contour-testing:
    machine: true
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/bin
      - run: test/container-build.sh
      - run: test/e2e-kind.sh
      - run: test/e2e-contour.sh
      - run: test/e2e-contour-tests.sh
  push-helm-charts:
    docker:
      - image: circleci/golang:1.13
    steps:
      - checkout
      - run:
          name: Install kubectl
          command: sudo curl -L https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && sudo chmod +x /usr/local/bin/kubectl
      - run:
          name: Install helm
          command: sudo curl -L https://storage.googleapis.com/kubernetes-helm/helm-v2.14.2-linux-amd64.tar.gz | tar xz && sudo mv linux-amd64/helm /bin/helm && sudo rm -rf linux-amd64
      - run:
          name: Initialize helm
          command: helm init --client-only --kubeconfig=$HOME/.kube/kubeconfig
      - run:
          name: Lint charts
          command: |
            helm lint ./charts/*
      - run:
          name: Package charts
          command: |
            mkdir $HOME/charts
            helm package ./charts/* --destination $HOME/charts
      - run:
          name: Publish charts
          command: |
            if echo "${CIRCLE_TAG}" | grep -Eq "[0-9]+(\.[0-9]+)*(-[a-z]+)?$"; then
              REPOSITORY="https://weaveworksbot:${GITHUB_TOKEN}@github.com/weaveworks/flagger.git"
              git config user.email weaveworksbot@users.noreply.github.com
              git config user.name weaveworksbot
              git remote set-url origin ${REPOSITORY}
              git checkout gh-pages
              mv -f $HOME/charts/*.tgz .
              helm repo index . --url https://flagger.app
              git add .
              git commit -m "Publish Helm charts v${CIRCLE_TAG}"
              git push origin gh-pages
            else
              echo "Not a release! Skip charts publish"
            fi
workflows:
  version: 2
  build-test-push:
    jobs:
      - build-binary:
          filters:
            branches:
              ignore:
                - gh-pages
      - e2e-istio-testing:
          requires:
            - build-binary
      - e2e-kubernetes-testing:
          requires:
            - build-binary
      - e2e-gloo-testing:
          requires:
            - build-binary
      - e2e-nginx-testing:
          requires:
            - build-binary
      - e2e-linkerd-testing:
          requires:
            - build-binary
      - e2e-contour-testing:
          requires:
            - build-binary
      - push-container:
          requires:
            - build-binary
            - e2e-istio-testing
            - e2e-kubernetes-testing
            - e2e-gloo-testing
            - e2e-nginx-testing
            - e2e-linkerd-testing
  release:
    jobs:
      - build-binary:
          filters:
            branches:
              ignore: /.*/
            tags:
              ignore: /^chart.*/
      - push-container:
          requires:
            - build-binary
          filters:
            branches:
              ignore: /.*/
            tags:
              ignore: /^chart.*/
      - push-binary:
          requires:
            - push-container
          filters:
            branches:
              ignore: /.*/
            tags:
              ignore: /^chart.*/
      - push-helm-charts:
          requires:
            - push-container
          filters:
            branches:
              ignore: /.*/
            tags:
              ignore: /^chart.*/


@@ -6,3 +6,6 @@ coverage:
threshold: 50
base: auto
patch: off
comment:
require_changes: yes

1
.github/CODEOWNERS vendored Normal file

@@ -0,0 +1 @@
* @stefanprodan

17
.github/_main.workflow vendored Normal file

@@ -0,0 +1,17 @@
workflow "Publish Helm charts" {
on = "push"
resolves = ["helm-push"]
}
action "helm-lint" {
uses = "stefanprodan/gh-actions/helm@master"
args = ["lint charts/*"]
}
action "helm-push" {
needs = ["helm-lint"]
uses = "stefanprodan/gh-actions/helm-gh-pages@master"
args = ["charts/*","https://flagger.app"]
secrets = ["GITHUB_TOKEN"]
}

6
.gitignore vendored

@@ -11,3 +11,9 @@
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
.DS_Store
bin/
_tmp/
artifacts/gcloud/
.idea

.goreleaser.yml

@@ -1,14 +1,18 @@
builds:
  - main: ./cmd/flagger
    binary: flagger
    ldflags: -s -w -X github.com/stefanprodan/flagger/pkg/version.REVISION={{.Commit}}
    ldflags: -s -w -X github.com/weaveworks/flagger/pkg/version.REVISION={{.Commit}}
    goos:
      - linux
    goarch:
      - amd64
    env:
      - CGO_ENABLED=0
archive:
  name_template: "{{ .Binary }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
  files:
    - none*
archives:
  - name_template: "{{ .Binary }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
    files:
      - none*
changelog:
  filters:
    exclude:
      - '^CircleCI'

.travis.yml

@@ -1,45 +0,0 @@
sudo: required
language: go
go:
  - 1.11.x
services:
  - docker
addons:
  apt:
    packages:
      - docker-ce
script:
  - set -e
  - make test-fmt
  - make test-codegen
  - go test -race -coverprofile=coverage.txt -covermode=atomic ./pkg/controller/
  - make build
after_success:
  - if [ -z "$DOCKER_USER" ]; then
      echo "PR build, skipping image push";
    else
      docker tag stefanprodan/flagger:latest quay.io/stefanprodan/flagger:${TRAVIS_COMMIT};
      echo $DOCKER_PASS | docker login -u=$DOCKER_USER --password-stdin quay.io;
      docker push quay.io/stefanprodan/flagger:${TRAVIS_COMMIT};
    fi
  - if [ -z "$TRAVIS_TAG" ]; then
      echo "Not a release, skipping image push";
    else
      docker tag stefanprodan/flagger:latest quay.io/stefanprodan/flagger:${TRAVIS_TAG};
      echo $DOCKER_PASS | docker login -u=$DOCKER_USER --password-stdin quay.io;
      docker push quay.io/stefanprodan/flagger:$TRAVIS_TAG;
    fi
  - bash <(curl -s https://codecov.io/bash)
  - rm coverage.txt
deploy:
  - provider: script
    skip_cleanup: true
    script: curl -sL http://git.io/goreleaser | bash
    on:
      tags: true

577
CHANGELOG.md Normal file

@@ -0,0 +1,577 @@
# Changelog
All notable changes to this project are documented in this file.
## 0.23.0 (2020-02-06)
Adds support for service name configuration and rollback webhook
#### Features
- Implement service name override [#416](https://github.com/weaveworks/flagger/pull/416)
- Add support for gated rollback [#420](https://github.com/weaveworks/flagger/pull/420)
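A minimal sketch of the `rollback` gate from PR #420 (the loadtester service URL is a placeholder; Flagger rolls the canary back whenever the gate endpoint reports open, i.e. returns HTTP 200):
```yaml
  canaryAnalysis:
    webhooks:
      - name: "rollback gate"
        type: rollback
        url: http://flagger-loadtester.test/rollback/check
```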
## 0.22.0 (2020-01-16)
Adds event dispatching through webhooks
#### Features
- Implement event dispatching webhook [#409](https://github.com/weaveworks/flagger/pull/409)
- Add general purpose event webhook [#401](https://github.com/weaveworks/flagger/pull/401)
#### Improvements
- Update Contour to v1.1 and add Linkerd header [#411](https://github.com/weaveworks/flagger/pull/411)
- Update Istio e2e to v1.4.3 [#407](https://github.com/weaveworks/flagger/pull/407)
- Update Kubernetes packages to 1.17 [#406](https://github.com/weaveworks/flagger/pull/406)
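A sketch of the `event` webhook type from PR #409 (the receiver URL is a placeholder; Flagger POSTs a JSON payload with the canary name, namespace, phase and event message to every such sink):
```yaml
  canaryAnalysis:
    webhooks:
      - name: "send to Slack relay"
        type: event
        url: http://event-receiver.notifications/slack
```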
## 0.21.0 (2020-01-06)
Adds support for Contour ingress controller
#### Features
- Add support for Contour ingress controller [#397](https://github.com/weaveworks/flagger/pull/397)
- Add support for Envoy managed by Crossover via SMI [#386](https://github.com/weaveworks/flagger/pull/386)
- Extend canary target ref to Kubernetes Service kind [#372](https://github.com/weaveworks/flagger/pull/372)
#### Improvements
- Add Prometheus operator PodMonitor template to Helm chart [#399](https://github.com/weaveworks/flagger/pull/399)
- Update e2e tests to Kubernetes v1.16 [#390](https://github.com/weaveworks/flagger/pull/390)
## 0.20.4 (2019-12-03)
Adds support for taking over a running deployment without disruption
#### Improvements
- Add initialization phase to Kubernetes router [#384](https://github.com/weaveworks/flagger/pull/384)
- Add canary controller interface and Kubernetes deployment kind implementation [#378](https://github.com/weaveworks/flagger/pull/378)
#### Fixes
- Skip primary check on skip analysis [#380](https://github.com/weaveworks/flagger/pull/380)
## 0.20.3 (2019-11-13)
Adds wrk to load tester tools and the App Mesh gateway chart to Flagger Helm repository
#### Improvements
- Add wrk to load tester tools [#368](https://github.com/weaveworks/flagger/pull/368)
- Add App Mesh gateway chart [#365](https://github.com/weaveworks/flagger/pull/365)
## 0.20.2 (2019-11-07)
Adds support for exposing canaries outside the cluster using App Mesh Gateway annotations
#### Improvements
- Expose canaries on public domains with App Mesh Gateway [#358](https://github.com/weaveworks/flagger/pull/358)
#### Fixes
- Use the specified replicas when scaling up the canary [#363](https://github.com/weaveworks/flagger/pull/363)
## 0.20.1 (2019-11-03)
Fixes promql execution and updates the load testing tools
#### Improvements
- Update load tester Helm tools [8349dd1](https://github.com/weaveworks/flagger/commit/8349dd1cda59a741c7bed9a0f67c0fc0fbff4635)
- e2e testing: update providers [#346](https://github.com/weaveworks/flagger/pull/346)
#### Fixes
- Fix Prometheus query escape [#353](https://github.com/weaveworks/flagger/pull/353)
- Updating hey release link [#350](https://github.com/weaveworks/flagger/pull/350)
## 0.20.0 (2019-10-21)
Adds support for [A/B Testing](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring) and retry policies when using App Mesh
#### Features
- Implement App Mesh A/B testing based on HTTP headers match conditions [#340](https://github.com/weaveworks/flagger/pull/340)
- Implement App Mesh HTTP retry policy [#338](https://github.com/weaveworks/flagger/pull/338)
- Implement metrics server override [#342](https://github.com/weaveworks/flagger/pull/342)
#### Improvements
- Add the app/name label to services and primary deployment [#333](https://github.com/weaveworks/flagger/pull/333)
- Allow setting Slack and Teams URLs with env vars [#334](https://github.com/weaveworks/flagger/pull/334)
- Refactor Gloo integration [#344](https://github.com/weaveworks/flagger/pull/344)
#### Fixes
- Generate unique names for App Mesh virtual routers and routes [#336](https://github.com/weaveworks/flagger/pull/336)
## 0.19.0 (2019-10-08)
Adds support for canary and blue/green [traffic mirroring](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring)
#### Features
- Add traffic mirroring for Istio service mesh [#311](https://github.com/weaveworks/flagger/pull/311)
- Implement canary service target port [#327](https://github.com/weaveworks/flagger/pull/327)
#### Improvements
- Allow gRPC protocol for App Mesh [#325](https://github.com/weaveworks/flagger/pull/325)
- Enforce blue/green when using Kubernetes networking [#326](https://github.com/weaveworks/flagger/pull/326)
#### Fixes
- Fix port discovery diff [#324](https://github.com/weaveworks/flagger/pull/324)
- Helm chart: Enable Prometheus scraping of Flagger metrics [2141d88](https://github.com/weaveworks/flagger/commit/2141d88ce1cc6be220dab34171c215a334ecde24)
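A sketch of the Istio mirroring analysis from PR #311 (values are illustrative; with `mirror` enabled Flagger copies live traffic to the canary for a fixed number of iterations instead of shifting weights):
```yaml
  canaryAnalysis:
    interval: 1m
    threshold: 5
    iterations: 10
    mirror: true
```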
## 0.18.6 (2019-10-03)
Adds support for App Mesh conformance tests and latency metric checks
#### Improvements
- Add support for acceptance testing when using App Mesh [#322](https://github.com/weaveworks/flagger/pull/322)
- Add Kustomize installer for App Mesh [#310](https://github.com/weaveworks/flagger/pull/310)
- Update Linkerd to v2.5.0 and Prometheus to v2.12.0 [#323](https://github.com/weaveworks/flagger/pull/323)
#### Fixes
- Fix slack/teams notification fields mapping [#318](https://github.com/weaveworks/flagger/pull/318)
## 0.18.5 (2019-10-02)
Adds support for [confirm-promotion](https://docs.flagger.app/how-it-works#webhooks) webhooks and blue/green deployments when using a service mesh
#### Features
- Implement confirm-promotion hook [#307](https://github.com/weaveworks/flagger/pull/307)
- Implement B/G for service mesh providers [#305](https://github.com/weaveworks/flagger/pull/305)
#### Improvements
- Canary promotion improvements to avoid dropping in-flight requests [#310](https://github.com/weaveworks/flagger/pull/310)
- Update end-to-end tests to Kubernetes v1.15.3 and Istio 1.3.0 [#306](https://github.com/weaveworks/flagger/pull/306)
#### Fixes
- Skip primary check for App Mesh [#315](https://github.com/weaveworks/flagger/pull/315)
## 0.18.4 (2019-09-08)
Adds support for NGINX custom annotations and Helm v3 acceptance testing
#### Features
- Add annotations prefix for NGINX ingresses [#293](https://github.com/weaveworks/flagger/pull/293)
- Add wide columns in CRD [#289](https://github.com/weaveworks/flagger/pull/289)
- loadtester: implement Helm v3 test command [#296](https://github.com/weaveworks/flagger/pull/296)
- loadtester: add gRPC health check to load tester image [#295](https://github.com/weaveworks/flagger/pull/295)
#### Fixes
- loadtester: fix tests error logging [#286](https://github.com/weaveworks/flagger/pull/286)
## 0.18.3 (2019-08-22)
Adds support for tillerless helm tests and protobuf health checking
#### Features
- loadtester: add support for tillerless helm [#280](https://github.com/weaveworks/flagger/pull/280)
- loadtester: add support for protobuf health checking [#280](https://github.com/weaveworks/flagger/pull/280)
#### Improvements
- Set HTTP listeners for AppMesh virtual routers [#272](https://github.com/weaveworks/flagger/pull/272)
#### Fixes
- Add missing fields to CRD validation spec [#271](https://github.com/weaveworks/flagger/pull/271)
- Fix App Mesh backends validation in CRD [#281](https://github.com/weaveworks/flagger/pull/281)
## 0.18.2 (2019-08-05)
Fixes multi-port support for Istio
#### Fixes
- Fix port discovery for multiple port services [#267](https://github.com/weaveworks/flagger/pull/267)
#### Improvements
- Update e2e testing to Istio v1.2.3, Gloo v0.18.8 and NGINX ingress chart v1.12.1 [#268](https://github.com/weaveworks/flagger/pull/268)
## 0.18.1 (2019-07-30)
Fixes Blue/Green style deployments for Kubernetes and Linkerd providers
#### Fixes
- Fix Blue/Green metrics provider and add e2e tests [#261](https://github.com/weaveworks/flagger/pull/261)
## 0.18.0 (2019-07-29)
Adds support for [manual gating](https://docs.flagger.app/how-it-works#manual-gating) and pausing/resuming an ongoing analysis
#### Features
- Implement confirm rollout gate, hook and API [#251](https://github.com/weaveworks/flagger/pull/251)
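A sketch of the manual gate from PR #251 (the loadtester URL is a placeholder; Flagger polls the `confirm-rollout` hook before starting the analysis and holds the rollout while it returns anything other than HTTP 200):
```yaml
  canaryAnalysis:
    webhooks:
      - name: "ask for confirmation"
        type: confirm-rollout
        url: http://flagger-loadtester.test/gate/check
```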
#### Improvements
- Refactor canary change detection and status [#240](https://github.com/weaveworks/flagger/pull/240)
- Implement finalising state [#257](https://github.com/weaveworks/flagger/pull/257)
- Add gRPC load testing tool [#248](https://github.com/weaveworks/flagger/pull/248)
#### Breaking changes
- Due to the status sub-resource changes in [#240](https://github.com/weaveworks/flagger/pull/240), when upgrading Flagger the canaries status phase will be reset to `Initialized`
- Upgrading Flagger with Helm will fail due to Helm's poor support of CRDs, see [workaround](https://github.com/weaveworks/flagger/issues/223)
## 0.17.0 (2019-07-08)
Adds support for Linkerd (SMI Traffic Split API), MS Teams notifications and HA mode with leader election
#### Features
- Add Linkerd support [#230](https://github.com/weaveworks/flagger/pull/230)
- Implement MS Teams notifications [#235](https://github.com/weaveworks/flagger/pull/235)
- Implement leader election [#236](https://github.com/weaveworks/flagger/pull/236)
#### Improvements
- Add [Kustomize](https://docs.flagger.app/install/flagger-install-on-kubernetes#install-flagger-with-kustomize) installer [#232](https://github.com/weaveworks/flagger/pull/232)
- Add Pod Security Policy to Helm chart [#234](https://github.com/weaveworks/flagger/pull/234)
## 0.16.0 (2019-06-23)
Adds support for running [Blue/Green deployments](https://docs.flagger.app/usage/blue-green) without a service mesh or ingress controller
#### Features
- Allow blue/green deployments without a service mesh provider [#211](https://github.com/weaveworks/flagger/pull/211)
- Add the service mesh provider to the canary spec [#217](https://github.com/weaveworks/flagger/pull/217)
- Allow multi-port services and implement port discovery [#207](https://github.com/weaveworks/flagger/pull/207)
#### Improvements
- Add [FAQ page](https://docs.flagger.app/faq) to docs website
- Switch to go modules in CI [#218](https://github.com/weaveworks/flagger/pull/218)
- Update e2e testing to Kubernetes Kind 0.3.0 and Istio 1.2.0
#### Fixes
- Update the primary HPA on canary promotion [#216](https://github.com/weaveworks/flagger/pull/216)
## 0.15.0 (2019-06-12)
Adds support for customising the Istio [traffic policy](https://docs.flagger.app/how-it-works#istio-routing) in the canary service spec
#### Features
- Generate Istio destination rules and allow traffic policy customisation [#200](https://github.com/weaveworks/flagger/pull/200)
#### Improvements
- Update Kubernetes packages to 1.14 and use go modules instead of dep [#202](https://github.com/weaveworks/flagger/pull/202)
## 0.14.1 (2019-06-05)
Adds support for running [acceptance/integration tests](https://docs.flagger.app/how-it-works#integration-testing) with Helm test or Bash Bats using pre-rollout hooks
#### Features
- Implement Helm and Bash pre-rollout hooks [#196](https://github.com/weaveworks/flagger/pull/196)
#### Fixes
- Fix promoting canary when max weight is not a multiple of step [#190](https://github.com/weaveworks/flagger/pull/190)
- Add ability to set Prometheus url with custom path without trailing '/' [#197](https://github.com/weaveworks/flagger/pull/197)
## 0.14.0 (2019-05-21)
Adds support for Service Mesh Interface and [Gloo](https://docs.flagger.app/usage/gloo-progressive-delivery) ingress controller
#### Features
- Add support for SMI (Istio weighted traffic) [#180](https://github.com/weaveworks/flagger/pull/180)
- Add support for Gloo ingress controller (weighted traffic) [#179](https://github.com/weaveworks/flagger/pull/179)
## 0.13.2 (2019-04-11)
Fixes for Jenkins X deployments (prevent the jx GC from removing the primary instance)
#### Fixes
- Do not copy labels from canary to primary deployment [#178](https://github.com/weaveworks/flagger/pull/178)
#### Improvements
- Add NGINX ingress controller e2e and unit tests [#176](https://github.com/weaveworks/flagger/pull/176)
## 0.13.1 (2019-04-09)
Fixes for custom metrics checks and NGINX Prometheus queries
#### Fixes
- Fix promql queries for custom checks and NGINX [#174](https://github.com/weaveworks/flagger/pull/174)
## 0.13.0 (2019-04-08)
Adds support for [NGINX](https://docs.flagger.app/usage/nginx-progressive-delivery) ingress controller
#### Features
- Add support for nginx ingress controller (weighted traffic and A/B testing) [#170](https://github.com/weaveworks/flagger/pull/170)
- Add Prometheus add-on to Flagger Helm chart for App Mesh and NGINX [79b3370](https://github.com/weaveworks/flagger/pull/170/commits/79b337089294a92961bc8446fd185b38c50a32df)
#### Fixes
- Fix duplicate hosts Istio error when using wildcards [#162](https://github.com/weaveworks/flagger/pull/162)
## 0.12.0 (2019-04-29)
Adds support for [SuperGloo](https://docs.flagger.app/install/flagger-install-with-supergloo)
#### Features
- Supergloo support for canary deployment (weighted traffic) [#151](https://github.com/weaveworks/flagger/pull/151)
## 0.11.1 (2019-04-18)
Move Flagger and the load tester container images to Docker Hub
#### Features
- Add Bash Automated Testing System support to Flagger tester for running acceptance tests as pre-rollout hooks
## 0.11.0 (2019-04-17)
Adds pre/post rollout [webhooks](https://docs.flagger.app/how-it-works#webhooks)
#### Features
- Add `pre-rollout` and `post-rollout` webhook types [#147](https://github.com/weaveworks/flagger/pull/147)
#### Improvements
- Unify App Mesh and Istio builtin metric checks [#146](https://github.com/weaveworks/flagger/pull/146)
- Make the pod selector label configurable [#148](https://github.com/weaveworks/flagger/pull/148)
#### Breaking changes
- Set default `mesh` Istio gateway only if no gateway is specified [#141](https://github.com/weaveworks/flagger/pull/141)
## 0.10.0 (2019-03-27)
Adds support for App Mesh
#### Features
- AWS App Mesh integration
[#107](https://github.com/weaveworks/flagger/pull/107)
[#123](https://github.com/weaveworks/flagger/pull/123)
#### Improvements
- Reconcile Kubernetes ClusterIP services [#122](https://github.com/weaveworks/flagger/pull/122)
#### Fixes
- Preserve pod labels on canary promotion [#105](https://github.com/weaveworks/flagger/pull/105)
- Fix canary status Prometheus metric [#121](https://github.com/weaveworks/flagger/pull/121)
## 0.9.0 (2019-03-11)
Allows A/B testing scenarios where instead of weighted routing, the traffic is split between the
primary and canary based on HTTP headers or cookies.
#### Features
- A/B testing - canary with session affinity [#88](https://github.com/weaveworks/flagger/pull/88)
#### Fixes
- Update the analysis interval when the custom resource changes [#91](https://github.com/weaveworks/flagger/pull/91)
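A sketch of a session-affinity analysis from PR #88 (the header and cookie values are illustrative; requests matching any condition are routed to the canary for a fixed number of iterations):
```yaml
  canaryAnalysis:
    interval: 1m
    threshold: 5
    iterations: 10
    match:
      - headers:
          x-canary:
            exact: "insider"
      - headers:
          cookie:
            regex: "^(.*?;)?(canary=always)(;.*)?$"
```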
## 0.8.0 (2019-03-06)
Adds support for CORS policy and HTTP request headers manipulation
#### Features
- CORS policy support [#83](https://github.com/weaveworks/flagger/pull/83)
- Allow headers to be appended to HTTP requests [#82](https://github.com/weaveworks/flagger/pull/82)
#### Improvements
- Refactor the routing management
[#72](https://github.com/weaveworks/flagger/pull/72)
[#80](https://github.com/weaveworks/flagger/pull/80)
- Fine-grained RBAC [#73](https://github.com/weaveworks/flagger/pull/73)
- Add option to limit Flagger to a single namespace [#78](https://github.com/weaveworks/flagger/pull/78)
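A service-spec sketch combining PRs #83 and #82 (values are illustrative; both blocks are copied into the Istio virtual service that Flagger generates):
```yaml
  service:
    port: 9898
    headers:
      request:
        add:
          x-envoy-upstream-rq-timeout-ms: "15000"
    corsPolicy:
      allowOrigin:
        - example.com
      allowMethods:
        - GET
        - POST
```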
## 0.7.0 (2019-02-28)
Adds support for custom metric checks, HTTP timeouts and HTTP retries
#### Features
- Allow custom promql queries in the canary analysis spec [#60](https://github.com/weaveworks/flagger/pull/60)
- Add HTTP timeout and retries to canary service spec [#62](https://github.com/weaveworks/flagger/pull/62)
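A sketch of both features (the workload labels and query are illustrative; a custom `query` replaces the builtin PromQL check, while `timeout` and `retries` end up on the Istio route):
```yaml
  service:
    port: 9898
    timeout: 5s
    retries:
      attempts: 3
      perTryTimeout: 1s
  canaryAnalysis:
    metrics:
      - name: "404s percentage"
        threshold: 3
        interval: 1m
        query: |
          100 - sum(
            rate(istio_requests_total{
              destination_workload="podinfo",
              response_code!="404"
            }[1m])
          ) / sum(
            rate(istio_requests_total{
              destination_workload="podinfo"
            }[1m])
          ) * 100
```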
## 0.6.0 (2019-02-25)
Allows for [HTTPMatchRequests](https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPMatchRequest)
and [HTTPRewrite](https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPRewrite)
to be customized in the service spec of the canary custom resource.
#### Features
- Add HTTP match conditions and URI rewrite to the canary service spec [#55](https://github.com/weaveworks/flagger/pull/55)
- Update virtual service when the canary service spec changes
[#54](https://github.com/weaveworks/flagger/pull/54)
[#51](https://github.com/weaveworks/flagger/pull/51)
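A service-spec sketch of the new fields (the URI prefix is illustrative; `match` and `rewrite` map onto the HTTP route of the generated virtual service):
```yaml
  service:
    port: 9898
    match:
      - uri:
          prefix: /podinfo
    rewrite:
      uri: /
```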
#### Improvements
- Run e2e testing on [Kubernetes Kind](https://github.com/kubernetes-sigs/kind) for canary promotion
[#53](https://github.com/weaveworks/flagger/pull/53)
## 0.5.1 (2019-02-14)
Allows skipping the analysis phase to ship changes directly to production
#### Features
- Add option to skip the canary analysis [#46](https://github.com/weaveworks/flagger/pull/46)
#### Fixes
- Reject deployment if the pod label selector doesn't match `app: <DEPLOYMENT_NAME>` [#43](https://github.com/weaveworks/flagger/pull/43)
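A minimal sketch of the new field (the target name is a placeholder; with `skipAnalysis` enabled Flagger promotes a healthy canary straight away, without running checks):
```yaml
spec:
  skipAnalysis: true
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
```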
## 0.5.0 (2019-01-30)
Track changes in ConfigMaps and Secrets [#37](https://github.com/weaveworks/flagger/pull/37)
#### Features
- Promote configmaps and secrets changes from canary to primary
- Detect changes in configmaps and/or secrets and (re)start canary analysis
- Add configs checksum to Canary CRD status
- Create primary configmaps and secrets at bootstrap
- Scan canary volumes and containers for configmaps and secrets
#### Fixes
- Copy deployment labels from canary to primary at bootstrap and promotion
## 0.4.1 (2019-01-24)
Load testing webhook [#35](https://github.com/weaveworks/flagger/pull/35)
#### Features
- Add the load tester chart to Flagger Helm repository
- Implement a load test runner based on [rakyll/hey](https://github.com/rakyll/hey)
- Log warning when no values are found for Istio metric due to lack of traffic
#### Fixes
- Run webhooks before the metrics checks to avoid failures when using a load tester
## 0.4.0 (2019-01-18)
Restart canary analysis if revision changes [#31](https://github.com/weaveworks/flagger/pull/31)
#### Breaking changes
- Drop support for Kubernetes 1.10
#### Features
- Detect changes during canary analysis and reset advancement
- Add status and additional printer columns to CRD
- Add canary name and namespace to controller structured logs
#### Fixes
- Allow canary name to be different to the target name
- Check if multiple canaries have the same target and log error
- Use deep copy when updating Kubernetes objects
- Skip readiness checks if canary analysis has finished
## 0.3.0 (2019-01-11)
Configurable canary analysis duration [#20](https://github.com/weaveworks/flagger/pull/20)
#### Breaking changes
- Helm chart: flag `controlLoopInterval` has been removed
#### Features
- CRD: canaries.flagger.app v1alpha3
- Schedule canary analysis independently based on `canaryAnalysis.interval`
- Add analysis interval to Canary CRD (defaults to one minute)
- Make autoscaler (HPA) reference optional
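A sketch of the v1alpha3 analysis spec (threshold and weights are illustrative; `interval` sets how often Flagger advances the canary):
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
spec:
  canaryAnalysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
```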
## 0.2.0 (2019-01-04)
Webhooks [#18](https://github.com/weaveworks/flagger/pull/18)
#### Features
- CRD: canaries.flagger.app v1alpha2
- Implement canary external checks based on webhooks HTTP POST calls
- Add webhooks to Canary CRD
- Move docs to gitbook [docs.flagger.app](https://docs.flagger.app)
## 0.1.2 (2018-12-06)
Improve Slack notifications [#14](https://github.com/weaveworks/flagger/pull/14)
#### Features
- Add canary analysis metadata to init and start Slack messages
- Add rollback reason to failed canary Slack messages
## 0.1.1 (2018-11-28)
Canary progress deadline [#10](https://github.com/weaveworks/flagger/pull/10)
#### Features
- Rollback canary based on the deployment progress deadline check
- Add progress deadline to Canary CRD (defaults to 10 minutes)
## 0.1.0 (2018-11-25)
First stable release
#### Features
- CRD: canaries.flagger.app v1alpha1
- Notifications: post canary events to Slack
- Instrumentation: expose Prometheus metrics for canary status and traffic weight percentage
- Autoscaling: add HPA reference to CRD and create primary HPA at bootstrap
- Bootstrap: create primary deployment, ClusterIP services and Istio virtual service based on CRD spec
## 0.0.1 (2018-10-07)
Initial semver release
#### Features
- Implement canary rollback based on failed checks threshold
- Scale up the deployment when canary revision changes
- Add OpenAPI v3 schema validation to Canary CRD
- Use CRD status for canary state persistence
- Add Helm charts for Flagger and Grafana
- Add canary analysis Grafana dashboard

Dockerfile

@@ -1,17 +1,4 @@
FROM golang:1.11
RUN mkdir -p /go/src/github.com/stefanprodan/flagger/
WORKDIR /go/src/github.com/stefanprodan/flagger
COPY . .
RUN GIT_COMMIT=$(git rev-list -1 HEAD) && \
CGO_ENABLED=0 GOOS=linux go build -ldflags "-s -w \
-X github.com/stefanprodan/flagger/pkg/version.REVISION=${GIT_COMMIT}" \
-a -installsuffix cgo -o flagger ./cmd/flagger/*
FROM alpine:3.8
FROM alpine:3.10
RUN addgroup -S flagger \
&& adduser -S -g flagger flagger \
@@ -19,7 +6,7 @@ RUN addgroup -S flagger \
WORKDIR /home/flagger
COPY --from=0 /go/src/github.com/stefanprodan/flagger/flagger .
COPY /bin/flagger .
RUN chown -R flagger:flagger ./

64
Dockerfile.loadtester Normal file

@@ -0,0 +1,64 @@
FROM alpine:3.10.3 as build
RUN apk --no-cache add alpine-sdk perl curl
RUN curl -sSLo hey "https://storage.googleapis.com/hey-release/hey_linux_amd64" && \
chmod +x hey && mv hey /usr/local/bin/hey
RUN HELM2_VERSION=2.16.1 && \
curl -sSL "https://get.helm.sh/helm-v${HELM2_VERSION}-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helm && \
chmod +x linux-amd64/tiller && mv linux-amd64/tiller /usr/local/bin/tiller
RUN HELM3_VERSION=3.0.1 && \
curl -sSL "https://get.helm.sh/helm-v${HELM3_VERSION}-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helmv3
RUN GRPC_HEALTH_PROBE_VERSION=v0.3.1 && \
wget -qO /usr/local/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && \
chmod +x /usr/local/bin/grpc_health_probe
RUN GHZ_VERSION=0.39.0 && \
curl -sSL "https://github.com/bojand/ghz/releases/download/v${GHZ_VERSION}/ghz_${GHZ_VERSION}_Linux_x86_64.tar.gz" | tar xz -C /tmp && \
mv /tmp/ghz /usr/local/bin && chmod +x /usr/local/bin/ghz
RUN HELM_TILLER_VERSION=0.9.3 && \
curl -sSL "https://github.com/rimusz/helm-tiller/archive/v${HELM_TILLER_VERSION}.tar.gz" | tar xz -C /tmp && \
mv /tmp/helm-tiller-${HELM_TILLER_VERSION} /tmp/helm-tiller
RUN WRK_VERSION=4.0.2 && \
cd /tmp && git clone -b ${WRK_VERSION} https://github.com/wg/wrk
RUN cd /tmp/wrk && make
FROM bats/bats:v1.1.0
RUN addgroup -S app && \
adduser -S -g app app && \
apk --no-cache add ca-certificates curl jq libgcc
WORKDIR /home/app
COPY --from=build /usr/local/bin/hey /usr/local/bin/
COPY --from=build /tmp/wrk/wrk /usr/local/bin/
COPY --from=build /usr/local/bin/helm /usr/local/bin/
COPY --from=build /usr/local/bin/tiller /usr/local/bin/
COPY --from=build /usr/local/bin/ghz /usr/local/bin/
COPY --from=build /usr/local/bin/helmv3 /usr/local/bin/
COPY --from=build /usr/local/bin/grpc_health_probe /usr/local/bin/
COPY --from=build /tmp/helm-tiller /tmp/helm-tiller
ADD https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/health/v1/health.proto /tmp/ghz/health.proto
COPY ./bin/loadtester .
RUN chown -R app:app ./
USER app
# test load generator tools
RUN hey -n 1 -c 1 https://flagger.app > /dev/null && echo $? | grep 0
RUN wrk -d 1s -c 1 -t 1 https://flagger.app > /dev/null && echo $? | grep 0
# install Helm v2 plugins
RUN helm init --client-only && helm plugin install /tmp/helm-tiller
ENTRYPOINT ["./loadtester"]

737
Gopkg.lock generated

@@ -1,737 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
digest = "1:5c3894b2aa4d6bead0ceeea6831b305d62879c871780e7b76296ded1b004bc57"
name = "cloud.google.com/go"
packages = ["compute/metadata"]
pruneopts = "NUT"
revision = "97efc2c9ffd9fe8ef47f7f3203dc60bbca547374"
version = "v0.28.0"
[[projects]]
branch = "master"
digest = "1:707ebe952a8b3d00b343c01536c79c73771d100f63ec6babeaed5c79e2b8a8dd"
name = "github.com/beorn7/perks"
packages = ["quantile"]
pruneopts = "NUT"
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
[[projects]]
digest = "1:ffe9824d294da03b391f44e1ae8281281b4afc1bdaa9588c9097785e3af10cec"
name = "github.com/davecgh/go-spew"
packages = ["spew"]
pruneopts = "NUT"
revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
version = "v1.1.1"
[[projects]]
digest = "1:81466b4218bf6adddac2572a30ac733a9255919bc2f470b4827a317bd4ee1756"
name = "github.com/ghodss/yaml"
packages = ["."]
pruneopts = "NUT"
revision = "0ca9ea5df5451ffdf184b4428c902747c2c11cd7"
version = "v1.0.0"
[[projects]]
digest = "1:8679b8a64f3613e9749c5640c3535c83399b8e69f67ce54d91dc73f6d77373af"
name = "github.com/gogo/protobuf"
packages = [
"proto",
"sortkeys",
]
pruneopts = "NUT"
revision = "636bf0302bc95575d69441b25a2603156ffdddf1"
version = "v1.1.1"
[[projects]]
branch = "master"
digest = "1:e0f096f9332ad5f84341de82db69fd098864b17c668333a1fbbffd1b846dcc2b"
name = "github.com/golang/glog"
packages = ["."]
pruneopts = "NUT"
revision = "2cc4b790554d1a0c48fcc3aeb891e3de70cf8de0"
source = "github.com/istio/glog"
[[projects]]
branch = "master"
digest = "1:3fb07f8e222402962fa190eb060608b34eddfb64562a18e2167df2de0ece85d8"
name = "github.com/golang/groupcache"
packages = ["lru"]
pruneopts = "NUT"
revision = "24b0969c4cb722950103eed87108c8d291a8df00"
[[projects]]
digest = "1:63ccdfbd20f7ccd2399d0647a7d100b122f79c13bb83da9660b1598396fd9f62"
name = "github.com/golang/protobuf"
packages = [
"proto",
"ptypes",
"ptypes/any",
"ptypes/duration",
"ptypes/timestamp",
]
pruneopts = "NUT"
revision = "aa810b61a9c79d51363740d207bb46cf8e620ed5"
version = "v1.2.0"
[[projects]]
branch = "master"
digest = "1:05f95ffdfcf651bdb0f05b40b69e7f5663047f8da75c72d58728acb59b5cc107"
name = "github.com/google/btree"
packages = ["."]
pruneopts = "NUT"
revision = "4030bb1f1f0c35b30ca7009e9ebd06849dd45306"
[[projects]]
digest = "1:d2754cafcab0d22c13541618a8029a70a8959eb3525ff201fe971637e2274cd0"
name = "github.com/google/go-cmp"
packages = [
"cmp",
"cmp/cmpopts",
"cmp/internal/diff",
"cmp/internal/function",
"cmp/internal/value",
]
pruneopts = "NUT"
revision = "3af367b6b30c263d47e8895973edcca9a49cf029"
version = "v0.2.0"
[[projects]]
branch = "master"
digest = "1:52c5834e2bebac9030c97cc0798ac11c3aa8a39f098aeb419f142533da6cd3cc"
name = "github.com/google/gofuzz"
packages = ["."]
pruneopts = "NUT"
revision = "24818f796faf91cd76ec7bddd72458fbced7a6c1"
[[projects]]
digest = "1:06a7dadb7b760767341ffb6c8d377238d68a1226f2b21b5d497d2e3f6ecf6b4e"
name = "github.com/googleapis/gnostic"
packages = [
"OpenAPIv2",
"compiler",
"extensions",
]
pruneopts = "NUT"
revision = "7c663266750e7d82587642f65e60bc4083f1f84e"
version = "v0.2.0"
[[projects]]
branch = "master"
digest = "1:7fdf3223c7372d1ced0b98bf53457c5e89d89aecbad9a77ba9fcc6e01f9e5621"
name = "github.com/gregjones/httpcache"
packages = [
".",
"diskcache",
]
pruneopts = "NUT"
revision = "9cad4c3443a7200dd6400aef47183728de563a38"
[[projects]]
digest = "1:b42cde0e1f3c816dd57f57f7bbcf05ca40263ad96f168714c130c611fc0856a6"
name = "github.com/hashicorp/golang-lru"
packages = [
".",
"simplelru",
]
pruneopts = "NUT"
revision = "20f1fb78b0740ba8c3cb143a61e86ba5c8669768"
version = "v0.5.0"
[[projects]]
digest = "1:9a52adf44086cead3b384e5d0dbf7a1c1cce65e67552ee3383a8561c42a18cd3"
name = "github.com/imdario/mergo"
packages = ["."]
pruneopts = "NUT"
revision = "9f23e2d6bd2a77f959b2bf6acdbefd708a83a4a4"
version = "v0.3.6"
[[projects]]
branch = "master"
digest = "1:e0f096f9332ad5f84341de82db69fd098864b17c668333a1fbbffd1b846dcc2b"
name = "github.com/istio/glog"
packages = ["."]
pruneopts = "NUT"
revision = "2cc4b790554d1a0c48fcc3aeb891e3de70cf8de0"
[[projects]]
digest = "1:0243cffa4a3410f161ee613dfdd903a636d07e838a42d341da95d81f42cd1d41"
name = "github.com/json-iterator/go"
packages = ["."]
pruneopts = "NUT"
revision = "f2b4162afba35581b6d4a50d3b8f34e33c144682"
[[projects]]
digest = "1:03a74b0d86021c8269b52b7c908eb9bb3852ff590b363dad0a807cf58cec2f89"
name = "github.com/knative/pkg"
packages = [
"apis",
"apis/duck",
"apis/duck/v1alpha1",
"apis/istio",
"apis/istio/authentication",
"apis/istio/authentication/v1alpha1",
"apis/istio/common/v1alpha1",
"apis/istio/v1alpha3",
"client/clientset/versioned",
"client/clientset/versioned/fake",
"client/clientset/versioned/scheme",
"client/clientset/versioned/typed/authentication/v1alpha1",
"client/clientset/versioned/typed/authentication/v1alpha1/fake",
"client/clientset/versioned/typed/duck/v1alpha1",
"client/clientset/versioned/typed/duck/v1alpha1/fake",
"client/clientset/versioned/typed/istio/v1alpha3",
"client/clientset/versioned/typed/istio/v1alpha3/fake",
"signals",
]
pruneopts = "NUT"
revision = "c15d7c8f2220a7578b33504df6edefa948c845ae"
[[projects]]
digest = "1:5985ef4caf91ece5d54817c11ea25f182697534f8ae6521eadcd628c142ac4b6"
name = "github.com/matttproud/golang_protobuf_extensions"
packages = ["pbutil"]
pruneopts = "NUT"
revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
version = "v1.0.1"
[[projects]]
digest = "1:2f42fa12d6911c7b7659738758631bec870b7e9b4c6be5444f963cdcfccc191f"
name = "github.com/modern-go/concurrent"
packages = ["."]
pruneopts = "NUT"
revision = "bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94"
version = "1.0.3"
[[projects]]
digest = "1:c6aca19413b13dc59c220ad7430329e2ec454cc310bc6d8de2c7e2b93c18a0f6"
name = "github.com/modern-go/reflect2"
packages = ["."]
pruneopts = "NUT"
revision = "4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd"
version = "1.0.1"
[[projects]]
branch = "master"
digest = "1:3bf17a6e6eaa6ad24152148a631d18662f7212e21637c2699bff3369b7f00fa2"
name = "github.com/petar/GoLLRB"
packages = ["llrb"]
pruneopts = "NUT"
revision = "53be0d36a84c2a886ca057d34b6aa4468df9ccb4"
[[projects]]
digest = "1:6c6d91dc326ed6778783cff869c49fb2f61303cdd2ebbcf90abe53505793f3b6"
name = "github.com/peterbourgon/diskv"
packages = ["."]
pruneopts = "NUT"
revision = "5f041e8faa004a95c88a202771f4cc3e991971e6"
version = "v2.0.1"
[[projects]]
digest = "1:03bca087b180bf24c4f9060775f137775550a0834e18f0bca0520a868679dbd7"
name = "github.com/prometheus/client_golang"
packages = [
"prometheus",
"prometheus/promhttp",
]
pruneopts = "NUT"
revision = "c5b7fccd204277076155f10851dad72b76a49317"
version = "v0.8.0"
[[projects]]
branch = "master"
digest = "1:2d5cd61daa5565187e1d96bae64dbbc6080dacf741448e9629c64fd93203b0d4"
name = "github.com/prometheus/client_model"
packages = ["go"]
pruneopts = "NUT"
revision = "5c3871d89910bfb32f5fcab2aa4b9ec68e65a99f"
[[projects]]
branch = "master"
digest = "1:fad5a35eea6a1a33d6c8f949fbc146f24275ca809ece854248187683f52cc30b"
name = "github.com/prometheus/common"
packages = [
"expfmt",
"internal/bitbucket.org/ww/goautoneg",
"model",
]
pruneopts = "NUT"
revision = "c7de2306084e37d54b8be01f3541a8464345e9a5"
[[projects]]
branch = "master"
digest = "1:26a2f5e891cc4d2321f18a0caa84c8e788663c17bed6a487f3cbe2c4295292d0"
name = "github.com/prometheus/procfs"
packages = [
".",
"internal/util",
"nfs",
"xfs",
]
pruneopts = "NUT"
revision = "418d78d0b9a7b7de3a6bbc8a23def624cc977bb2"
[[projects]]
digest = "1:e3707aeaccd2adc89eba6c062fec72116fe1fc1ba71097da85b4d8ae1668a675"
name = "github.com/spf13/pflag"
packages = ["."]
pruneopts = "NUT"
revision = "9a97c102cda95a86cec2345a6f09f55a939babf5"
version = "v1.0.2"
[[projects]]
digest = "1:22f696cee54865fb8e9ff91df7b633f6b8f22037a8015253c6b6a71ca82219c7"
name = "go.uber.org/atomic"
packages = ["."]
pruneopts = "NUT"
revision = "1ea20fb1cbb1cc08cbd0d913a96dead89aa18289"
version = "v1.3.2"
[[projects]]
digest = "1:58ca93bdf81bac106ded02226b5395a0595d5346cdc4caa8d9c1f3a5f8f9976e"
name = "go.uber.org/multierr"
packages = ["."]
pruneopts = "NUT"
revision = "3c4937480c32f4c13a875a1829af76c98ca3d40a"
version = "v1.1.0"
[[projects]]
digest = "1:85674ac609b704fd4e9f463553b6ffc3a3527a993ae0ba550eb56beaabdfe094"
name = "go.uber.org/zap"
packages = [
".",
"buffer",
"internal/bufferpool",
"internal/color",
"internal/exit",
"zapcore",
]
pruneopts = "NUT"
revision = "ff33455a0e382e8a81d14dd7c922020b6b5e7982"
version = "v1.9.1"
[[projects]]
branch = "master"
digest = "1:3f3a05ae0b95893d90b9b3b5afdb79a9b3d96e4e36e099d841ae602e4aca0da8"
name = "golang.org/x/crypto"
packages = ["ssh/terminal"]
pruneopts = "NUT"
revision = "0e37d006457bf46f9e6692014ba72ef82c33022c"
[[projects]]
branch = "master"
digest = "1:1400b8e87c2c9bd486ea1a13155f59f8f02d385761206df05c0b7db007a53b2c"
name = "golang.org/x/net"
packages = [
"context",
"context/ctxhttp",
"http/httpguts",
"http2",
"http2/hpack",
"idna",
]
pruneopts = "NUT"
revision = "26e67e76b6c3f6ce91f7c52def5af501b4e0f3a2"
[[projects]]
branch = "master"
digest = "1:bc2b221d465bb28ce46e8d472ecdc424b9a9b541bd61d8c311c5f29c8dd75b1b"
name = "golang.org/x/oauth2"
packages = [
".",
"google",
"internal",
"jws",
"jwt",
]
pruneopts = "NUT"
revision = "d2e6202438beef2727060aa7cabdd924d92ebfd9"
[[projects]]
branch = "master"
digest = "1:44261e94b6095310a2df925fd68632d399a00eb153b52566a7b3697f7c70638c"
name = "golang.org/x/sys"
packages = [
"unix",
"windows",
]
pruneopts = "NUT"
revision = "1561086e645b2809fb9f8a1e2a38160bf8d53bf4"
[[projects]]
digest = "1:e7071ed636b5422cc51c0e3a6cebc229d6c9fffc528814b519a980641422d619"
name = "golang.org/x/text"
packages = [
"collate",
"collate/build",
"internal/colltab",
"internal/gen",
"internal/tag",
"internal/triegen",
"internal/ucd",
"language",
"secure/bidirule",
"transform",
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable",
]
pruneopts = "NUT"
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
branch = "master"
digest = "1:c9e7a4b4d47c0ed205d257648b0e5b0440880cb728506e318f8ac7cd36270bc4"
name = "golang.org/x/time"
packages = ["rate"]
pruneopts = "NUT"
revision = "fbb02b2291d28baffd63558aa44b4b56f178d650"
[[projects]]
branch = "master"
digest = "1:45751dc3302c90ea55913674261b2d74286b05cdd8e3ae9606e02e4e77f4353f"
name = "golang.org/x/tools"
packages = [
"go/ast/astutil",
"imports",
"internal/fastwalk",
]
pruneopts = "NUT"
revision = "90fa682c2a6e6a37b3a1364ce2fe1d5e41af9d6d"
[[projects]]
digest = "1:e2da54c7866453ac5831c61c7ec5d887f39328cac088c806553303bff4048e6f"
name = "google.golang.org/appengine"
packages = [
".",
"internal",
"internal/app_identity",
"internal/base",
"internal/datastore",
"internal/log",
"internal/modules",
"internal/remote_api",
"internal/urlfetch",
"urlfetch",
]
pruneopts = "NUT"
revision = "ae0ab99deb4dc413a2b4bd6c8bdd0eb67f1e4d06"
version = "v1.2.0"
[[projects]]
digest = "1:2d1fbdc6777e5408cabeb02bf336305e724b925ff4546ded0fa8715a7267922a"
name = "gopkg.in/inf.v0"
packages = ["."]
pruneopts = "NUT"
revision = "d2d2541c53f18d2a059457998ce2876cc8e67cbf"
version = "v0.9.1"
[[projects]]
digest = "1:7c95b35057a0ff2e19f707173cc1a947fa43a6eb5c4d300d196ece0334046082"
name = "gopkg.in/yaml.v2"
packages = ["."]
pruneopts = "NUT"
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
version = "v2.2.1"
[[projects]]
digest = "1:8960ef753a87391086a307122d23cd5007cee93c28189437e4f1b6ed72bffc50"
name = "k8s.io/api"
packages = [
"admissionregistration/v1alpha1",
"admissionregistration/v1beta1",
"apps/v1",
"apps/v1beta1",
"apps/v1beta2",
"authentication/v1",
"authentication/v1beta1",
"authorization/v1",
"authorization/v1beta1",
"autoscaling/v1",
"autoscaling/v2beta1",
"batch/v1",
"batch/v1beta1",
"batch/v2alpha1",
"certificates/v1beta1",
"core/v1",
"events/v1beta1",
"extensions/v1beta1",
"networking/v1",
"policy/v1beta1",
"rbac/v1",
"rbac/v1alpha1",
"rbac/v1beta1",
"scheduling/v1alpha1",
"scheduling/v1beta1",
"settings/v1alpha1",
"storage/v1",
"storage/v1alpha1",
"storage/v1beta1",
]
pruneopts = "NUT"
revision = "072894a440bdee3a891dea811fe42902311cd2a3"
version = "kubernetes-1.11.0"
[[projects]]
digest = "1:4b0d523ee389c762d02febbcfa0734c4530ebe87abe925db18f05422adcb33e8"
name = "k8s.io/apimachinery"
packages = [
"pkg/api/equality",
"pkg/api/errors",
"pkg/api/meta",
"pkg/api/resource",
"pkg/apis/meta/internalversion",
"pkg/apis/meta/v1",
"pkg/apis/meta/v1/unstructured",
"pkg/apis/meta/v1beta1",
"pkg/conversion",
"pkg/conversion/queryparams",
"pkg/fields",
"pkg/labels",
"pkg/runtime",
"pkg/runtime/schema",
"pkg/runtime/serializer",
"pkg/runtime/serializer/json",
"pkg/runtime/serializer/protobuf",
"pkg/runtime/serializer/recognizer",
"pkg/runtime/serializer/streaming",
"pkg/runtime/serializer/versioning",
"pkg/selection",
"pkg/types",
"pkg/util/cache",
"pkg/util/clock",
"pkg/util/diff",
"pkg/util/errors",
"pkg/util/framer",
"pkg/util/intstr",
"pkg/util/json",
"pkg/util/mergepatch",
"pkg/util/net",
"pkg/util/runtime",
"pkg/util/sets",
"pkg/util/sets/types",
"pkg/util/strategicpatch",
"pkg/util/validation",
"pkg/util/validation/field",
"pkg/util/wait",
"pkg/util/yaml",
"pkg/version",
"pkg/watch",
"third_party/forked/golang/json",
"third_party/forked/golang/reflect",
]
pruneopts = "NUT"
revision = "103fd098999dc9c0c88536f5c9ad2e5da39373ae"
version = "kubernetes-1.11.0"
[[projects]]
digest = "1:c7d6cf5e28c377ab4000b94b6b9ff562c4b13e7e8b948ad943f133c5104be011"
name = "k8s.io/client-go"
packages = [
"discovery",
"discovery/fake",
"kubernetes",
"kubernetes/fake",
"kubernetes/scheme",
"kubernetes/typed/admissionregistration/v1alpha1",
"kubernetes/typed/admissionregistration/v1alpha1/fake",
"kubernetes/typed/admissionregistration/v1beta1",
"kubernetes/typed/admissionregistration/v1beta1/fake",
"kubernetes/typed/apps/v1",
"kubernetes/typed/apps/v1/fake",
"kubernetes/typed/apps/v1beta1",
"kubernetes/typed/apps/v1beta1/fake",
"kubernetes/typed/apps/v1beta2",
"kubernetes/typed/apps/v1beta2/fake",
"kubernetes/typed/authentication/v1",
"kubernetes/typed/authentication/v1/fake",
"kubernetes/typed/authentication/v1beta1",
"kubernetes/typed/authentication/v1beta1/fake",
"kubernetes/typed/authorization/v1",
"kubernetes/typed/authorization/v1/fake",
"kubernetes/typed/authorization/v1beta1",
"kubernetes/typed/authorization/v1beta1/fake",
"kubernetes/typed/autoscaling/v1",
"kubernetes/typed/autoscaling/v1/fake",
"kubernetes/typed/autoscaling/v2beta1",
"kubernetes/typed/autoscaling/v2beta1/fake",
"kubernetes/typed/batch/v1",
"kubernetes/typed/batch/v1/fake",
"kubernetes/typed/batch/v1beta1",
"kubernetes/typed/batch/v1beta1/fake",
"kubernetes/typed/batch/v2alpha1",
"kubernetes/typed/batch/v2alpha1/fake",
"kubernetes/typed/certificates/v1beta1",
"kubernetes/typed/certificates/v1beta1/fake",
"kubernetes/typed/core/v1",
"kubernetes/typed/core/v1/fake",
"kubernetes/typed/events/v1beta1",
"kubernetes/typed/events/v1beta1/fake",
"kubernetes/typed/extensions/v1beta1",
"kubernetes/typed/extensions/v1beta1/fake",
"kubernetes/typed/networking/v1",
"kubernetes/typed/networking/v1/fake",
"kubernetes/typed/policy/v1beta1",
"kubernetes/typed/policy/v1beta1/fake",
"kubernetes/typed/rbac/v1",
"kubernetes/typed/rbac/v1/fake",
"kubernetes/typed/rbac/v1alpha1",
"kubernetes/typed/rbac/v1alpha1/fake",
"kubernetes/typed/rbac/v1beta1",
"kubernetes/typed/rbac/v1beta1/fake",
"kubernetes/typed/scheduling/v1alpha1",
"kubernetes/typed/scheduling/v1alpha1/fake",
"kubernetes/typed/scheduling/v1beta1",
"kubernetes/typed/scheduling/v1beta1/fake",
"kubernetes/typed/settings/v1alpha1",
"kubernetes/typed/settings/v1alpha1/fake",
"kubernetes/typed/storage/v1",
"kubernetes/typed/storage/v1/fake",
"kubernetes/typed/storage/v1alpha1",
"kubernetes/typed/storage/v1alpha1/fake",
"kubernetes/typed/storage/v1beta1",
"kubernetes/typed/storage/v1beta1/fake",
"pkg/apis/clientauthentication",
"pkg/apis/clientauthentication/v1alpha1",
"pkg/apis/clientauthentication/v1beta1",
"pkg/version",
"plugin/pkg/client/auth/exec",
"plugin/pkg/client/auth/gcp",
"rest",
"rest/watch",
"testing",
"third_party/forked/golang/template",
"tools/auth",
"tools/cache",
"tools/clientcmd",
"tools/clientcmd/api",
"tools/clientcmd/api/latest",
"tools/clientcmd/api/v1",
"tools/metrics",
"tools/pager",
"tools/record",
"tools/reference",
"transport",
"util/buffer",
"util/cert",
"util/connrotation",
"util/flowcontrol",
"util/homedir",
"util/integer",
"util/jsonpath",
"util/retry",
"util/workqueue",
]
pruneopts = "NUT"
revision = "7d04d0e2a0a1a4d4a1cd6baa432a2301492e4e65"
version = "kubernetes-1.11.0"
[[projects]]
digest = "1:8ab487a323486c8bbbaa3b689850487fdccc6cbea8690620e083b2d230a4447e"
name = "k8s.io/code-generator"
packages = [
"cmd/client-gen",
"cmd/client-gen/args",
"cmd/client-gen/generators",
"cmd/client-gen/generators/fake",
"cmd/client-gen/generators/scheme",
"cmd/client-gen/generators/util",
"cmd/client-gen/path",
"cmd/client-gen/types",
"cmd/deepcopy-gen",
"cmd/deepcopy-gen/args",
"cmd/defaulter-gen",
"cmd/defaulter-gen/args",
"cmd/informer-gen",
"cmd/informer-gen/args",
"cmd/informer-gen/generators",
"cmd/lister-gen",
"cmd/lister-gen/args",
"cmd/lister-gen/generators",
"pkg/util",
]
pruneopts = "T"
revision = "6702109cc68eb6fe6350b83e14407c8d7309fd1a"
version = "kubernetes-1.11.0"
[[projects]]
branch = "master"
digest = "1:5249c83f0fb9e277b2d28c19eca814feac7ef05dc762e4deaf0a2e4b1a7c5df3"
name = "k8s.io/gengo"
packages = [
"args",
"examples/deepcopy-gen/generators",
"examples/defaulter-gen/generators",
"examples/set-gen/sets",
"generator",
"namer",
"parser",
"types",
]
pruneopts = "NUT"
revision = "4242d8e6c5dba56827bb7bcf14ad11cda38f3991"
[[projects]]
branch = "master"
digest = "1:a2c842a1e0aed96fd732b535514556323a6f5edfded3b63e5e0ab1bce188aa54"
name = "k8s.io/kube-openapi"
packages = ["pkg/util/proto"]
pruneopts = "NUT"
revision = "e3762e86a74c878ffed47484592986685639c2cd"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/google/go-cmp/cmp",
"github.com/google/go-cmp/cmp/cmpopts",
"github.com/istio/glog",
"github.com/knative/pkg/apis/istio/v1alpha3",
"github.com/knative/pkg/client/clientset/versioned",
"github.com/knative/pkg/client/clientset/versioned/fake",
"github.com/knative/pkg/signals",
"github.com/prometheus/client_golang/prometheus/promhttp",
"go.uber.org/zap",
"go.uber.org/zap/zapcore",
"k8s.io/api/apps/v1",
"k8s.io/api/autoscaling/v1",
"k8s.io/api/autoscaling/v2beta1",
"k8s.io/api/core/v1",
"k8s.io/apimachinery/pkg/api/errors",
"k8s.io/apimachinery/pkg/api/resource",
"k8s.io/apimachinery/pkg/apis/meta/v1",
"k8s.io/apimachinery/pkg/labels",
"k8s.io/apimachinery/pkg/runtime",
"k8s.io/apimachinery/pkg/runtime/schema",
"k8s.io/apimachinery/pkg/runtime/serializer",
"k8s.io/apimachinery/pkg/types",
"k8s.io/apimachinery/pkg/util/intstr",
"k8s.io/apimachinery/pkg/util/runtime",
"k8s.io/apimachinery/pkg/util/sets/types",
"k8s.io/apimachinery/pkg/util/wait",
"k8s.io/apimachinery/pkg/watch",
"k8s.io/client-go/discovery",
"k8s.io/client-go/discovery/fake",
"k8s.io/client-go/kubernetes",
"k8s.io/client-go/kubernetes/fake",
"k8s.io/client-go/kubernetes/scheme",
"k8s.io/client-go/kubernetes/typed/core/v1",
"k8s.io/client-go/plugin/pkg/client/auth/gcp",
"k8s.io/client-go/rest",
"k8s.io/client-go/testing",
"k8s.io/client-go/tools/cache",
"k8s.io/client-go/tools/clientcmd",
"k8s.io/client-go/tools/record",
"k8s.io/client-go/util/flowcontrol",
"k8s.io/client-go/util/workqueue",
"k8s.io/code-generator/cmd/client-gen",
"k8s.io/code-generator/cmd/deepcopy-gen",
"k8s.io/code-generator/cmd/defaulter-gen",
"k8s.io/code-generator/cmd/informer-gen",
"k8s.io/code-generator/cmd/lister-gen",
]
solver-name = "gps-cdcl"
solver-version = 1


@@ -1,64 +0,0 @@
required = [
"k8s.io/apimachinery/pkg/util/sets/types",
"k8s.io/code-generator/cmd/deepcopy-gen",
"k8s.io/code-generator/cmd/defaulter-gen",
"k8s.io/code-generator/cmd/client-gen",
"k8s.io/code-generator/cmd/lister-gen",
"k8s.io/code-generator/cmd/informer-gen",
]
[[constraint]]
name = "go.uber.org/zap"
version = "v1.9.1"
[[override]]
name = "gopkg.in/yaml.v2"
version = "v2.2.1"
[[override]]
name = "k8s.io/api"
version = "kubernetes-1.11.0"
[[override]]
name = "k8s.io/apimachinery"
version = "kubernetes-1.11.0"
[[override]]
name = "k8s.io/code-generator"
version = "kubernetes-1.11.0"
[[override]]
name = "k8s.io/client-go"
version = "kubernetes-1.11.0"
[[override]]
name = "github.com/json-iterator/go"
# This is the commit at which k8s depends on this in 1.11
# It seems to be broken at HEAD.
revision = "f2b4162afba35581b6d4a50d3b8f34e33c144682"
[[constraint]]
name = "github.com/prometheus/client_golang"
version = "v0.8.0"
[[constraint]]
name = "github.com/google/go-cmp"
version = "v0.2.0"
[[constraint]]
name = "github.com/knative/pkg"
revision = "c15d7c8f2220a7578b33504df6edefa948c845ae"
[[override]]
name = "github.com/golang/glog"
source = "github.com/istio/glog"
[prune]
go-tests = true
unused-packages = true
non-go = true
[[prune.project]]
name = "k8s.io/code-generator"
unused-packages = false
non-go = false

MAINTAINERS (new file)

@@ -0,0 +1,5 @@
The maintainers are generally available in Slack at
https://weave-community.slack.com/messages/flagger/ (obtain an invitation
at https://slack.weave.works/).
Stefan Prodan, Weaveworks <stefan@weave.works> (Slack: @stefan Twitter: @stefanprodan)


@@ -2,19 +2,45 @@ TAG?=latest
VERSION?=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"')
VERSION_MINOR:=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"' | rev | cut -d'.' -f2- | rev)
PATCH:=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"' | awk -F. '{print $$NF}')
SOURCE_DIRS = cmd pkg/apis pkg/controller pkg/server pkg/logging pkg/version
SOURCE_DIRS = cmd pkg/apis pkg/controller pkg/server pkg/canary pkg/metrics pkg/router pkg/notifier
LT_VERSION?=$(shell grep 'VERSION' cmd/loadtester/main.go | awk '{ print $$4 }' | tr -d '"' | head -n1)
TS=$(shell date +%Y-%m-%d_%H-%M-%S)
run:
go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info \
-metrics-server=https://prometheus.iowa.weavedx.com \
-slack-url=https://hooks.slack.com/services/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 \
-slack-channel="devops-alerts"
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=istio -namespace=test-istio \
-metrics-server=https://prometheus.istio.flagger.dev
run-appmesh:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=appmesh \
-metrics-server=http://acfc235624ca911e9a94c02c4171f346-1585187926.us-west-2.elb.amazonaws.com:9090
run-nginx:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=nginx -namespace=nginx \
-metrics-server=http://prometheus-weave.istio.weavedx.com
run-smi:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=smi:istio -namespace=smi \
-metrics-server=https://prometheus.istio.weavedx.com
run-gloo:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=gloo -namespace=gloo \
-metrics-server=https://prometheus.istio.weavedx.com
run-nop:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=none -namespace=bg \
-metrics-server=https://prometheus.istio.weavedx.com
run-linkerd:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=linkerd -namespace=dev \
-metrics-server=https://prometheus.linkerd.flagger.dev
build:
docker build -t stefanprodan/flagger:$(TAG) . -f Dockerfile
GIT_COMMIT=$$(git rev-list -1 HEAD) && GO111MODULE=on CGO_ENABLED=0 GOOS=linux go build -ldflags "-s -w -X github.com/weaveworks/flagger/pkg/version.REVISION=$${GIT_COMMIT}" -a -installsuffix cgo -o ./bin/flagger ./cmd/flagger/*
docker build -t weaveworks/flagger:$(TAG) . -f Dockerfile
push:
docker tag stefanprodan/flagger:$(TAG) quay.io/stefanprodan/flagger:$(VERSION)
docker push quay.io/stefanprodan/flagger:$(VERSION)
docker tag weaveworks/flagger:$(TAG) weaveworks/flagger:$(VERSION)
docker push weaveworks/flagger:$(VERSION)
fmt:
gofmt -l -s -w $(SOURCE_DIRS)
@@ -29,9 +55,10 @@ test: test-fmt test-codegen
go test ./...
helm-package:
cd charts/ && helm package flagger/ && helm package grafana/
mv charts/*.tgz docs/
helm repo index docs --url https://stefanprodan.github.io/flagger --merge ./docs/index.yaml
cd charts/ && helm package ./*
mv charts/*.tgz bin/
curl -s https://raw.githubusercontent.com/weaveworks/flagger/gh-pages/index.yaml > ./bin/index.yaml
helm repo index bin --url https://flagger.app --merge ./bin/index.yaml
helm-up:
helm upgrade --install flagger ./charts/flagger --namespace=istio-system --set crd.create=false
@@ -44,7 +71,9 @@ version-set:
sed -i '' "s/flagger:$$current/flagger:$$next/g" artifacts/flagger/deployment.yaml && \
sed -i '' "s/tag: $$current/tag: $$next/g" charts/flagger/values.yaml && \
sed -i '' "s/appVersion: $$current/appVersion: $$next/g" charts/flagger/Chart.yaml && \
echo "Version $$next set in code, deployment and charts"
sed -i '' "s/version: $$current/version: $$next/g" charts/flagger/Chart.yaml && \
sed -i '' "s/newTag: $$current/newTag: $$next/g" kustomize/base/flagger/kustomization.yaml && \
echo "Version $$next set in code, deployment, chart and kustomize"
version-up:
@next="$(VERSION_MINOR).$$(($(PATCH) + 1))" && \
@@ -77,3 +106,15 @@ reset-test:
kubectl delete -f ./artifacts/namespaces
kubectl apply -f ./artifacts/namespaces
kubectl apply -f ./artifacts/canaries
loadtester-run: loadtester-build
docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
docker rm -f tester || true
docker run -dp 8888:9090 --name tester weaveworks/flagger-loadtester:$(LT_VERSION)
loadtester-build:
GO111MODULE=on CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o ./bin/loadtester ./cmd/loadtester/*
loadtester-push:
docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
docker push weaveworks/flagger-loadtester:$(LT_VERSION)

README.md

@@ -1,86 +1,77 @@
# flagger
[![build](https://travis-ci.org/stefanprodan/flagger.svg?branch=master)](https://travis-ci.org/stefanprodan/flagger)
[![report](https://goreportcard.com/badge/github.com/stefanprodan/flagger)](https://goreportcard.com/report/github.com/stefanprodan/flagger)
[![codecov](https://codecov.io/gh/stefanprodan/flagger/branch/master/graph/badge.svg)](https://codecov.io/gh/stefanprodan/flagger)
[![license](https://img.shields.io/github/license/stefanprodan/flagger.svg)](https://github.com/stefanprodan/flagger/blob/master/LICENSE)
[![release](https://img.shields.io/github/release/stefanprodan/flagger/all.svg)](https://github.com/stefanprodan/flagger/releases)
[![build](https://img.shields.io/circleci/build/github/weaveworks/flagger/master.svg)](https://circleci.com/gh/weaveworks/flagger)
[![report](https://goreportcard.com/badge/github.com/weaveworks/flagger)](https://goreportcard.com/report/github.com/weaveworks/flagger)
[![codecov](https://codecov.io/gh/weaveworks/flagger/branch/master/graph/badge.svg)](https://codecov.io/gh/weaveworks/flagger)
[![license](https://img.shields.io/github/license/weaveworks/flagger.svg)](https://github.com/weaveworks/flagger/blob/master/LICENSE)
[![release](https://img.shields.io/github/release/weaveworks/flagger/all.svg)](https://github.com/weaveworks/flagger/releases)
Flagger is a Kubernetes operator that automates the promotion of canary deployments
using Istio routing for traffic shifting and Prometheus metrics for canary analysis.
The canary analysis can be extended with webhooks for running integration tests, load tests or any other custom
validation.
using Istio, Linkerd, App Mesh, NGINX, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
The canary analysis can be extended with webhooks for running acceptance tests,
load tests or any other custom validation.
### Install
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance
indicators like HTTP request success rate, average request duration and pod health.
Based on the KPI analysis a canary is promoted or aborted, and the result is published to Slack or MS Teams.
Before installing Flagger make sure you have Istio set up with Prometheus enabled.
If you are new to Istio you can follow my [Istio service mesh walk-through](https://github.com/stefanprodan/istio-gke).
![flagger-overview](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-overview.png)
Deploy Flagger in the `istio-system` namespace using Helm:
## Documentation
```bash
# add the Helm repository
helm repo add flagger https://flagger.app
Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app)
# install or upgrade
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set controlLoopInterval=1m
```
* Install
* [Flagger install on Kubernetes](https://docs.flagger.app/install/flagger-install-on-kubernetes)
* [Flagger install on GKE Istio](https://docs.flagger.app/install/flagger-install-on-google-cloud)
* [Flagger install on EKS App Mesh](https://docs.flagger.app/install/flagger-install-on-eks-appmesh)
* [Flagger install with SuperGloo](https://docs.flagger.app/install/flagger-install-with-supergloo)
* How it works
* [Canary custom resource](https://docs.flagger.app/how-it-works#canary-custom-resource)
* [Routing](https://docs.flagger.app/how-it-works#istio-routing)
* [Canary deployment stages](https://docs.flagger.app/how-it-works#canary-deployment)
* [Canary analysis](https://docs.flagger.app/how-it-works#canary-analysis)
* [HTTP metrics](https://docs.flagger.app/how-it-works#http-metrics)
* [Custom metrics](https://docs.flagger.app/how-it-works#custom-metrics)
* [Webhooks](https://docs.flagger.app/how-it-works#webhooks)
* [Load testing](https://docs.flagger.app/how-it-works#load-testing)
* [Manual gating](https://docs.flagger.app/how-it-works#manual-gating)
* [FAQ](https://docs.flagger.app/faq)
* Usage
* [Istio canary deployments](https://docs.flagger.app/usage/progressive-delivery)
* [Linkerd canary deployments](https://docs.flagger.app/usage/linkerd-progressive-delivery)
* [App Mesh canary deployments](https://docs.flagger.app/usage/appmesh-progressive-delivery)
* [NGINX ingress controller canary deployments](https://docs.flagger.app/usage/nginx-progressive-delivery)
* [Gloo ingress controller canary deployments](https://docs.flagger.app/usage/gloo-progressive-delivery)
* [Contour Canary Deployments](https://docs.flagger.app/usage/contour-progressive-delivery)
* [Crossover canary deployments](https://docs.flagger.app/usage/crossover-progressive-delivery)
* [Blue/Green deployments](https://docs.flagger.app/usage/blue-green)
* [Monitoring](https://docs.flagger.app/usage/monitoring)
* [Alerting](https://docs.flagger.app/usage/alerting)
* Tutorials
* [Canary deployments with Helm charts and Weave Flux](https://docs.flagger.app/tutorials/canary-helm-gitops)
Flagger is compatible with Kubernetes >1.10.0 and Istio >1.0.0.
## Canary CRD
### Usage
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services and Istio or App Mesh virtual services).
These objects expose the application on the mesh and drive the canary analysis and promotion.
Flagger takes a Kubernetes deployment and creates a series of objects
(Kubernetes [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/),
ClusterIP [services](https://kubernetes.io/docs/concepts/services-networking/service/) and
Istio [virtual services](https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService))
to drive the canary analysis and promotion.
![flagger-overview](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/diagrams/flagger-canary-overview.png)
Gated canary promotion stages (a rough sketch in Go follows the list):
* scan for canary deployments
* check Istio virtual service routes are mapped to primary and canary ClusterIP services
* check primary and canary deployments status
* halt advancement if a rolling update is underway
* halt advancement if pods are unhealthy
* increase canary traffic weight percentage from 0% to 5% (step weight)
* check canary HTTP request success rate and latency
* halt advancement if any metric is under the specified threshold
* increment the failed checks counter
* check if the number of failed checks reached the threshold
* route all traffic to primary
* scale to zero the canary deployment and mark it as failed
* wait for the canary deployment to be updated (revision bump) and start over
* increase canary traffic weight by 5% (step weight) till it reaches 50% (max weight)
* halt advancement while canary request success rate is under the threshold
* halt advancement while canary request duration P99 is over the threshold
* halt advancement if the primary or canary deployment becomes unhealthy
* halt advancement while canary deployment is being scaled up/down by HPA
* promote canary to primary
* copy canary deployment spec template over primary
* wait for primary rolling update to finish
* halt advancement if pods are unhealthy
* route all traffic to primary
* scale to zero the canary deployment
* mark rollout as finished
* wait for the canary deployment to be updated (revision bump) and start over
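The stages above amount to a small state machine evaluated on every scheduling tick. The Go sketch below is a condensed illustration only: the `Canary` struct, its fields and the helper stubs are hypothetical stand-ins, while the real control loop lives under `pkg/controller` and handles far more edge cases.
```go
package main

import "fmt"

// Canary is a hypothetical, simplified model of a rollout.
type Canary struct {
	Weight, StepWeight, MaxWeight int
	FailedChecks, Threshold       int
	Phase                         string
}

// Stand-ins for the real deployment health and Prometheus metric checks.
func deploymentsReady(c *Canary) bool          { return true }
func metricsWithinThresholds(c *Canary) bool   { return true }

// advance runs one tick of the gated promotion described above.
func advance(c *Canary) {
	if !deploymentsReady(c) {
		return // halt: rolling update underway or pods unhealthy
	}
	if !metricsWithinThresholds(c) {
		c.FailedChecks++ // success rate or latency check failed
		if c.FailedChecks >= c.Threshold {
			c.Weight = 0       // route all traffic back to primary
			c.Phase = "failed" // scale canary to zero, wait for a new revision
		}
		return
	}
	if c.Weight < c.MaxWeight {
		c.Weight += c.StepWeight // e.g. 0% -> 5% -> ... -> 50%
		return
	}
	// max weight reached: copy the canary spec over the primary,
	// route all traffic back to it and mark the rollout as finished
	c.Weight = 0
	c.Phase = "finished"
}

func main() {
	c := &Canary{StepWeight: 5, MaxWeight: 50, Threshold: 10}
	for c.Phase == "" {
		advance(c)
		fmt.Println("canary weight:", c.Weight)
	}
	fmt.Println("rollout:", c.Phase)
}
```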
You can change the canary analysis _max weight_ and the _step weight_ percentage in Flagger's custom resource.
Flagger keeps track of ConfigMaps and Secrets referenced by a Kubernetes Deployment and triggers a canary analysis if any of those objects change.
When promoting a workload in production, both code (container images) and configuration (config maps and secrets) are synchronised.
For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:
```yaml
apiVersion: flagger.app/v1alpha2
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# service mesh provider (optional)
# can be: kubernetes, istio, linkerd, appmesh, nginx, contour, gloo, supergloo
provider: istio
# deployment reference
targetRef:
apiVersion: apps/v1
@@ -95,15 +86,29 @@ spec:
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
# service name (optional)
name: podinfo
# ClusterIP port number
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
# Istio virtual service host names (optional)
hosts:
- app.iowa.weavedx.com
# container port name or number (optional)
targetPort: 9898
# port name can be http or grpc (default http)
portName: http
# HTTP match conditions (optional)
match:
- uri:
prefix: /
# HTTP rewrite (optional)
rewrite:
uri: /
# request timeout (optional)
timeout: 5s
# promote the canary without analysing it (default false)
skipAnalysis: false
# define the canary analysis timing and KPIs
canaryAnalysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
@@ -114,319 +119,78 @@ spec:
stepWeight: 5
# Istio Prometheus checks
metrics:
- name: istio_requests_total
# builtin checks
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: istio_request_duration_seconds_bucket
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
# custom check
- name: "kafka lag"
threshold: 100
query: |
avg_over_time(
kafka_consumergroup_lag{
consumergroup=~"podinfo-consumer-.*",
topic="podinfo"
}[1m]
)
# testing (optional)
webhooks:
- name: integration-tests
url: http://podinfo.test:9898/echo
timeout: 1m
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
test: "all"
token: "16688eb5e9f289f1991c"
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
```
The canary analysis uses the following PromQL queries:
_HTTP requests success rate percentage_
For more details on how the canary analysis and promotion work please [read the docs](https://docs.flagger.app/how-it-works).
```sql
sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace=~"$namespace",
destination_workload=~"$workload",
response_code!~"5.*"
}[$interval]
)
)
/
sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace=~"$namespace",
destination_workload=~"$workload"
}[$interval]
)
)
```
## Features
_HTTP request duration P99 in milliseconds_
| Feature | Istio | Linkerd | App Mesh | NGINX | Gloo | Contour | CNI |
| -------------------------------------------- | ------------------ | ------------------ |------------------ |------------------ |------------------ |------------------ |------------------ |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Custom promql checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Traffic policy, CORS, retries and timeouts | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: |
```sql
histogram_quantile(0.99,
sum(
irate(
istio_request_duration_seconds_bucket{
reporter="destination",
destination_workload=~"$workload",
destination_workload_namespace=~"$namespace"
}[$interval]
)
) by (le)
)
```
## Roadmap
The canary analysis can be extended with webhooks.
Flagger will call each webhook (HTTP POST) and determine from the response status code (HTTP 2xx) whether the canary passes or fails the check.
Webhook payload:
```json
{
"name": "podinfo",
"namespace": "test",
"metadata": {
"test": "all",
"token": "16688eb5e9f289f1991c"
}
}
```
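As a rough illustration, a custom webhook receiver only needs to parse this payload and answer with a 2xx status to let the analysis continue. The Go sketch below is hypothetical — the `/gate` path and the rejection rule are invented — and only the payload fields mirror the JSON above.
```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// payload mirrors the webhook body shown above.
type payload struct {
	Name      string            `json:"name"`
	Namespace string            `json:"namespace"`
	Metadata  map[string]string `json:"metadata"`
}

func main() {
	http.HandleFunc("/gate", func(w http.ResponseWriter, r *http.Request) {
		var p payload
		if err := json.NewDecoder(r.Body).Decode(&p); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// run custom validation here; any non-2xx response fails the check
		if p.Metadata["test"] != "all" {
			http.Error(w, "canary "+p.Name+" rejected", http.StatusForbidden)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```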
### Automated canary analysis, promotions and rollbacks
Create a test namespace with Istio sidecar injection enabled:
```bash
export REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml
```
Create a deployment and a horizontal pod autoscaler:
```bash
kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml
```
Create a canary promotion custom resource (replace the Istio gateway and the internet domain with your own):
```bash
kubectl apply -f ${REPO}/artifacts/canaries/canary.yaml
```
After a couple of seconds Flagger will create the canary objects:
```bash
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo
```
![flagger-canary-steps](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/diagrams/flagger-canary-steps.png)
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.2.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
```
kubectl -n test describe canary/podinfo
Status:
Canary Revision: 19871136
Failed Checks: 0
State: finished
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 3m flagger New revision detected podinfo.test
Normal Synced 3m flagger Scaling up podinfo.test
Warning Synced 3m flagger Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Normal Synced 3m flagger Advance podinfo.test canary weight 5
Normal Synced 3m flagger Advance podinfo.test canary weight 10
Normal Synced 3m flagger Advance podinfo.test canary weight 15
Normal Synced 2m flagger Advance podinfo.test canary weight 20
Normal Synced 2m flagger Advance podinfo.test canary weight 25
Normal Synced 1m flagger Advance podinfo.test canary weight 30
Normal Synced 1m flagger Advance podinfo.test canary weight 35
Normal Synced 55s flagger Advance podinfo.test canary weight 40
Normal Synced 45s flagger Advance podinfo.test canary weight 45
Normal Synced 35s flagger Advance podinfo.test canary weight 50
Normal Synced 25s flagger Copying podinfo.test template spec to podinfo-primary.test
Warning Synced 15s flagger Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Normal Synced 5s flagger Promotion completed! Scaling down podinfo.test
```
During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses the rollout.
Create a tester pod and exec into it:
```bash
kubectl -n test run tester --image=quay.io/stefanprodan/podinfo:1.2.1 -- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh
```
Generate HTTP 500 errors:
```bash
watch curl http://podinfo-canary:9898/status/500
```
Generate latency:
```bash
watch curl http://podinfo-canary:9898/delay/1
```
When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary,
the canary is scaled to zero and the rollout is marked as failed.
```
kubectl -n test describe canary/podinfo
Status:
Canary Revision: 16695041
Failed Checks: 10
State: failed
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 3m flagger Starting canary deployment for podinfo.test
Normal Synced 3m flagger Advance podinfo.test canary weight 5
Normal Synced 3m flagger Advance podinfo.test canary weight 10
Normal Synced 3m flagger Advance podinfo.test canary weight 15
Normal Synced 3m flagger Halt podinfo.test advancement success rate 69.17% < 99%
Normal Synced 2m flagger Halt podinfo.test advancement success rate 61.39% < 99%
Normal Synced 2m flagger Halt podinfo.test advancement success rate 55.06% < 99%
Normal Synced 2m flagger Halt podinfo.test advancement success rate 47.00% < 99%
Normal Synced 2m flagger (combined from similar events): Halt podinfo.test advancement success rate 38.08% < 99%
Warning Synced 1m flagger Rolling back podinfo.test failed checks threshold reached 10
Warning Synced 1m flagger Canary failed! Scaling down podinfo.test
```
### Monitoring
Flagger comes with a Grafana dashboard made for canary analysis.
Install Grafana with Helm:
```bash
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090
```
The dashboard shows the RED and USE metrics for the primary and canary workloads:
![flagger-grafana](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/screens/grafana-canary-analysis.png)
The canary errors and latency spikes have been recorded as Kubernetes events and logged by Flagger in JSON format:
```
kubectl -n istio-system logs deployment/flagger --tail=100 | jq .msg
Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Halt podinfo.test advancement success rate 98.69% < 99%
Advance podinfo.test canary weight 40
Halt podinfo.test advancement request duration 1.515s > 500ms
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Halt podinfo-primary.test advancement waiting for rollout to finish: 1 old replicas are pending termination
Scaling down podinfo.test
Promotion completed! podinfo.test
```
Flagger exposes Prometheus metrics that can be used to determine the canary analysis status and the destination weight values:
```bash
# Canaries total gauge
flagger_canary_total{namespace="test"} 1
# Canary promotion last known status gauge
# 0 - running, 1 - successful, 2 - failed
flagger_canary_status{name="podinfo",namespace="test"} 1
# Canary traffic weight gauge
flagger_canary_weight{workload="podinfo-primary",namespace="test"} 95
flagger_canary_weight{workload="podinfo",namespace="test"} 5
# Seconds spent performing canary analysis histogram
flagger_canary_duration_seconds_bucket{name="podinfo",namespace="test",le="10"} 6
flagger_canary_duration_seconds_bucket{name="podinfo",namespace="test",le="+Inf"} 6
flagger_canary_duration_seconds_sum{name="podinfo",namespace="test"} 17.3561329
flagger_canary_duration_seconds_count{name="podinfo",namespace="test"} 6
```
### Alerting
Flagger can be configured to send Slack notifications:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general \
--set slack.user=flagger
```
Once configured with a Slack incoming webhook, Flagger will post messages when a canary deployment has been initialized,
when a new revision has been detected and if the canary analysis failed or succeeded.
![flagger-slack](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/screens/slack-canary-notifications.png)
A canary deployment will be rolled back if the progress deadline is exceeded or if the analysis
reaches the maximum number of failed checks:
![flagger-slack-errors](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/screens/slack-canary-failed.png)
Besides Slack, you can use Alertmanager to trigger alerts when a canary deployment fails:
```yaml
- alert: canary_rollback
expr: flagger_canary_status > 1
for: 1m
labels:
severity: warning
annotations:
summary: "Canary failed"
description: "Workload {{ $labels.name }} namespace {{ $labels.namespace }}"
```
### Roadmap
* Extend the validation mechanism to support other metrics than HTTP success rate and latency
* Integrate with other service meshes like Consul Connect and ingress controllers like HAProxy, ALB
* Add support for comparing the canary metrics to the primary ones and do the validation based on the deviation between the two
* Extend the canary analysis and promotion to other types than Kubernetes deployments such as Flux Helm releases or OpenFaaS functions
### Contributing
## Contributing
Flagger is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
When submitting bug reports please include as many details as possible:
* which Flagger version
* which Flagger CRD version
* which Kubernetes/Istio version
* what configuration (canary, virtual service and workloads definitions)
* what happened (Flagger, Istio Pilot and Proxy logs)
* which Kubernetes version
* what configuration (canary, ingress and workloads definitions)
* what happened (Flagger and Proxy logs)
## Getting Help
If you have any questions about Flagger and progressive delivery:
* Read the Flagger [docs](https://docs.flagger.app).
* Invite yourself to the [Weave community slack](https://slack.weave.works/)
and join the [#flagger](https://weave-community.slack.com/messages/flagger/) channel.
* Join the [Weave User Group](https://www.meetup.com/pro/Weave/) and get invited to online talks,
hands-on training and meetups in your area.
* File an [issue](https://github.com/weaveworks/flagger/issues/new).
Your feedback is always welcome!


@@ -0,0 +1,70 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# container port name (optional)
# can be http or grpc
portName: http
# App Mesh reference
meshName: global
# App Mesh retry policy (optional)
retries:
attempts: 3
perTryTimeout: 1s
retryOn: "gateway-error,client-error,stream-error"
# define the canary analysis timing and KPIs
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# App Mesh Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# testing (optional)
webhooks:
- name: acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 30s
metadata:
type: bash
cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"


@@ -1,29 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-primary
name: podinfo
namespace: test
labels:
app: podinfo-primary
app: podinfo
spec:
replicas: 1
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo-primary
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo-primary
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.1.1
image: stefanprodan/podinfo:3.1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
@@ -33,8 +35,6 @@ spec:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: blue
@@ -46,10 +46,7 @@ spec:
- http
- localhost:9898/healthz
initialDelaySeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
timeoutSeconds: 5
readinessProbe:
exec:
command:
@@ -58,14 +55,11 @@ spec:
- http
- localhost:9898/readyz
initialDelaySeconds: 5
failureThreshold: 3
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 10m
cpu: 100m
memory: 64Mi


@@ -0,0 +1,6 @@
apiVersion: appmesh.k8s.aws/v1beta1
kind: Mesh
metadata:
name: global
spec:
serviceDiscoveryType: dns


@@ -1,13 +1,13 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo-primary
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo-primary
name: podinfo
minReplicas: 2
maxReplicas: 4
metrics:


@@ -0,0 +1,172 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-config
namespace: test
labels:
app: ingress
data:
envoy.yaml: |
static_resources:
listeners:
- address:
socket_address:
address: 0.0.0.0
port_value: 8080
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
access_log:
- name: envoy.file_access_log
config:
path: /dev/stdout
codec_type: auto
stat_prefix: ingress_http
http_filters:
- name: envoy.router
config: {}
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match:
prefix: "/"
route:
cluster: podinfo
host_rewrite: podinfo.test
timeout: 15s
retry_policy:
retry_on: "gateway-error,connect-failure,refused-stream"
num_retries: 10
per_try_timeout: 5s
clusters:
- name: podinfo
connect_timeout: 0.30s
type: strict_dns
lb_policy: round_robin
load_assignment:
cluster_name: podinfo
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: podinfo.test
port_value: 9898
admin:
access_log_path: /dev/null
address:
socket_address:
address: 0.0.0.0
port_value: 9999
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress
namespace: test
labels:
app: ingress
spec:
replicas: 1
selector:
matchLabels:
app: ingress
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
app: ingress
annotations:
prometheus.io/path: "/stats/prometheus"
prometheus.io/port: "9999"
prometheus.io/scrape: "true"
# dummy port to exclude ingress from mesh traffic
# only egress should go over the mesh
appmesh.k8s.aws/ports: "444"
spec:
terminationGracePeriodSeconds: 30
containers:
- name: ingress
image: "envoyproxy/envoy-alpine:v1.11.1"
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
command:
- /usr/local/bin/envoy
args:
- -l
- $loglevel
- -c
- /config/envoy.yaml
- --base-id
- "1234"
ports:
- name: admin
containerPort: 9999
protocol: TCP
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
initialDelaySeconds: 5
tcpSocket:
port: admin
readinessProbe:
initialDelaySeconds: 5
tcpSocket:
port: admin
resources:
requests:
cpu: 100m
memory: 64Mi
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: ingress-config
---
kind: Service
apiVersion: v1
metadata:
name: ingress
namespace: test
spec:
selector:
app: ingress
ports:
- protocol: TCP
name: http
port: 80
targetPort: http
type: LoadBalancer
---
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
name: ingress
namespace: test
spec:
meshName: global
listeners:
- portMapping:
port: 80
protocol: http
serviceDiscovery:
dns:
hostName: ingress.test
backends:
- virtualService:
virtualServiceName: podinfo.test


@@ -0,0 +1,67 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
# Istio virtual service host names (optional)
hosts:
- app.example.com
# Istio traffic policy (optional)
trafficPolicy:
tls:
# use ISTIO_MUTUAL when mTLS is enabled
mode: DISABLE
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# total number of iterations
iterations: 10
# canary match condition
match:
- headers:
cookie:
regex: "^(.*?;)?(type=insider)(;.*)?$"
- headers:
user-agent:
regex: "(?=.*Safari)(?!.*Chrome).*$"
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test:9898/"


@@ -1,9 +1,13 @@
apiVersion: flagger.app/v1alpha2
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# service mesh provider (default istio)
# can be: kubernetes, istio, appmesh, smi, nginx, gloo, supergloo
# use the kubernetes provider for Blue/Green style deployments
provider: istio
# deployment reference
targetRef:
apiVersion: apps/v1
@@ -20,13 +24,39 @@ spec:
service:
# container port
port: 9898
# port name can be http or grpc (default http)
portName: http
# add all the other container ports
# when generating ClusterIP services (default false)
portDiscovery: false
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
# remove the mesh gateway if the public host is
# shared across multiple virtual services
- mesh
# Istio virtual service host names (optional)
hosts:
- app.iowa.weavedx.com
- app.example.com
# Istio traffic policy (optional)
trafficPolicy:
tls:
# use ISTIO_MUTUAL when mTLS is enabled
mode: DISABLE
# HTTP match conditions (optional)
match:
- uri:
prefix: /
# HTTP rewrite (optional)
rewrite:
uri: /
# HTTP timeout (optional)
timeout: 30s
# promote the canary without analysing it (default false)
skipAnalysis: false
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
@@ -35,23 +65,24 @@ spec:
# canary increment step
# percentage (0-100)
stepWeight: 5
# Istio Prometheus checks
# Prometheus checks
metrics:
- name: istio_requests_total
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: istio_request_duration_seconds_bucket
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
webhooks:
- name: integration-tests
url: http://podinfo.test:9898/echo
timeout: 1m
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
test: "all"
token: "16688eb5e9f289f1991c"
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
logCmdOutput: "true"


@@ -20,12 +20,13 @@ spec:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9898"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.3.0
image: stefanprodan/podinfo:3.1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898


@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: test
labels:
istio-injection: enabled


@@ -0,0 +1,26 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: backend
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: regexp:^1.7.*
spec:
releaseName: backend
chart:
repository: https://flagger.app/
name: podinfo
version: 2.2.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.7.0
httpServer:
timeout: 30s
canary:
enabled: true
istioIngress:
enabled: false
loadtest:
enabled: true


@@ -0,0 +1,27 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: frontend
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: semver:~1.7
spec:
releaseName: frontend
chart:
repository: https://flagger.app/
name: podinfo
version: 2.2.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.7.0
backend: http://backend-podinfo:9898/echo
canary:
enabled: true
istioIngress:
enabled: true
gateway: public-gateway.istio-system.svc.cluster.local
host: frontend.istio.example.com
loadtest:
enabled: true


@@ -0,0 +1,18 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: loadtester
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: glob:0.*
spec:
releaseName: flagger-loadtester
chart:
repository: https://flagger.app/
name: loadtester
version: 0.6.0
values:
image:
repository: weaveworks/flagger-loadtester
tag: 0.6.1


@@ -0,0 +1,264 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus
labels:
app: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus
labels:
app: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: appmesh-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
data:
prometheus.yml: |-
global:
scrape_interval: 5s
scrape_configs:
# Scrape config for AppMesh Envoy sidecar
- job_name: 'appmesh-envoy'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_name]
action: keep
regex: '^envoy$'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: ${1}:9901
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
# Exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
# Scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# Scrape config for nodes
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# scrape config for cAdvisor
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# scrape config for pods
- job_name: kubernetes-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- source_labels: [ __address__ ]
regex: '.*9901.*'
action: drop
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
annotations:
version: "appmesh-v1alpha1"
spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.7.1"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=6h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
memory: 128Mi
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
volumes:
- name: config-volume
configMap:
name: prometheus
---
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: appmesh-system
labels:
name: prometheus
spec:
selector:
app: prometheus
ports:
- name: http
protocol: TCP
port: 9090


@@ -13,11 +13,83 @@ metadata:
labels:
app: flagger
rules:
- apiGroups: ['*']
resources: ['*']
verbs: ['*']
- nonResourceURLs: ['*']
verbs: ['*']
- apiGroups:
- ""
resources:
- events
- configmaps
- secrets
- services
verbs: ["*"]
- apiGroups:
- apps
resources:
- deployments
verbs: ["*"]
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs: ["*"]
- apiGroups:
- "extensions"
resources:
- ingresses
- ingresses/status
verbs: ["*"]
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
verbs: ["*"]
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/status
- destinationrules
- destinationrules/status
verbs: ["*"]
- apiGroups:
- appmesh.k8s.aws
resources:
- meshes
- meshes/status
- virtualnodes
- virtualnodes/status
- virtualservices
- virtualservices/status
verbs: ["*"]
- apiGroups:
- split.smi-spec.io
resources:
- trafficsplits
verbs: ["*"]
- apiGroups:
- gloo.solo.io
resources:
- settings
- upstreams
- upstreamgroups
- proxies
- virtualservices
verbs: ["*"]
- apiGroups:
- gateway.solo.io
resources:
- virtualservices
- gateways
verbs: ["*"]
- apiGroups:
- projectcontour.io
resources:
- httpproxies
verbs: ["*"]
- nonResourceURLs:
- /version
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding


@@ -2,13 +2,18 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1alpha2
version: v1alpha3
versions:
- name: v1alpha2
- name: v1alpha3
served: true
storage: true
- name: v1alpha2
served: true
storage: false
- name: v1alpha1
served: true
storage: false
@@ -16,21 +21,63 @@ spec:
plural: canaries
singular: canary
kind: Canary
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
JSONPath: .status.phase
- name: Weight
type: string
JSONPath: .status.canaryWeight
- name: FailedChecks
type: string
JSONPath: .status.failedChecks
priority: 1
- name: Interval
type: string
JSONPath: .spec.canaryAnalysis.interval
priority: 1
- name: Mirror
type: boolean
JSONPath: .spec.canaryAnalysis.mirror
priority: 1
- name: StepWeight
type: string
JSONPath: .spec.canaryAnalysis.stepWeight
priority: 1
- name: MaxWeight
type: string
JSONPath: .spec.canaryAnalysis.maxWeight
priority: 1
- name: LastTransitionTime
type: string
JSONPath: .status.lastTransitionTime
validation:
openAPIV3Schema:
properties:
spec:
required:
- targetRef
- service
- canaryAnalysis
- targetRef
- service
- canaryAnalysis
properties:
provider:
description: Traffic management provider
type: string
metricsServer:
description: Prometheus URL
type: string
progressDeadlineSeconds:
description: Deployment progress deadline
type: number
targetRef:
description: Deployment selector
type: object
required: ['apiVersion', 'kind', 'name']
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
@@ -39,8 +86,24 @@ spec:
name:
type: string
autoscalerRef:
type: object
required: ['apiVersion', 'kind', 'name']
description: HPA selector
anyOf:
- type: string
- type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
ingressRef:
description: NGINX ingress selector
anyOf:
- type: string
- type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
@@ -50,44 +113,212 @@ spec:
type: string
service:
type: object
required: ['port']
required: ["port"]
properties:
name:
description: Kubernetes service name
type: string
port:
description: Container port number
type: number
portName:
description: Container port name
type: string
targetPort:
description: Container target port name or number
anyOf:
- type: string
- type: number
portDiscovery:
description: Enable port discovery
type: boolean
meshName:
description: AppMesh mesh name
type: string
backends:
description: AppMesh backend array
anyOf:
- type: string
- type: array
timeout:
description: Istio HTTP or gRPC request timeout
type: string
trafficPolicy:
description: Istio traffic policy
anyOf:
- type: string
- type: object
match:
description: Istio URL match conditions
anyOf:
- type: string
- type: array
rewrite:
description: Istio URL rewrite
anyOf:
- type: string
- type: object
headers:
description: Istio headers operations
anyOf:
- type: string
- type: object
corsPolicy:
description: Istio CORS policy
anyOf:
- type: string
- type: object
gateways:
description: Istio gateways list
anyOf:
- type: string
- type: array
hosts:
description: Istio hosts list
anyOf:
- type: string
- type: array
skipAnalysis:
type: boolean
canaryAnalysis:
properties:
interval:
description: Canary schedule interval
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic percentage routed to canary
type: number
stepWeight:
description: Canary incremental traffic percentage step
type: number
mirror:
description: Mirror traffic to canary before shifting
type: boolean
match:
description: A/B testing match conditions
anyOf:
- type: string
- type: array
metrics:
description: Prometheus query list for this canary
type: array
properties:
items:
type: object
required: ['name', 'interval', 'threshold']
required: ["name", "threshold"]
properties:
name:
description: Name of the Prometheus metric
type: string
interval:
description: Interval of the promql query
type: string
pattern: "^[0-9]+(m)"
pattern: "^[0-9]+(m|s)"
threshold:
description: Max scalar value accepted for this metric
type: number
query:
description: Prometheus query
type: string
webhooks:
description: Webhook list for this canary
type: array
properties:
items:
type: object
required: ['name', 'url', 'timeout']
required: ["name", "url"]
properties:
name:
description: Name of the webhook
type: string
type:
description: "Type of the webhook: pre, post or during rollout"
type: string
enum:
- ""
- confirm-rollout
- pre-rollout
- rollout
- confirm-promotion
- post-rollout
- event
- rollback
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(s)"
pattern: "^[0-9]+(m|s)"
metadata:
description: Metadata (key-value pairs) for this webhook
anyOf:
- type: string
- type: object
status:
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Promoting
- Finalising
- Succeeded
- Failed
canaryWeight:
description: Traffic weight percentage routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
properties:
items:
type: object
required: ["type", "status", "reason"]
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string
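With the v1alpha3 schema above, only `name` and `url` are required per webhook, and the `type` enum now admits `event` and `rollback` entries alongside the rollout hooks. A sketch of a webhook list using the new types (receiver URLs are hypothetical placeholders):

```yaml
canaryAnalysis:
  webhooks:
  - name: audit-events
    type: event                      # Flagger publishes canary events here
    url: http://event-receiver.test/ # hypothetical endpoint
  - name: rollback-gate
    type: rollback                   # gates a rollback on an external check
    url: http://my-gate.test/check   # hypothetical endpoint
    timeout: 30s
```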


@@ -22,8 +22,8 @@ spec:
serviceAccountName: flagger
containers:
- name: flagger
image: quay.io/stefanprodan/flagger:0.2.0
imagePullPolicy: Always
image: weaveworks/flagger:0.23.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
@@ -31,6 +31,7 @@ spec:
- ./flagger
- -log-level=info
- -control-loop-interval=10s
- -mesh-provider=$(MESH_PROVIDER)
- -metrics-server=http://prometheus.istio-system.svc.cluster.local:9090
livenessProbe:
exec:
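The `$(MESH_PROVIDER)` reference relies on Kubernetes env-var substitution in container args, so the manifest must define a matching `env` entry elsewhere; a sketch of what that entry would look like (value illustrative):

```yaml
env:
- name: MESH_PROVIDER
  value: istio # e.g. istio, linkerd or appmesh
```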


@@ -0,0 +1,27 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
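A Canary can expose its service through this gateway by listing it under the service section's `gateways` field together with an external host, as in this sketch (the host name is a placeholder):

```yaml
service:
  port: 9898
  gateways:
  - public-gateway.istio-system.svc.cluster.local
  - mesh
  hosts:
  - app.example.com
```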


@@ -0,0 +1,834 @@
# Source: istio/charts/prometheus/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
data:
prometheus.yml: |-
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'istio-mesh'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;prometheus
# Scrape config for envoy stats
- job_name: 'envoy-stats'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-envoy-prom'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:15090
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
metric_relabel_configs:
# Exclude some of the envoy metrics that have massive cardinality
# This list may need to be pruned further moving forward, as informed
# by performance and scalability testing.
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
- job_name: 'istio-policy'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-policy;http-monitoring
- job_name: 'istio-telemetry'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;http-monitoring
- job_name: 'pilot'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-pilot;http-monitoring
- job_name: 'galley'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-galley;http-monitoring
# scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# scrape config for nodes (kubelet)
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# Scrape config for Kubelet cAdvisor.
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with 'container_') have been removed from the
# Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
# HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
# in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
# the --cadvisor-port=0 Kubelet flag).
#
# This job is not necessary and should be removed in Kubernetes 1.6 and
# earlier versions, or it will cause the metrics to be scraped twice.
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# scrape config for service endpoints.
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs: # If first two labels are present, pod should be scraped by the istio-secure job.
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status]
action: drop
regex: (.+)
- source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
action: drop
regex: (true)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- job_name: 'kubernetes-pods-istio-secure'
scheme: https
tls_config:
ca_file: /etc/istio-certs/root-cert.pem
cert_file: /etc/istio-certs/cert-chain.pem
key_file: /etc/istio-certs/key.pem
insecure_skip_verify: true # prometheus does not support secure naming.
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
# sidecar status annotation is added by sidecar injector and
# istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
- source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
action: keep
regex: (([^;]+);([^;]*))|(([^;]*);(true))
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__] # Only keep address that is host:port
action: keep # otherwise an extra target with ':443' is added for https scheme
regex: ([^:]+):(\d+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
---
# Source: istio/charts/prometheus/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus-istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
# Source: istio/charts/prometheus/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
---
# Source: istio/charts/prometheus/templates/clusterrolebindings.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus-istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-istio-system
subjects:
- kind: ServiceAccount
name: prometheus
namespace: istio-system
---
# Source: istio/charts/prometheus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: istio-system
annotations:
prometheus.io/scrape: 'true'
labels:
name: prometheus
spec:
selector:
app: prometheus
ports:
- name: http-prometheus
protocol: TCP
port: 9090
---
# Source: istio/charts/prometheus/templates/deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.3.1"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=6h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
- mountPath: /etc/istio-certs
name: istio-certs
volumes:
- name: config-volume
configMap:
name: prometheus
- name: istio-certs
secret:
defaultMode: 420
optional: true
secretName: istio.default
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestcount
namespace: istio-system
spec:
value: "1"
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code | 200
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestduration
namespace: istio-system
spec:
value: response.duration | "0ms"
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code | 200
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestsize
namespace: istio-system
spec:
value: request.size | 0
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code | 200
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: responsesize
namespace: istio-system
spec:
value: response.size | 0
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code | 200
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: tcpbytesent
namespace: istio-system
spec:
value: connection.sent.bytes | 0
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.name | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: tcpbytereceived
namespace: istio-system
spec:
value: connection.received.bytes | 0
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.name | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
name: handler
namespace: istio-system
spec:
metrics:
- name: requests_total
instance_name: requestcount.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
- name: request_duration_seconds
instance_name: requestduration.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
explicit_buckets:
bounds: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]
- name: request_bytes
instance_name: requestsize.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
exponentialBuckets:
numFiniteBuckets: 8
scale: 1
growthFactor: 10
- name: response_bytes
instance_name: responsesize.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
exponentialBuckets:
numFiniteBuckets: 8
scale: 1
growthFactor: 10
- name: tcp_sent_bytes_total
instance_name: tcpbytesent.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- connection_security_policy
- name: tcp_received_bytes_total
instance_name: tcpbytereceived.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- connection_security_policy
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: promhttp
namespace: istio-system
spec:
match: context.protocol == "http" || context.protocol == "grpc"
actions:
- handler: handler.prometheus
instances:
- requestcount.metric
- requestduration.metric
- requestsize.metric
- responsesize.metric
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: promtcp
namespace: istio-system
spec:
match: context.protocol == "tcp"
actions:
- handler: handler.prometheus
instances:
- tcpbytesent.metric
- tcpbytereceived.metric
---
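The `requests_total` counter declared above is what the built-in `request-success-rate` checks in the canary examples query (the Mixer Prometheus adapter conventionally exposes it as `istio_requests_total`). A custom error-rate metric of the same shape could be declared on a Canary like this sketch, assuming that series name:

```yaml
metrics:
- name: error-rate
  threshold: 1
  interval: 1m
  query: |
    sum(rate(istio_requests_total{destination_workload_namespace="test",response_code=~"5.*"}[1m]))
    /
    sum(rate(istio_requests_total{destination_workload_namespace="test"}[1m])) * 100
```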


@@ -0,0 +1,52 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: gloo
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
progressDeadlineSeconds: 60
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
port: 9898
canaryAnalysis:
interval: 10s
threshold: 10
maxWeight: 50
stepWeight: 5
metrics:
- name: request-success-rate
threshold: 99
interval: 1m
- name: request-duration
threshold: 500
interval: 30s
webhooks:
- name: acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 10s
metadata:
type: bash
cmd: "curl -sd 'test' http://podinfo-canary:9898/token | grep token"
- name: gloo-acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 10s
metadata:
type: bash
cmd: "curl -sd 'test' -H 'Host: app.example.com' http://gateway-proxy-v2.gloo-system/token | grep token"
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 2m -q 5 -c 2 -host app.example.com http://gateway-proxy-v2.gloo-system"
logCmdOutput: "true"


@@ -0,0 +1,17 @@
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
virtualHost:
domains:
- '*'
name: podinfo
routes:
- matcher:
prefix: /
routeAction:
upstreamGroup:
name: podinfo
namespace: test


@@ -0,0 +1,58 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger-helmtester
namespace: kube-system
labels:
app: flagger-helmtester
spec:
selector:
matchLabels:
app: flagger-helmtester
template:
metadata:
labels:
app: flagger-helmtester
annotations:
prometheus.io/scrape: "true"
spec:
serviceAccountName: tiller
containers:
- name: helmtester
image: weaveworks/flagger-loadtester:0.8.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level=info
- -timeout=1h
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: flagger-helmtester
namespace: kube-system
labels:
app: flagger-helmtester
spec:
type: ClusterIP
selector:
app: flagger-helmtester
ports:
- name: http
port: 80
protocol: TCP
targetPort: http


@@ -0,0 +1,19 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: flagger-loadtester-bats
data:
tests: |
#!/usr/bin/env bats
@test "check message" {
curl -sS http://${URL} | jq -r .message | {
run cut -d $' ' -f1
[ $output = "greetings" ]
}
}
@test "check headers" {
curl -sS http://${URL}/headers | grep X-Request-Id
}


@@ -0,0 +1,67 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger-loadtester
labels:
app: flagger-loadtester
spec:
selector:
matchLabels:
app: flagger-loadtester
template:
metadata:
labels:
app: flagger-loadtester
annotations:
prometheus.io/scrape: "true"
spec:
containers:
- name: loadtester
image: weaveworks/flagger-loadtester:0.12.1
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level=info
- -timeout=1h
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001
# volumeMounts:
# - name: tests
# mountPath: /bats
# readOnly: true
# volumes:
# - name: tests
# configMap:
# name: flagger-loadtester-bats
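To run the bats suite from the `flagger-loadtester-bats` ConfigMap shown earlier, the commented section above can be enabled so the tests are mounted read-only under `/bats` (the `volumeMounts` list belongs to the loadtester container, `volumes` to the pod spec, as the commented indentation indicates):

```yaml
        volumeMounts:
        - name: tests
          mountPath: /bats
          readOnly: true
      volumes:
      - name: tests
        configMap:
          name: flagger-loadtester-bats
```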


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: flagger-loadtester
labels:
app: flagger-loadtester
spec:
type: ClusterIP
selector:
app: flagger-loadtester
ports:
- name: http
port: 80
protocol: TCP
targetPort: http


@@ -4,3 +4,4 @@ metadata:
name: test
labels:
istio-injection: enabled
appmesh.k8s.aws/sidecarInjectorWebhook: enabled


@@ -0,0 +1,70 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# ingress reference
ingressRef:
apiVersion: extensions/v1beta1
kind: Ingress
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
service:
# ClusterIP port number
port: 80
# container port number or name
targetPort: 9898
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# NGINX Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: "latency"
threshold: 0.5
interval: 1m
query: |
histogram_quantile(0.99,
sum(
rate(
http_request_duration_seconds_bucket{
kubernetes_namespace="test",
kubernetes_pod_name=~"podinfo-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)"
}[1m]
)
) by (le)
)
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://app.example.com/"
logCmdOutput: "true"


@@ -0,0 +1,17 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: app.example.com
http:
paths:
- backend:
serviceName: podinfo
servicePort: 9898


@@ -1,34 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.iowa.weavedx.com
- podinfo
http:
- match:
- headers:
user-agent:
regex: ^(?!.*Chrome)(?=.*\bSafari\b).*$
route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 0
- destination:
host: podinfo
port:
number: 9898
weight: 100
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100


@@ -1,25 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.iowa.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100
mirror:
host: podinfo
port:
number: 9898


@@ -1,26 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.iowa.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100
- destination:
host: podinfo
port:
number: 9898
weight: 0


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
type: ClusterIP
selector:
app: podinfo
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-primary
namespace: test
labels:
app: podinfo-primary
spec:
type: ClusterIP
selector:
app: podinfo-primary
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http


@@ -1,30 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.istio.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100
- destination:
host: podinfo
port:
number: 9898
weight: 0
timeout: 10s
retries:
attempts: 3
perTryTimeout: 2s


@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@@ -0,0 +1,19 @@
apiVersion: v1
name: appmesh-gateway
description: Flagger Gateway for AWS App Mesh is an edge L7 load balancer that exposes applications outside the mesh.
version: 1.1.1
appVersion: 1.1.0
home: https://flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
- https://github.com/stefanprodan/appmesh-gateway
maintainers:
- name: Stefan Prodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- flagger
- appmesh
- envoy
- gateway
- ingress


@@ -0,0 +1,87 @@
# Flagger Gateway for App Mesh
[Flagger Gateway for App Mesh](https://github.com/stefanprodan/appmesh-gateway) is an
Envoy-powered load balancer that exposes applications outside the mesh.
The gateway facilitates canary deployments and A/B testing for user-facing web applications running on AWS App Mesh.
## Prerequisites
* Kubernetes >= 1.13
* [App Mesh controller](https://github.com/aws/eks-charts/tree/master/stable/appmesh-controller) >= 0.2.0
* [App Mesh inject](https://github.com/aws/eks-charts/tree/master/stable/appmesh-inject) >= 0.2.0
## Installing the Chart
Add Flagger Helm repository:
```console
$ helm repo add flagger https://flagger.app
```
Create a namespace with App Mesh sidecar injection enabled:
```sh
kubectl create ns flagger-system
kubectl label namespace flagger-system appmesh.k8s.aws/sidecarInjectorWebhook=enabled
```
Install App Mesh Gateway for an existing mesh:
```sh
helm upgrade -i appmesh-gateway flagger/appmesh-gateway \
--namespace flagger-system \
--set mesh.name=global
```
Optionally you can create a mesh at install time:
```sh
helm upgrade -i appmesh-gateway flagger/appmesh-gateway \
--namespace flagger-system \
--set mesh.name=global \
--set mesh.create=true
```
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
To uninstall/delete the `appmesh-gateway` deployment:
```console
helm delete --purge appmesh-gateway
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the chart and their default values.
Parameter | Description | Default
--- | --- | ---
`service.type` | When set to LoadBalancer it creates an AWS NLB | `LoadBalancer`
`proxy.access_log_path` | Envoy access log path; set it to `/dev/stdout` to enable access logs | `/dev/null`
`proxy.image.repository` | image repository | `envoyproxy/envoy`
`proxy.image.tag` | image tag | `<VERSION>`
`proxy.image.pullPolicy` | image pull policy | `IfNotPresent`
`controller.image.repository` | image repository | `weaveworks/flagger-appmesh-gateway`
`controller.image.tag` | image tag | `<VERSION>`
`controller.image.pullPolicy` | image pull policy | `IfNotPresent`
`resources.requests/cpu` | pod CPU request | `100m`
`resources.requests/memory` | pod memory request | `128Mi`
`resources.limits/memory` | pod memory limit | `2Gi`
`nodeSelector` | node labels for pod assignment | `{}`
`tolerations` | list of node taints to tolerate | `[]`
`rbac.create` | if `true`, create and use RBAC resources | `true`
`rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false`
`serviceAccount.create` | If `true`, create a new service account | `true`
`serviceAccount.name` | Service account to be used | None
`mesh.create` | If `true`, create mesh custom resource | `false`
`mesh.name` | The name of the mesh to use | `global`
`mesh.discovery` | The service discovery type to use, can be dns or cloudmap | `dns`
`hpa.enabled` | `true` if HPA resource should be created, metrics-server is required | `true`
`hpa.maxReplicas` | number of max replicas | `3`
`hpa.cpu` | average total CPU usage per pod (1-100) | `99`
`hpa.memory` | average memory usage per pod (100Mi-1Gi) | None
`discovery.optIn` | `true` if only services with the 'expose' annotation are discoverable | `true`
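Because `discovery.optIn` defaults to `true`, the gateway publishes only Services that carry the expose annotation; a hypothetical opt-in looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: test
  annotations:
    gateway.appmesh.k8s.aws/expose: "true"
spec:
  ports:
  - name: http
    port: 9898
```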


@@ -0,0 +1 @@
App Mesh Gateway installed!


@@ -0,0 +1,56 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "appmesh-gateway.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "appmesh-gateway.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "appmesh-gateway.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "appmesh-gateway.labels" -}}
app.kubernetes.io/name: {{ include "appmesh-gateway.name" . }}
helm.sh/chart: {{ include "appmesh-gateway.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "appmesh-gateway.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "appmesh-gateway.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}


@@ -0,0 +1,8 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "appmesh-gateway.serviceAccountName" . }}
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
{{- end }}


@@ -0,0 +1,41 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "appmesh-gateway.fullname" . }}
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
data:
envoy.yaml: |-
admin:
access_log_path: {{ .Values.proxy.access_log_path }}
address:
socket_address:
address: 0.0.0.0
port_value: 8081
dynamic_resources:
ads_config:
api_type: GRPC
grpc_services:
- envoy_grpc:
cluster_name: xds
cds_config:
ads: {}
lds_config:
ads: {}
static_resources:
clusters:
- name: xds
connect_timeout: 0.50s
type: static
http2_protocol_options: {}
load_assignment:
cluster_name: xds
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: 127.0.0.1
port_value: 18000


@@ -0,0 +1,144 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "appmesh-gateway.fullname" . }}
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: {{ include "appmesh-gateway.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "appmesh-gateway.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/part-of: appmesh
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/stats/prometheus"
prometheus.io/port: "8081"
# exclude inbound traffic on port 8080
appmesh.k8s.aws/ports: "444"
# exclude egress traffic to xDS server and Kubernetes API
appmesh.k8s.aws/egressIgnoredPorts: "18000,22,443"
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum | quote }}
spec:
serviceAccountName: {{ include "appmesh-gateway.serviceAccountName" . }}
terminationGracePeriodSeconds: 45
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: {{ include "appmesh-gateway.name" . }}
topologyKey: kubernetes.io/hostname
weight: 100
volumes:
- name: appmesh-gateway-config
configMap:
name: {{ template "appmesh-gateway.fullname" . }}
containers:
- name: controller
image: "{{ .Values.controller.image.repository }}:{{ .Values.controller.image.tag }}"
imagePullPolicy: {{ .Values.controller.image.pullPolicy }}
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
command:
- ./flagger-appmesh-gateway
- --opt-in={{ .Values.discovery.optIn }}
- --gateway-mesh={{ .Values.mesh.name }}
- --gateway-name=$(POD_SERVICE_ACCOUNT)
- --gateway-namespace=$(POD_NAMESPACE)
env:
- name: POD_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: grpc
containerPort: 18000
protocol: TCP
livenessProbe:
initialDelaySeconds: 5
tcpSocket:
port: grpc
readinessProbe:
initialDelaySeconds: 5
tcpSocket:
port: grpc
resources:
limits:
memory: 1Gi
requests:
cpu: 10m
memory: 32Mi
- name: proxy
image: "{{ .Values.proxy.image.repository }}:{{ .Values.proxy.image.tag }}"
imagePullPolicy: {{ .Values.proxy.image.pullPolicy }}
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- -c
- /config/envoy.yaml
- --service-cluster $(POD_NAMESPACE)
- --service-node $(POD_NAME)
- --log-level info
- --base-id 1234
ports:
- name: admin
containerPort: 8081
protocol: TCP
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
initialDelaySeconds: 5
tcpSocket:
port: admin
readinessProbe:
initialDelaySeconds: 5
httpGet:
path: /ready
port: admin
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: appmesh-gateway-config
mountPath: /config
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}


@@ -0,0 +1,28 @@
{{- if .Values.hpa.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "appmesh-gateway.fullname" . }}
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ template "appmesh-gateway.fullname" . }}
minReplicas: {{ .Values.replicaCount }}
maxReplicas: {{ .Values.hpa.maxReplicas }}
metrics:
{{- if .Values.hpa.cpu }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.hpa.cpu }}
{{- end }}
{{- if .Values.hpa.memory }}
- type: Resource
resource:
name: memory
targetAverageValue: {{ .Values.hpa.memory }}
{{- end }}
{{- end }}


@@ -0,0 +1,12 @@
{{- if .Values.mesh.create }}
apiVersion: appmesh.k8s.aws/v1beta1
kind: Mesh
metadata:
name: {{ .Values.mesh.name }}
annotations:
helm.sh/resource-policy: keep
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
spec:
serviceDiscoveryType: {{ .Values.mesh.discovery }}
{{- end }}


@@ -0,0 +1,57 @@
{{- if .Values.rbac.pspEnabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "appmesh-gateway.fullname" . }}
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: false
hostIPC: false
hostNetwork: false
hostPID: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
allowedCapabilities:
- '*'
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "appmesh-gateway.fullname" . }}-psp
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "appmesh-gateway.fullname" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "appmesh-gateway.fullname" . }}-psp
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "appmesh-gateway.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: {{ template "appmesh-gateway.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}


@@ -0,0 +1,39 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "appmesh-gateway.fullname" . }}
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
rules:
- apiGroups:
- ""
resources:
- services
verbs: ["*"]
- apiGroups:
- appmesh.k8s.aws
resources:
- meshes
- meshes/status
- virtualnodes
- virtualnodes/status
- virtualservices
- virtualservices/status
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "appmesh-gateway.fullname" . }}
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "appmesh-gateway.fullname" . }}
subjects:
- name: {{ template "appmesh-gateway.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
kind: ServiceAccount
{{- end }}


@@ -0,0 +1,24 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "appmesh-gateway.fullname" . }}
annotations:
gateway.appmesh.k8s.aws/expose: "false"
{{- if eq .Values.service.type "LoadBalancer" }}
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
{{- end }}
labels:
{{ include "appmesh-gateway.labels" . | indent 4 }}
spec:
type: {{ .Values.service.type }}
{{- if eq .Values.service.type "LoadBalancer" }}
externalTrafficPolicy: Local
{{- end }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: {{ include "appmesh-gateway.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}


@@ -0,0 +1,69 @@
# Default values for appmesh-gateway.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
discovery:
# discovery.optIn `true` if only services with the 'expose' annotation are discoverable
optIn: true
proxy:
access_log_path: /dev/null
image:
repository: docker.io/envoyproxy/envoy
tag: v1.12.0
pullPolicy: IfNotPresent
controller:
image:
repository: weaveworks/flagger-appmesh-gateway
tag: v1.1.0
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
# service.type: When set to LoadBalancer it creates an AWS NLB
type: LoadBalancer
port: 80
hpa:
# hpa.enabled `true` if HPA resource should be created, metrics-server is required
enabled: true
maxReplicas: 3
# hpa.cpu average total CPU usage per pod (1-100)
cpu: 99
# hpa.memory average memory usage per pod (100Mi-1Gi)
memory:
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
nodeSelector: {}
tolerations: []
serviceAccount:
# serviceAccount.create: Whether to create a service account or not
create: true
# serviceAccount.name: The name of the service account to create or use
name: ""
rbac:
# rbac.create: `true` if rbac resources should be created
create: true
# rbac.pspEnabled: `true` if PodSecurityPolicy resources should be created
pspEnabled: false
mesh:
# mesh.create: `true` if mesh resource should be created
create: false
# mesh.name: The name of the mesh to use
name: "global"
# mesh.discovery: The service discovery type to use, can be dns or cloudmap
discovery: dns
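A small override file built from the keys above, for a cluster that wants Envoy access logs on stdout and a mesh created at install time (values illustrative):

```yaml
# my-values.yaml (hypothetical)
proxy:
  access_log_path: /dev/stdout
mesh:
  create: true
  name: global
```

It can then be applied with `helm upgrade -i appmesh-gateway flagger/appmesh-gateway -f my-values.yaml`.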


@@ -1,19 +1,23 @@
apiVersion: v1
name: flagger
version: 0.2.0
appVersion: 0.2.0
kubeVersion: ">=1.9.0-0"
version: 0.23.0
appVersion: 0.23.0
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio routing for traffic shifting and Prometheus metrics for canary analysis.
description: Flagger is a progressive delivery operator for Kubernetes
home: https://flagger.app
icon: https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/logo/flagger-icon.png
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
- https://github.com/stefanprodan/flagger
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- canary
- istio
- gitops
- flagger
- istio
- appmesh
- linkerd
- gloo
- gitops
- canary


@@ -1,15 +1,15 @@
# Flagger
[Flagger](https://flagger.app) is a Kubernetes operator that automates the promotion of canary deployments
using Istio routing for traffic shifting and Prometheus metrics for canary analysis.
[Flagger](https://github.com/weaveworks/flagger) is a Kubernetes operator that automates the promotion of canary
deployments using Istio, Linkerd, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators
like HTTP request success rate, average request duration and pod health.
Based on the KPI analysis a canary is promoted or aborted and the analysis result is published to Slack.
Based on the KPI analysis a canary is promoted or aborted and the analysis result is published to Slack or MS Teams.
## Prerequisites
* Kubernetes >= 1.9
* Istio >= 1.0
* Kubernetes >= 1.11
* Prometheus >= 2.6
## Installing the Chart
@@ -17,16 +17,45 @@ Based on the KPIs analysis a canary is promoted or aborted and the analysis resu
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
$ helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger`:
Install Flagger's custom resource definitions:
```console
$ helm install --name flagger --namespace istio-system flagger/flagger
$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```
To install the chart with the release name `flagger` for Istio:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://prometheus:9090
```
To install the chart with the release name `flagger` for Linkerd:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set crd.create=false \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090
```
To install the chart with the release name `flagger` for AWS App Mesh:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=appmesh-system \
--set crd.create=false \
--set meshProvider=appmesh \
--set metricsServer=http://appmesh-prometheus:9090
```
The command deploys Flagger on the Kubernetes cluster in the namespace of the chosen mesh provider.
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
@@ -45,15 +74,26 @@ The following tables lists the configurable parameters of the Flagger chart and
Parameter | Description | Default
--- | --- | ---
`image.repository` | image repository | `quay.io/stefanprodan/flagger`
`image.repository` | image repository | `weaveworks/flagger`
`image.tag` | image tag | `<VERSION>`
`image.pullPolicy` | image pull policy | `IfNotPresent`
`controlLoopInterval` | wait interval between checks | `10s`
`metricsServer` | Prometheus URL | `http://prometheus.istio-system:9090`
`prometheus.install` | if `true`, installs Prometheus configured to scrape all pods in the cluster including the App Mesh sidecar | `false`
`metricsServer` | Prometheus URL, used when `prometheus.install` is `false` | `http://prometheus.istio-system:9090`
`selectorLabels` | list of labels that Flagger uses to create pod selectors | `app,name,app.kubernetes.io/name`
`slack.url` | Slack incoming webhook | None
`slack.channel` | Slack channel | None
`slack.user` | Slack username | `flagger`
`eventWebhook` | If set, Flagger will publish events to the given webhook | None
`msteams.url` | Microsoft Teams incoming webhook | None
`podMonitor.enabled` | if `true`, create a PodMonitor for [monitoring the metrics](https://docs.flagger.app/usage/monitoring#metrics) | `false`
`podMonitor.namespace` | the namespace where the PodMonitor is created | the same namespace
`podMonitor.interval` | interval at which metrics should be scraped | `15s`
`podMonitor.additionalLabels` | additional labels to add to the PodMonitor | `{}`
`leaderElection.enabled` | leader election must be enabled when running more than one replica | `false`
`leaderElection.replicaCount` | number of replicas | `1`
`ingressAnnotationsPrefix` | annotations prefix for ingresses | `custom.ingress.kubernetes.io`
`rbac.create` | if `true`, create and use RBAC resources | `true`
`rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false`
`crd.create` | if `true`, create Flagger's CRDs | `true`
`resources.requests/cpu` | pod CPU request | `10m`
`resources.requests/memory` | pod memory request | `32Mi`
@@ -67,8 +107,10 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace istio-system \
--set controlLoopInterval=1m
--namespace flagger-system \
--set crd.create=false \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
@@ -80,5 +122,5 @@ $ helm upgrade -i flagger flagger/flagger \
```
> **Tip**: You can use the default [values.yaml](values.yaml)
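As a sketch, a custom values file overriding the notification settings could look like this (both webhook URLs are placeholders):

```yaml
# my-values.yaml (sketch; webhook URLs are placeholders)
slack:
  url: https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
  channel: general
  user: flagger
msteams:
  url: https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK
```

It would then be passed with `helm upgrade -i flagger flagger/flagger -f my-values.yaml`.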

View File

@@ -3,13 +3,18 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1alpha2
version: v1alpha3
versions:
- name: v1alpha2
- name: v1alpha3
served: true
storage: true
- name: v1alpha2
served: true
storage: false
- name: v1alpha1
served: true
storage: false
@@ -17,7 +22,41 @@ spec:
plural: canaries
singular: canary
kind: Canary
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
JSONPath: .status.phase
- name: Weight
type: string
JSONPath: .status.canaryWeight
- name: FailedChecks
type: string
JSONPath: .status.failedChecks
priority: 1
- name: Interval
type: string
JSONPath: .spec.canaryAnalysis.interval
priority: 1
- name: Mirror
type: boolean
JSONPath: .spec.canaryAnalysis.mirror
priority: 1
- name: StepWeight
type: string
JSONPath: .spec.canaryAnalysis.stepWeight
priority: 1
- name: MaxWeight
type: string
JSONPath: .spec.canaryAnalysis.maxWeight
priority: 1
- name: LastTransitionTime
type: string
JSONPath: .status.lastTransitionTime
validation:
openAPIV3Schema:
properties:
@@ -27,9 +66,17 @@ spec:
- service
- canaryAnalysis
properties:
provider:
description: Traffic management provider
type: string
metricsServer:
description: Prometheus URL
type: string
progressDeadlineSeconds:
description: Deployment progress deadline
type: number
targetRef:
description: Deployment selector
type: object
required: ['apiVersion', 'kind', 'name']
properties:
@@ -40,7 +87,23 @@ spec:
name:
type: string
autoscalerRef:
type: object
description: HPA selector
anyOf:
- type: string
- type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
ingressRef:
description: NGINX ingress selector
anyOf:
- type: string
- type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
@@ -53,44 +116,211 @@ spec:
type: object
required: ['port']
properties:
name:
description: Kubernetes service name
type: string
port:
description: Container port number
type: number
portName:
description: Container port name
type: string
targetPort:
description: Container target port name
anyOf:
- type: string
- type: number
portDiscovery:
description: Enable port discovery
type: boolean
meshName:
description: AppMesh mesh name
type: string
backends:
description: AppMesh backend array
anyOf:
- type: string
- type: array
timeout:
description: Istio HTTP or gRPC request timeout
type: string
trafficPolicy:
description: Istio traffic policy
anyOf:
- type: string
- type: object
match:
description: Istio URL match conditions
anyOf:
- type: string
- type: array
rewrite:
description: Istio URL rewrite
anyOf:
- type: string
- type: object
headers:
description: Istio headers operations
anyOf:
- type: string
- type: object
corsPolicy:
description: Istio CORS policy
anyOf:
- type: string
- type: object
gateways:
description: Istio gateways list
anyOf:
- type: string
- type: array
hosts:
description: Istio hosts list
anyOf:
- type: string
- type: array
skipAnalysis:
type: boolean
canaryAnalysis:
properties:
interval:
description: Canary schedule interval
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic percentage routed to canary
type: number
stepWeight:
description: Canary incremental traffic percentage step
type: number
mirror:
description: Mirror traffic to canary before shifting
type: boolean
match:
description: A/B testing match conditions
anyOf:
- type: string
- type: array
metrics:
description: Prometheus query list for this canary
type: array
properties:
items:
type: object
required: ['name', 'interval', 'threshold']
required: ['name', 'threshold']
properties:
name:
description: Name of the Prometheus metric
type: string
interval:
description: Interval of the PromQL query
type: string
pattern: "^[0-9]+(m)"
pattern: "^[0-9]+(m|s)"
threshold:
description: Max scalar value accepted for this metric
type: number
query:
description: Prometheus query
type: string
webhooks:
description: Webhook list for this canary
type: array
properties:
items:
type: object
required: ['name', 'url', 'timeout']
required: ["name", "url"]
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook, one of confirm-rollout, pre-rollout, rollout, confirm-promotion, post-rollout, event or rollback
type: string
enum:
- ""
- confirm-rollout
- pre-rollout
- rollout
- confirm-promotion
- post-rollout
- event
- rollback
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(s)"
pattern: "^[0-9]+(m|s)"
metadata:
description: Metadata (key-value pairs) for this webhook
anyOf:
- type: string
- type: object
status:
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Promoting
- Finalising
- Succeeded
- Failed
canaryWeight:
description: Traffic weight percentage routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
properties:
items:
type: object
required: ['type', 'status', 'reason']
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string
{{- end }}
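To put the webhook schema above in context, a hedged sketch of a Canary analysis using the newly added `event` and `rollback` types (the receiver URLs are hypothetical):

```yaml
canaryAnalysis:
  webhooks:
    - name: send-events
      type: event
      url: http://event-receiver.default/      # hypothetical event sink
    - name: gate-rollback
      type: rollback
      url: http://gatekeeper.default/check     # hypothetical rollback gate
      timeout: 30s
```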

View File

@@ -8,7 +8,7 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
replicas: 1
replicas: {{ .Values.leaderElection.replicaCount }}
strategy:
type: Recreate
selector:
@@ -20,8 +20,26 @@ spec:
labels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
{{- if .Values.image.pullSecret }}
imagePullSecrets:
- name: {{ .Values.image.pullSecret }}
{{- end }}
containers:
- name: flagger
securityContext:
@@ -35,13 +53,38 @@ spec:
command:
- ./flagger
- -log-level=info
- -control-loop-interval={{ .Values.controlLoopInterval }}
{{- if .Values.meshProvider }}
- -mesh-provider={{ .Values.meshProvider }}
{{- end }}
{{- if .Values.prometheus.install }}
- -metrics-server=http://{{ template "flagger.fullname" . }}-prometheus:9090
{{- else }}
- -metrics-server={{ .Values.metricsServer }}
{{- end }}
{{- if .Values.selectorLabels }}
- -selector-labels={{ .Values.selectorLabels }}
{{- end }}
{{- if .Values.namespace }}
- -namespace={{ .Values.namespace }}
{{- end }}
{{- if .Values.slack.url }}
- -slack-url={{ .Values.slack.url }}
- -slack-user={{ .Values.slack.user }}
- -slack-channel={{ .Values.slack.channel }}
{{- end }}
{{- if .Values.msteams.url }}
- -msteams-url={{ .Values.msteams.url }}
{{- end }}
{{- if .Values.leaderElection.enabled }}
- -enable-leader-election=true
- -leader-election-namespace={{ .Release.Namespace }}
{{- end }}
{{- if .Values.ingressAnnotationsPrefix }}
- -ingress-annotations-prefix={{ .Values.ingressAnnotationsPrefix }}
{{- end }}
{{- if .Values.eventWebhook }}
- -event-webhook={{ .Values.eventWebhook }}
{{- end }}
livenessProbe:
exec:
command:
@@ -62,14 +105,14 @@ spec:
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
{{- if .Values.env }}
env:
{{ toYaml .Values.env | indent 12 }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}

View File

@@ -0,0 +1,27 @@
{{- if .Values.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- range $k, $v := .Values.podMonitor.additionalLabels }}
{{ $k }}: {{ $v | quote }}
{{- end }}
name: {{ include "flagger.fullname" . }}
namespace: {{ .Values.podMonitor.namespace | default .Release.Namespace }}
spec:
podMetricsEndpoints:
- interval: {{ .Values.podMonitor.interval }}
path: /metrics
port: http
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
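To render the PodMonitor above, the chart values would be flipped on along these lines (the namespace and the extra label are assumptions about a Prometheus Operator setup):

```yaml
podMonitor:
  enabled: true
  namespace: monitoring            # assumes the Prometheus Operator lives here
  interval: 15s
  additionalLabels:
    release: prometheus-operator   # hypothetical label the Prometheus instance selects on
```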

View File

@@ -0,0 +1,284 @@
{{- if .Values.prometheus.install }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "flagger.fullname" . }}-prometheus
subjects:
- kind: ServiceAccount
name: {{ template "flagger.serviceAccountName" . }}-prometheus
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "flagger.serviceAccountName" . }}-prometheus
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
data:
prometheus.yml: |-
global:
scrape_interval: 5s
scrape_configs:
# Scrape config for AppMesh Envoy sidecar
- job_name: 'appmesh-envoy'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_name]
action: keep
regex: '^envoy$'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: ${1}:9901
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
# Exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
# Scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# scrape config for cAdvisor
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [__name__]
regex: (container|machine)_(cpu|memory|network|fs)_(.+)
action: keep
- source_labels: [__name__]
regex: container_memory_failures_total
action: drop
# scrape config for pods
- job_name: kubernetes-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- source_labels: [ __address__ ]
regex: '.*9901.*'
action: drop
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}-prometheus
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "flagger.name" . }}-prometheus
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
appmesh.k8s.aws/sidecarInjectorWebhook: disabled
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}-prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.15.2"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=2h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
memory: 128Mi
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
- name: data-volume
mountPath: /prometheus/data
volumes:
- name: config-volume
configMap:
name: {{ template "flagger.fullname" . }}-prometheus
- name: data-volume
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
selector:
app.kubernetes.io/name: {{ template "flagger.name" . }}-prometheus
app.kubernetes.io/instance: {{ .Release.Name }}
ports:
- name: http
protocol: TCP
port: 9090
{{- end }}
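Note that the kubernetes-pods scrape job above only keeps targets that opt in through annotations; a sketch of the pod template metadata a workload would need (the port is illustrative):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9898"   # illustrative metrics port
```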

View File

@@ -0,0 +1,66 @@
{{- if .Values.rbac.pspEnabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: false
hostIPC: false
hostNetwork: false
hostPID: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
allowedCapabilities:
- '*'
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "flagger.fullname" . }}-psp
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "flagger.fullname" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "flagger.fullname" . }}-psp
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "flagger.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: {{ template "flagger.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
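A sketch of the values switch that renders the pod security policy and bindings above:

```yaml
rbac:
  create: true
  # renders the PodSecurityPolicy, ClusterRole and RoleBinding above
  pspEnabled: true
```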

View File

@@ -9,11 +9,83 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups: ['*']
resources: ['*']
verbs: ['*']
- nonResourceURLs: ['*']
verbs: ['*']
- apiGroups:
- ""
resources:
- events
- configmaps
- secrets
- services
verbs: ["*"]
- apiGroups:
- apps
resources:
- deployments
verbs: ["*"]
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs: ["*"]
- apiGroups:
- "extensions"
resources:
- ingresses
- ingresses/status
verbs: ["*"]
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
verbs: ["*"]
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/status
- destinationrules
- destinationrules/status
verbs: ["*"]
- apiGroups:
- appmesh.k8s.aws
resources:
- meshes
- meshes/status
- virtualnodes
- virtualnodes/status
- virtualservices
- virtualservices/status
verbs: ["*"]
- apiGroups:
- split.smi-spec.io
resources:
- trafficsplits
verbs: ["*"]
- apiGroups:
- gloo.solo.io
resources:
- settings
- upstreams
- upstreamgroups
- proxies
- virtualservices
verbs: ["*"]
- apiGroups:
- gateway.solo.io
resources:
- virtualservices
- gateways
verbs: ["*"]
- apiGroups:
- projectcontour.io
resources:
- httpproxies
verbs: ["*"]
- nonResourceURLs:
- /version
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding

View File

@@ -1,12 +1,27 @@
# Default values for flagger.
image:
repository: quay.io/stefanprodan/flagger
tag: 0.2.0
repository: weaveworks/flagger
tag: 0.23.0
pullPolicy: IfNotPresent
pullSecret:
controlLoopInterval: "10s"
metricsServer: "http://prometheus.istio-system.svc.cluster.local:9090"
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
appmesh.k8s.aws/sidecarInjectorWebhook: disabled
metricsServer: "http://prometheus:9090"
# accepted values are kubernetes, istio, linkerd, appmesh, nginx, gloo or supergloo:mesh.namespace (defaults to istio)
meshProvider: ""
# single namespace restriction
namespace: ""
# list of pod labels that Flagger uses to create pod selectors
# defaults to: app,name,app.kubernetes.io/name
selectorLabels: ""
slack:
user: flagger
@@ -14,6 +29,36 @@ slack:
# incoming webhook https://api.slack.com/incoming-webhooks
url:
# when specified, flagger will publish events to the provided webhook
eventWebhook: ""
msteams:
# MS Teams incoming webhook URL
url:
podMonitor:
enabled: false
namespace:
interval: 15s
additionalLabels: {}
#env:
#- name: SLACK_URL
# valueFrom:
# secretKeyRef:
# name: slack
# key: url
#- name: MSTEAMS_URL
# valueFrom:
# secretKeyRef:
# name: msteams
# key: url
env: []
leaderElection:
enabled: false
replicaCount: 1
serviceAccount:
# serviceAccount.create: Whether to create a service account or not
create: true
@@ -23,6 +68,8 @@ serviceAccount:
rbac:
# rbac.create: `true` if rbac resources should be created
create: true
# rbac.pspEnabled: `true` if PodSecurityPolicy resources should be created
pspEnabled: false
crd:
# crd.create: `true` if custom resource definitions should be created
@@ -43,4 +90,6 @@ nodeSelector: {}
tolerations: []
affinity: {}
prometheus:
# to be used with AppMesh or nginx ingress
install: false
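For App Mesh or NGINX ingress setups without an existing Prometheus, a sketch of enabling the bundled instance (the deployment template then points `-metrics-server` at it and ignores `metricsServer`):

```yaml
meshProvider: appmesh
prometheus:
  # installs the embedded Prometheus and overrides metricsServer
  install: true
```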

View File

@@ -1,13 +1,20 @@
apiVersion: v1
name: grafana
version: 0.1.0
appVersion: 5.4.2
version: 1.4.0
appVersion: 6.5.1
description: Grafana dashboards for monitoring Flagger canary deployments
icon: https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/logo/flagger-icon.png
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
home: https://flagger.app
sources:
- https://github.com/stefanprodan/flagger
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- flagger
- grafana
- canary
- istio
- appmesh

View File

@@ -1,13 +1,12 @@
# Flagger Grafana
Grafana dashboards for monitoring progressive deployments powered by Istio, Prometheus and Flagger.
Grafana dashboards for monitoring progressive deployments powered by Flagger and Prometheus.
![flagger-grafana](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/screens/grafana-canary-analysis.png)
![flagger-grafana](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/screens/grafana-canary-analysis.png)
## Prerequisites
* Kubernetes >= 1.9
* Istio >= 1.0
* Kubernetes >= 1.11
* Prometheus >= 2.6
## Installing the Chart
@@ -18,14 +17,20 @@ Add Flagger Helm repository:
helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger-grafana`:
To install the chart for Istio run:
```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus:9090 \
--set user=admin \
--set password=admin
--set url=http://prometheus:9090
```
To install the chart for AWS App Mesh run:
```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=appmesh-system \
--set url=http://appmesh-prometheus:9090
```
The command deploys Grafana on the Kubernetes cluster in the default namespace.
@@ -56,10 +61,7 @@ Parameter | Description | Default
`affinity` | node/pod affinities | `node`
`nodeSelector` | node labels for pod assignment | `{}`
`service.type` | type of service | `ClusterIP`
`url` | Prometheus URL, used when Weave Cloud token is empty | `http://prometheus:9090`
`token` | Weave Cloud token | `none`
`user` | Grafana admin username | `admin`
`password` | Grafana admin password | `admin`
`url` | Prometheus URL | `http://prometheus:9090`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -75,5 +77,5 @@ helm install flagger/grafana --name flagger-grafana -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -2,7 +2,6 @@
"annotations": {
"list": [
{
"$$hashKey": "object:1587",
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
@@ -16,8 +15,8 @@
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": null,
"iteration": 1534587617141,
"id": 1,
"iteration": 1549736611069,
"links": [],
"panels": [
{
@@ -179,7 +178,6 @@
"tableColumn": "",
"targets": [
{
"$$hashKey": "object:2857",
"expr": "sum(irate(istio_requests_total{reporter=\"destination\",destination_workload_namespace=~\"$namespace\",destination_workload=~\"$primary\",response_code!~\"5.*\"}[30s])) / sum(irate(istio_requests_total{reporter=\"destination\",destination_workload_namespace=~\"$namespace\",destination_workload=~\"$primary\"}[30s]))",
"format": "time_series",
"intervalFactor": 1,
@@ -344,7 +342,6 @@
"tableColumn": "",
"targets": [
{
"$$hashKey": "object:2810",
"expr": "sum(irate(istio_requests_total{reporter=\"destination\",destination_workload_namespace=~\"$namespace\",destination_workload=~\"$canary\",response_code!~\"5.*\"}[30s])) / sum(irate(istio_requests_total{reporter=\"destination\",destination_workload_namespace=~\"$namespace\",destination_workload=~\"$canary\"}[30s]))",
"format": "time_series",
"intervalFactor": 1,
@@ -363,7 +360,7 @@
"value": "null"
}
],
"valueName": "avg"
"valueName": "current"
},
{
"aliasColors": {},
@@ -432,6 +429,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Primary: Request Duration",
"tooltip": {
@@ -464,7 +462,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -533,6 +535,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Canary: Request Duration",
"tooltip": {
@@ -565,7 +568,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"content": "<div class=\"dashboard-header text-center\">\n<span>USE: $canary.$namespace</span>\n</div>",
@@ -623,7 +630,6 @@
"steppedLine": false,
"targets": [
{
"$$hashKey": "object:1685",
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
"format": "time_series",
"hide": false,
@@ -634,6 +640,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Primary: CPU Usage by Pod",
"tooltip": {
@@ -651,7 +658,6 @@
},
"yaxes": [
{
"$$hashKey": "object:1845",
"format": "s",
"label": "CPU seconds / second",
"logBase": 1,
@@ -660,7 +666,6 @@
"show": true
},
{
"$$hashKey": "object:1846",
"format": "short",
"label": null,
"logBase": 1,
@@ -668,7 +673,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -711,7 +720,6 @@
"steppedLine": false,
"targets": [
{
"$$hashKey": "object:1685",
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
"format": "time_series",
"hide": false,
@@ -722,6 +730,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Canary: CPU Usage by Pod",
"tooltip": {
@@ -739,7 +748,6 @@
},
"yaxes": [
{
"$$hashKey": "object:1845",
"format": "s",
"label": "CPU seconds / second",
"logBase": 1,
@@ -748,7 +756,6 @@
"show": true
},
{
"$$hashKey": "object:1846",
"format": "short",
"label": null,
"logBase": 1,
@@ -756,7 +763,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -799,7 +810,6 @@
"steppedLine": false,
"targets": [
{
"$$hashKey": "object:1685",
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
"format": "time_series",
"hide": false,
@@ -811,6 +821,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Primary: Memory Usage by Pod",
"tooltip": {
@@ -828,7 +839,6 @@
},
"yaxes": [
{
"$$hashKey": "object:1845",
"decimals": null,
"format": "bytes",
"label": "",
@@ -838,7 +848,6 @@
"show": true
},
{
"$$hashKey": "object:1846",
"format": "short",
"label": null,
"logBase": 1,
@@ -846,7 +855,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -889,7 +902,6 @@
"steppedLine": false,
"targets": [
{
"$$hashKey": "object:1685",
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
"format": "time_series",
"hide": false,
@@ -901,6 +913,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Canary: Memory Usage by Pod",
"tooltip": {
@@ -918,7 +931,6 @@
},
"yaxes": [
{
"$$hashKey": "object:1845",
"decimals": null,
"format": "bytes",
"label": "",
@@ -928,7 +940,6 @@
"show": true
},
{
"$$hashKey": "object:1846",
"format": "short",
"label": null,
"logBase": 1,
@@ -936,7 +947,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -975,12 +990,10 @@
"renderer": "flot",
"seriesOverrides": [
{
"$$hashKey": "object:3641",
"alias": "received",
"color": "#f9d9f9"
},
{
"$$hashKey": "object:3649",
"alias": "transmited",
"color": "#f29191"
}
@@ -990,7 +1003,6 @@
"steppedLine": false,
"targets": [
{
"$$hashKey": "object:2598",
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m])) ",
"format": "time_series",
"intervalFactor": 1,
@@ -998,7 +1010,6 @@
"refId": "A"
},
{
"$$hashKey": "object:3245",
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
@@ -1008,6 +1019,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Primary: Network I/O",
"tooltip": {
@@ -1025,7 +1037,6 @@
},
"yaxes": [
{
"$$hashKey": "object:1845",
"decimals": null,
"format": "Bps",
"label": "",
@@ -1035,7 +1046,6 @@
"show": true
},
{
"$$hashKey": "object:1846",
"format": "short",
"label": null,
"logBase": 1,
@@ -1043,7 +1053,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -1082,12 +1096,10 @@
"renderer": "flot",
"seriesOverrides": [
{
"$$hashKey": "object:3641",
"alias": "received",
"color": "#f9d9f9"
},
{
"$$hashKey": "object:3649",
"alias": "transmited",
"color": "#f29191"
}
@@ -1097,7 +1109,6 @@
"steppedLine": false,
"targets": [
{
"$$hashKey": "object:2598",
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m])) ",
"format": "time_series",
"intervalFactor": 1,
@@ -1105,7 +1116,6 @@
"refId": "A"
},
{
"$$hashKey": "object:3245",
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
@@ -1115,6 +1125,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Canary: Network I/O",
"tooltip": {
@@ -1132,7 +1143,6 @@
},
"yaxes": [
{
"$$hashKey": "object:1845",
"decimals": null,
"format": "Bps",
"label": "",
@@ -1142,7 +1152,6 @@
"show": true
},
{
"$$hashKey": "object:1846",
"format": "short",
"label": null,
"logBase": 1,
@@ -1150,7 +1159,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"content": "<div class=\"dashboard-header text-center\">\n<span>IN/OUTBOUND: $canary.$namespace</span>\n</div>",
@@ -1205,7 +1218,6 @@
"steppedLine": false,
"targets": [
{
"$$hashKey": "object:1953",
"expr": "round(sum(irate(istio_requests_total{connection_security_policy=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$primary\", reporter=\"destination\"}[30s])) by (source_workload, source_workload_namespace, response_code), 0.001)",
"format": "time_series",
"hide": false,
@@ -1215,7 +1227,6 @@
"step": 2
},
{
"$$hashKey": "object:1954",
"expr": "round(sum(irate(istio_requests_total{connection_security_policy!=\"mutual_tls\", destination_workload_namespace=~\"$namespace\", destination_workload=~\"$primary\", reporter=\"destination\"}[30s])) by (source_workload, source_workload_namespace, response_code), 0.001)",
"format": "time_series",
"hide": false,
@@ -1227,6 +1238,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Primary: Incoming Requests by Source And Response Code",
"tooltip": {
@@ -1246,7 +1258,6 @@
},
"yaxes": [
{
"$$hashKey": "object:1999",
"format": "ops",
"label": null,
"logBase": 1,
@@ -1255,7 +1266,6 @@
"show": true
},
{
"$$hashKey": "object:2000",
"format": "short",
"label": null,
"logBase": 1,
@@ -1263,7 +1273,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -1323,6 +1337,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Canary: Incoming Requests by Source And Response Code",
"tooltip": {
@@ -1357,7 +1372,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -1416,6 +1435,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Primary: Outgoing Requests by Destination And Response Code",
"tooltip": {
@@ -1450,7 +1470,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
@@ -1509,6 +1533,7 @@
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Canary: Outgoing Requests by Destination And Response Code",
"tooltip": {
@@ -1543,7 +1568,11 @@
"min": null,
"show": false
}
]
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": "10s",
@@ -1554,11 +1583,9 @@
"list": [
{
"allValue": null,
"current": {
"text": "demo",
"value": "demo"
},
"current": null,
"datasource": "prometheus",
"definition": "",
"hide": 0,
"includeAll": false,
"label": "Namespace",
@@ -1568,6 +1595,7 @@
"query": "query_result(sum(istio_requests_total) by (destination_workload_namespace) or sum(istio_tcp_sent_bytes_total) by (destination_workload_namespace))",
"refresh": 1,
"regex": "/.*_namespace=\"([^\"]*).*/",
"skipUrlSync": false,
"sort": 0,
"tagValuesQuery": "",
"tags": [],
@@ -1577,20 +1605,19 @@
},
{
"allValue": null,
"current": {
"text": "primary",
"value": "primary"
},
"current": null,
"datasource": "prometheus",
"definition": "",
"hide": 0,
"includeAll": false,
"label": "Primary",
"multi": false,
"name": "primary",
"options": [],
"query": "query_result(sum(istio_requests_total{destination_workload_namespace=~\"$namespace\"}) by (destination_service_name))",
"query": "query_result(sum(istio_requests_total{destination_workload_namespace=~\"$namespace\"}) by (destination_workload))",
"refresh": 1,
"regex": "/.*destination_service_name=\"([^\"]*).*/",
"regex": "/.*destination_workload=\"([^\"]*).*/",
"skipUrlSync": false,
"sort": 1,
"tagValuesQuery": "",
"tags": [],
@@ -1600,20 +1627,19 @@
},
{
"allValue": null,
"current": {
"text": "canary",
"value": "canary"
},
"current": null,
"datasource": "prometheus",
"definition": "",
"hide": 0,
"includeAll": false,
"label": "Canary",
"multi": false,
"name": "canary",
"options": [],
"query": "query_result(sum(istio_requests_total{destination_workload_namespace=~\"$namespace\"}) by (destination_service_name))",
"query": "query_result(sum(istio_requests_total{destination_workload_namespace=~\"$namespace\"}) by (destination_workload))",
"refresh": 1,
"regex": "/.*destination_service_name=\"([^\"]*).*/",
"regex": "/.*destination_workload=\"([^\"]*).*/",
"skipUrlSync": false,
"sort": 1,
"tagValuesQuery": "",
"tags": [],
@@ -1653,7 +1679,7 @@
]
},
"timezone": "",
"title": "Canary analysis",
"uid": "RdykD7tiz",
"version": 2
}
"title": "Istio Canary",
"uid": "flagger-istio",
"version": 3
}

View File

@@ -1,15 +1,7 @@
1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "grafana.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w {{ template "grafana.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "grafana.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "grafana.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
{{- end }}
1. Run the port forward command:
kubectl -n {{ .Release.Namespace }} port-forward svc/{{ .Release.Name }} 3000:80
2. Navigate to:
http://localhost:3000

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "grafana.fullname" . }}
@@ -20,6 +20,9 @@ spec:
release: {{ .Release.Name }}
annotations:
prometheus.io/scrape: 'false'
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
@@ -38,12 +41,21 @@ spec:
# path: /
# port: http
env:
- name: GF_PATHS_PROVISIONING
value: /etc/grafana/provisioning/
{{- if .Values.password }}
- name: GF_SECURITY_ADMIN_USER
value: {{ .Values.user }}
- name: GF_SECURITY_ADMIN_PASSWORD
value: {{ .Values.password }}
- name: GF_PATHS_PROVISIONING
value: /etc/grafana/provisioning/
{{- else }}
- name: GF_AUTH_BASIC_ENABLED
value: "false"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Admin
{{- end }}
volumeMounts:
- name: grafana
mountPath: /var/lib/grafana

View File

@@ -6,9 +6,11 @@ replicaCount: 1
image:
repository: grafana/grafana
tag: 5.4.2
tag: 6.5.1
pullPolicy: IfNotPresent
podAnnotations: {}
service:
type: ClusterIP
port: 80
@@ -28,7 +30,7 @@ tolerations: []
affinity: {}
user: admin
password: admin
password:
# Istio Prometheus instance
url: http://prometheus:9090
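In the values above, `password` is left empty, which the deployment template translates into anonymous Admin access. A sketch of overriding it to restore basic auth (the password value is a placeholder):

```yaml
user: admin
# setting a password renders the GF_SECURITY_ADMIN_* env vars
# and disables the anonymous Admin fallback
password: change-me   # placeholder
```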

View File

@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,23 @@
apiVersion: v1
name: loadtester
version: 0.12.1
appVersion: 0.12.1
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger's load testing services based on rakyll/hey and bojand/ghz that generates traffic during canary analysis when configured as a webhook.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- flagger
- istio
- appmesh
- linkerd
- gloo
- gitops
- load testing

View File

@@ -0,0 +1,78 @@
# Flagger load testing service
[Flagger's](https://github.com/weaveworks/flagger) load testing service is based on
[rakyll/hey](https://github.com/rakyll/hey)
and can be used to generate traffic during canary analysis when configured as a webhook.
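A hedged sketch of wiring the tester into a canary analysis as a rollout webhook (the service URL follows the chart defaults; the hey command and target are illustrative):

```yaml
webhooks:
  - name: load-test
    url: http://flagger-loadtester.test/
    timeout: 5s
    metadata:
      # command executed during the analysis; the target URL is illustrative
      cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```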
## Prerequisites
* Kubernetes >= 1.11
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger-loadtester`:
```console
helm upgrade -i flagger-loadtester flagger/loadtester
```
The command deploys the load tester on the Kubernetes cluster in the default namespace.
> **Tip**: Note that the namespace where you deploy the load tester must have Istio or App Mesh sidecar injection enabled
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
To uninstall/delete the `flagger-loadtester` deployment:
```console
helm delete --purge flagger-loadtester
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the load tester chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | Image repository | `quay.io/stefanprodan/flagger-loadtester`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`image.tag` | Image tag | `<VERSION>`
`replicaCount` | Desired number of pods | `1`
`serviceAccountName` | Kubernetes service account name | `none`
`resources.requests.cpu` | CPU requests | `10m`
`resources.requests.memory` | Memory requests | `64Mi`
`tolerations` | List of node taints to tolerate | `[]`
`affinity` | node/pod affinities | `node`
`nodeSelector` | Node labels for pod assignment | `{}`
`service.type` | Type of service | `ClusterIP`
`service.port` | ClusterIP port | `80`
`cmd.timeout` | Command execution timeout | `1h`
`logLevel` | Log level can be debug, info, warning, error or panic | `info`
`meshName` | AWS App Mesh name | `none`
`backends` | AWS App Mesh virtual services | `none`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install flagger/loadtester --name flagger-loadtester
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install flagger/loadtester --name flagger-loadtester -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)

View File

@@ -0,0 +1 @@
Flagger's load testing service is available at http://{{ include "loadtester.fullname" . }}.{{ .Release.Namespace }}/

View File

@@ -0,0 +1,32 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "loadtester.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "loadtester.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "loadtester.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

View File

@@ -0,0 +1,75 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "loadtester.name" . }}
template:
metadata:
labels:
app: {{ include "loadtester.name" . }}
annotations:
appmesh.k8s.aws/ports: "444"
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.serviceAccountName }}
serviceAccountName: {{ .Values.serviceAccountName }}
{{- else if .Values.rbac.create }}
serviceAccountName: {{ include "loadtester.fullname" . }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level={{ .Values.logLevel }}
- -timeout={{ .Values.cmd.timeout }}
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -0,0 +1,54 @@
---
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
{{- if eq .Values.rbac.scope "cluster" }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
metadata:
name: {{ template "loadtester.fullname" . }}
labels:
helm.sh/chart: {{ template "loadtester.chart" . }}
app.kubernetes.io/name: {{ template "loadtester.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
{{ toYaml .Values.rbac.rules | indent 2 }}
---
apiVersion: rbac.authorization.k8s.io/v1
{{- if eq .Values.rbac.scope "cluster" }}
kind: ClusterRoleBinding
{{- else }}
kind: RoleBinding
{{- end }}
metadata:
name: {{ template "loadtester.fullname" . }}
labels:
helm.sh/chart: {{ template "loadtester.chart" . }}
app.kubernetes.io/name: {{ template "loadtester.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
{{- if eq .Values.rbac.scope "cluster" }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
name: {{ template "loadtester.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "loadtester.fullname" . }}
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "loadtester.fullname" . }}
labels:
helm.sh/chart: {{ template "loadtester.chart" . }}
app.kubernetes.io/name: {{ template "loadtester.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ include "loadtester.name" . }}

View File

@@ -0,0 +1,27 @@
{{- if .Values.meshName }}
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
meshName: {{ .Values.meshName }}
listeners:
- portMapping:
port: 444
protocol: http
serviceDiscovery:
dns:
hostName: {{ include "loadtester.fullname" . }}.{{ .Release.Namespace }}
{{- if .Values.backends }}
backends:
{{- range .Values.backends }}
- virtualService:
virtualServiceName: {{ . }}
{{- end }}
{{- end }}
{{- end }}
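A minimal sketch of the values that render the virtual node above (the mesh and backend names are illustrative):

```yaml
meshName: global
backends:
  - podinfo.test   # App Mesh virtual service the tester is allowed to call
```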

View File

@@ -0,0 +1,54 @@
replicaCount: 1
image:
repository: weaveworks/flagger-loadtester
tag: 0.12.1
pullPolicy: IfNotPresent
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
logLevel: info
cmd:
timeout: 1h
nameOverride: ""
fullnameOverride: ""
service:
type: ClusterIP
port: 80
resources:
requests:
cpu: 10m
memory: 64Mi
nodeSelector: {}
tolerations: []
affinity: {}
rbac:
# rbac.create: `true` if rbac resources should be created
create: false
# rbac.scope: `cluster` to create cluster-scope rbac resources (ClusterRole/ClusterRoleBinding)
# otherwise, namespace-scope rbac resources will be created (Role/RoleBinding)
scope:
# rbac.rules: array of rules to apply to the role. example:
# rules:
# - apiGroups: [""]
# resources: ["pods"]
# verbs: ["list", "get"]
rules: []
# name of an existing service account to use - if not creating rbac resources
serviceAccountName: ""
# App Mesh virtual node settings
meshName: ""
#backends:
# - app1.namespace
# - app2.namespace

View File

@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

charts/podinfo/Chart.yaml Normal file
View File

@@ -0,0 +1,14 @@
apiVersion: v1
version: 3.1.0
appVersion: 3.1.0
name: podinfo
engine: gotpl
description: Flagger canary deployment demo application
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
- https://github.com/stefanprodan/podinfo
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com

charts/podinfo/README.md Normal file
View File

@@ -0,0 +1,79 @@
# Podinfo
Podinfo is a tiny web application made with Go
that showcases best practices of running canary deployments with Flagger and Istio.
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart with the release name `frontend`:
```console
helm upgrade -i frontend flagger/podinfo \
--namespace test \
--set nameOverride=frontend \
--set backend=http://backend.test:9898/echo \
--set canary.enabled=true \
--set canary.istioIngress.enabled=true \
--set canary.istioIngress.gateway=public-gateway.istio-system.svc.cluster.local \
--set canary.istioIngress.host=frontend.istio.example.com
```
To install the chart as `backend`:
```console
helm upgrade -i backend flagger/podinfo \
--namespace test \
--set nameOverride=backend \
--set canary.enabled=true
```
## Uninstalling the Chart
To uninstall/delete the `frontend` deployment:
```console
$ helm delete --purge frontend
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the podinfo chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | image repository | `quay.io/stefanprodan/podinfo`
`image.tag` | image tag | `<VERSION>`
`image.pullPolicy` | image pull policy | `IfNotPresent`
`hpa.enabled` | enables HPA | `true`
`hpa.cpu` | target CPU usage per pod | `80`
`hpa.memory` | target memory usage per pod | `512Mi`
`hpa.minReplicas` | minimum pod replicas | `2`
`hpa.maxReplicas` | maximum pod replicas | `4`
`resources.requests/cpu` | pod CPU request | `1m`
`resources.requests/memory` | pod memory request | `16Mi`
`backend` | backend URL | None
`faults.delay` | random HTTP response delays between 0 and 5 seconds | `false`
`faults.error` | 1 in 3 chance of a random HTTP response error | `false`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install flagger/podinfo --name frontend \
--set=image.tag=1.4.1,hpa.enabled=false
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install flagger/podinfo --name frontend -f values.yaml
```
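For the `-f values.yaml` variant, a minimal example file could look like this (keys taken from the table above; values are illustrative):

```yaml
image:
  tag: 3.1.0
hpa:
  enabled: false
backend: http://backend.test:9898/echo
faults:
  delay: true
  error: false
```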


@@ -0,0 +1 @@
podinfo {{ .Release.Name }} deployed!


@@ -0,0 +1,43 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "podinfo.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "podinfo.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "podinfo.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name suffix.
*/}}
{{- define "podinfo.suffix" -}}
{{- if .Values.canary.enabled -}}
{{- "-primary" -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}
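To see what these helpers resolve to, the chart can be rendered locally. A sketch using the Helm 2-style flags the README commands above rely on (chart path assumed):

```console
# With release name "frontend" and nameOverride=frontend,
# "podinfo.fullname" collapses to just "frontend" because the
# release name already contains the overridden chart name.
helm template --name frontend --set nameOverride=frontend charts/podinfo
```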


@@ -0,0 +1,66 @@
{{- if .Values.canary.enabled }}
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ template "podinfo.fullname" . }}
progressDeadlineSeconds: 60
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: {{ template "podinfo.fullname" . }}
service:
port: {{ .Values.service.port }}
{{- if .Values.canary.istioIngress.enabled }}
gateways:
- {{ .Values.canary.istioIngress.gateway }}
hosts:
- {{ .Values.canary.istioIngress.host }}
{{- end }}
trafficPolicy:
tls:
mode: {{ .Values.canary.istioTLS }}
canaryAnalysis:
interval: {{ .Values.canary.analysis.interval }}
threshold: {{ .Values.canary.analysis.threshold }}
maxWeight: {{ .Values.canary.analysis.maxWeight }}
stepWeight: {{ .Values.canary.analysis.stepWeight }}
metrics:
- name: request-success-rate
threshold: {{ .Values.canary.thresholds.successRate }}
interval: 1m
- name: request-duration
threshold: {{ .Values.canary.thresholds.latency }}
interval: 1m
webhooks:
{{- if .Values.canary.helmtest.enabled }}
- name: "helm test"
type: pre-rollout
url: {{ .Values.canary.helmtest.url }}
timeout: 3m
metadata:
type: "helm"
cmd: "test {{ .Release.Name }} --cleanup"
{{- end }}
{{- if .Values.canary.loadtest.enabled }}
- name: load-test-get
url: {{ .Values.canary.loadtest.url }}
timeout: 5s
metadata:
cmd: "hey -z 1m -q 5 -c 2 http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}"
- name: load-test-post
url: {{ .Values.canary.loadtest.url }}
timeout: 5s
metadata:
cmd: "hey -z 1m -q 5 -c 2 -m POST -d '{\"test\": true}' http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}/echo"
{{- end }}
{{- end }}
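The chart's values.yaml is not shown in this diff, but from the references above the `canary` block must have roughly this shape (illustrative values only, not the chart's actual defaults):

```yaml
canary:
  enabled: true
  istioTLS: DISABLE
  istioIngress:
    enabled: true
    gateway: public-gateway.istio-system.svc.cluster.local
    host: frontend.istio.example.com
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 5
  thresholds:
    successRate: 99
    latency: 500
  helmtest:
    enabled: false
    url: ""
  loadtest:
    enabled: true
    url: http://flagger-loadtester.test/
```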


@@ -0,0 +1,15 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
config.yaml: |-
# http settings
http-client-timeout: 1m
http-server-timeout: {{ .Values.httpServer.timeout }}
http-server-shutdown-timeout: 5s
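Only the server timeout is templated here; an illustrative override would be:

```yaml
httpServer:
  timeout: 30s
```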


@@ -0,0 +1,99 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ template "podinfo.fullname" . }}
template:
metadata:
labels:
app: {{ template "podinfo.fullname" . }}
annotations:
prometheus.io/scrape: 'true'
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfo
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- ./podinfo
- --port={{ .Values.service.port }}
- --level={{ .Values.logLevel }}
- --random-delay={{ .Values.faults.delay }}
- --random-error={{ .Values.faults.error }}
- --config-path=/podinfo/config
{{- range .Values.backends }}
- --backend-url={{ . }}
{{- end }}
env:
{{- if .Values.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.message }}
{{- end }}
{{- if .Values.backend }}
- name: PODINFO_BACKEND_URL
value: {{ .Values.backend }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.port }}/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.port }}/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
volumeMounts:
- name: data
mountPath: /data
- name: config
mountPath: /podinfo/config
readOnly: true
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: {{ template "podinfo.fullname" . }}
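This template reads several optional values; a sketch of the knobs involved (key names taken from the template, values illustrative):

```yaml
logLevel: info
message: "greetings from podinfo"
backend: http://backend.test:9898/echo
backends:
  - http://backend-a.test:9898/echo
faults:
  delay: false
  error: false
podAnnotations:
  prometheus.io/port: "9898"
```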


@@ -0,0 +1,37 @@
{{- if .Values.hpa.enabled -}}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
scaleTargetRef:
    apiVersion: apps/v1
kind: Deployment
name: {{ template "podinfo.fullname" . }}
minReplicas: {{ .Values.hpa.minReplicas }}
maxReplicas: {{ .Values.hpa.maxReplicas }}
metrics:
{{- if .Values.hpa.cpu }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.hpa.cpu }}
{{- end }}
{{- if .Values.hpa.memory }}
- type: Resource
resource:
name: memory
targetAverageValue: {{ .Values.hpa.memory }}
{{- end }}
{{- if .Values.hpa.requests }}
- type: Pod
pods:
metricName: http_requests
targetAverageValue: {{ .Values.hpa.requests }}
{{- end }}
{{- end }}
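Matching values for the three optional metric blocks might look like this (illustrative; the `http_requests` pods metric assumes a custom-metrics adapter such as the Prometheus adapter is installed):

```yaml
hpa:
  enabled: true
  minReplicas: 2
  maxReplicas: 4
  cpu: 80
  memory: 512Mi
  # requests: 10
```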


@@ -0,0 +1,20 @@
{{- if not .Values.canary.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ template "podinfo.fullname" . }}
{{- end }}

Some files were not shown because too many files have changed in this diff