Compare commits

...

761 Commits

Author SHA1 Message Date
Stefan Prodan
27b2616330 Merge pull request #748 from weaveworks/release-v1.4.2
Release v1.4.2
2020-12-09 14:52:43 +02:00
Stefan Prodan
8ed729cd54 Release v1.4.2
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-12-09 13:14:33 +02:00
Stefan Prodan
34f7bca33c Merge pull request #747 from weaveworks/update-prom-grafana
Update Prometheus and Grafana
2020-12-09 12:04:15 +02:00
Stefan Prodan
fee442ffe0 Update Prometheus and Grafana
- Prometheus 2.23.0
- Grafana 7.3.4

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-12-09 11:20:23 +02:00
Stefan Prodan
eb890ef174 Merge pull request #746 from weaveworks/prom-auth-docs
Add Prometheus basic-auth config to docs
2020-12-09 11:01:54 +02:00
Stefan Prodan
24c61df388 Add Prometheus basic-auth config to docs
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-12-09 10:19:25 +02:00
Stefan Prodan
bfb3331457 Merge pull request #745 from Nerja/delegate
Fix for VirtualService delegation when analysis is enabled
2020-12-09 10:08:20 +02:00
Marcus Rodan
7fc6f8a04d Changed to use the old e2e test scenario 2020-12-08 18:08:44 +01:00
Marcus Rodan
3c37020260 Changed test file permissions 2020-12-08 16:54:00 +01:00
Marcus Rodan
d05b684dbe Remove log line 2020-12-08 16:14:15 +01:00
Marcus Rodan
da978254b1 Fix issue 2020-12-08 16:12:12 +01:00
Stefan Prodan
0cfeceb3c9 Merge pull request #744 from weaveworks/release-v1.4.1
Release v1.4.1
2020-12-08 15:09:17 +02:00
Stefan Prodan
814aee8f4f Release v1.4.1
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-12-08 14:43:52 +02:00
Stefan Prodan
48bfb062d8 Merge pull request #743 from relu/exclude-labels-for-cm-secrets
Apply label prefix rules for cm and secrets
2020-12-08 13:37:01 +02:00
Aurel Canciu
08be31f022 Apply label prefix rules for cm and secrets
Copying of Configmaps and Secrets managed through Flagger should now
follow the same label prefix filtering rules as for the workloads.

Extends: #709

Signed-off-by: Aurel Canciu <aurelcanciu@gmail.com>
2020-12-08 12:55:45 +02:00
Stefan Prodan
39380d4ce8 Merge pull request #741 from weaveworks/release-v1.4.0
Release v1.4.0
2020-12-07 11:59:49 +02:00
Stefan Prodan
1b9e575ba5 Release v1.4.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-12-07 11:39:08 +02:00
Stefan Prodan
128c883755 Update docs and examples to HPA v2beta2
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-12-07 11:24:01 +02:00
Stefan Prodan
a244e00057 Merge pull request #740 from tr-fteixeira/hpa-behavior
Add support to HPA behaviors on canaries
2020-12-07 11:13:52 +02:00
Fernando Teixeira
afc063ae9a update tests to use autoscaling/v2beta2 2020-12-06 15:23:08 -05:00
Stefan Prodan
0827622985 Merge pull request #736 from nmlc/traefik
Traefik support
2020-12-06 10:23:16 +02:00
Fernando Teixeira
83dae63989 add support to hpa behaviors on canaries 2020-12-06 00:51:20 -05:00
nmlc
578361a2b0 [traefik] Fix documentation 2020-12-02 05:22:50 +05:00
nmlc
553e1b38bc [traefik] Add documentation 2020-12-01 05:17:33 +05:00
nmlc
635bc83259 [traefik] Add CircleCI tests 2020-11-26 06:00:15 +05:00
nmlc
746507dcc9 [traefik] Remove TraefikService metadata from canary spec 2020-11-26 05:52:42 +05:00
nmlc
adeb585de1 [traefik] add e2e test 2020-11-25 07:55:05 +05:00
nmlc
9c4edc602a [traefik] Update chart: crd & rbac 2020-11-25 07:54:28 +05:00
nmlc
642d3678ec [traefik] Implement observer interface 2020-11-25 07:54:15 +05:00
nmlc
2c1d998c43 [traefik] Implement router interface 2020-11-25 07:54:00 +05:00
nmlc
a3b9ed126d [traefik] Api changes & codegen 2020-11-25 07:50:54 +05:00
Stefan Prodan
2f027de91f Merge pull request #735 from mattchrist/update_faq
fix typo in faq
2020-11-23 17:10:18 +02:00
Matt Christ
b8c9fcfb91 fix typo 2020-11-23 08:16:05 -06:00
Stefan Prodan
1b81ea5a10 Merge pull request #734 from weaveworks/releases-v1.3.0
Release v1.3.0
2020-11-23 14:52:20 +02:00
Stefan Prodan
82bf73e8da Release v1.3.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-11-23 13:59:43 +02:00
Stefan Prodan
58de5ab198 Merge pull request #733 from weaveworks/deps-update
Update Istio to v1.8.0
2020-11-23 13:47:16 +02:00
Stefan Prodan
6a0ab874b8 Update Istio docs for v1.8.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-11-23 12:58:06 +02:00
Stefan Prodan
8301a2c1ba Update Istio e2e tests to v1.8.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-11-23 12:40:07 +02:00
Stefan Prodan
9b5b1a1421 Merge pull request #731 from mattchrist/update_faq
Update faq with correct prometheus queries for Contour & Gloo (fixes #730)
2020-11-23 11:11:24 +02:00
Stefan Prodan
bc5150903c Merge pull request #729 from jddcarreira/supportAppMeshBackendARN
Support AWS App Mesh backends ARN
2020-11-23 11:10:45 +02:00
Matt Christ
0c017f916b Update faq with correct prometheus queries for Contour & Gloo 2020-11-20 09:30:29 -06:00
João Carreira
df6fb2251d Merge branch 'master' of github.com:jddcarreira/flagger into supportAppMeshBackendARN 2020-11-20 12:41:24 +00:00
Stefan Prodan
4c3bab7ed7 Merge pull request #726 from robq99/feat/custom-weights-in-progression
feat: custom weights in progression
2020-11-20 13:54:41 +02:00
João Carreira
74efb784a2 Update App Mesh guide with ARN usage in backends 2020-11-20 11:37:13 +00:00
João Carreira
5a856c98aa Use strings.HasPrefix instead of manual count of prefix 2020-11-20 10:43:28 +00:00
João Carreira
a9c96fa888 update the usage of App Mesh types 2020-11-20 10:34:10 +00:00
João Carreira
7ab9061899 Update AWS App Mesh types 2020-11-20 10:33:25 +00:00
João Carreira
e149125eaa validate if it's an ARN 2020-11-19 16:19:16 +00:00
robq99
c53cbac22c fix: tests added, edge cases protection added 2020-11-18 12:20:42 +01:00
robq99
90bccf748b fix: rollout weights moved to canary doc 2020-11-18 10:09:04 +01:00
Robert Kwolek
1ea2e22734 fix: full weight => total weight 2020-11-17 16:30:45 +01:00
Robert Kwolek
2a0473fc9b fix: fullWeight removed, fullWeight => totalWeight 2020-11-17 09:00:21 +01:00
Robert Kwolek
67dca9c7ad Merge remote-tracking branch 'upstream/master' 2020-11-12 20:47:37 +01:00
Stefan Prodan
9667664853 Merge pull request #725 from sfrique/add-qps-and-burts-config-2
Add QPS and Burst configs for kubernetes client
2020-11-12 17:51:13 +02:00
Henrique Fernandes
4db9701c62 Add QPS and Burst configs for kubernetes client
Implemented as requested in PR723
supersedes: https://github.com/weaveworks/flagger/pull/723
fixes: https://github.com/weaveworks/flagger/issues/638
2020-11-11 17:48:27 -03:00
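As an aside on the commit above: client-go exposes QPS and Burst directly on rest.Config, so raising the client-side rate limits amounts to setting two fields before building the clientset. A minimal sketch follows; the flag names are illustrative, not necessarily the flags Flagger added.

```go
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to the kubeconfig file")
	qps := flag.Float64("kube-api-qps", 100, "client-go QPS limit (hypothetical flag name)")
	burst := flag.Int("kube-api-burst", 250, "client-go burst limit (hypothetical flag name)")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatalf("error building kubeconfig: %v", err)
	}

	// Raise the client-side rate limits before constructing the clientset.
	cfg.QPS = float32(*qps)
	cfg.Burst = *burst

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("error building kubernetes clientset: %v", err)
	}
	_ = client
}
```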
Stefan Prodan
4a805be5cd Merge pull request #721 from kingdonb/patch-3
Fixup some typos
2020-11-04 16:49:50 +02:00
Kingdon Barrett
3abeea43d0 Fix Typo in skipper-progressive-delivery.md
"exmaple" -> example
2020-11-03 18:13:48 -05:00
Kingdon Barrett
f51629d6b5 Fix Typo in nginx-progressive-delivery.md
"exmaple" -> example
2020-11-03 18:11:13 -05:00
Kazuki Nitta
a624a2977e Add support for Istio VirtualService delegation (#715)
Add support for Istio VirtualService delegation
2020-10-28 11:38:54 +02:00
Stefan Prodan
5ae5530c35 Merge pull request #718 from seankhliao/patch-1
fix release date
2020-10-28 10:02:11 +02:00
Sean Liao
1c58301fd7 fix release date 2020-10-27 19:47:07 +01:00
Stefan Prodan
690da0005d Merge pull request #714 from weaveworks/gitops-toolkit-roadmap
Add GitOps Toolkit integration to roadmap
2020-10-22 15:33:39 +03:00
Stefan Prodan
4d9fbc5da6 Merge pull request #709 from worldtiki/exclude-labels
Copy labels from canary to primary workloads based on prefix rules
2020-10-21 18:12:51 +03:00
Daniel Albuquerque
fbece964e0 Copy annotations to deployment and daemonset 2020-10-21 14:20:09 +01:00
Stefan Prodan
d3e855ac86 Add GitOps Toolkit integration to roadmap
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-10-21 15:24:48 +03:00
Daniel Albuquerque
bd536b689f Fix filtering of labels 2020-10-14 15:20:15 +01:00
Daniel Albuquerque
5ca5647fab Remove refs to jenkins 2020-10-13 22:01:49 +01:00
Daniel Albuquerque
bef02d8e1f Rename property from exclude to include 2020-10-13 22:00:31 +01:00
Daniel Albuquerque
8b87cf1757 Missing commit 2020-10-13 21:59:26 +01:00
Daniel Albuquerque
6ec377181a Change from exclude labels to include labels 2020-10-13 21:58:47 +01:00
Daniel Albuquerque
23e59168af Exclude controller labels by prefix 2020-10-11 14:10:16 +01:00
Stefan Prodan
2f58e51242 Merge pull request #704 from Brick7Face/spell-fix
fix spelling of "template" in scheduler_metrics.go
2020-10-01 18:07:53 +03:00
Nate Tranel
79f0381c52 fix spelling of template 2020-10-01 08:06:39 -06:00
Stefan Prodan
14adedba6a Merge pull request #702 from weaveworks/release-v1.2.0
Release v1.2.0
2020-09-29 09:43:46 +03:00
stefanprodan
f2608e627c Release v1.2.0 2020-09-29 09:13:12 +03:00
Stefan Prodan
17237fbb3e Merge pull request #695 from worldtiki/skip_analysis
Do not promote when not ready on skip analysis
2020-09-29 08:48:43 +03:00
Daniel Albuquerque
065c8640e7 Remove metadata tests (unrelated to skip analysis) 2020-09-19 17:39:54 +01:00
Daniel Albuquerque
1a90392400 Add set -o errexit 2020-09-19 15:15:39 +01:00
Daniel Albuquerque
3b6302640f Remove custom metrics (not needed for tests) 2020-09-18 19:51:03 +01:00
Daniel Albuquerque
26d53dcd44 diff test structure for istio 2020-09-18 19:05:45 +01:00
Daniel Albuquerque
0eee5b7402 Revert changes in skip analysis condition 2020-09-18 18:43:27 +01:00
Daniel Albuquerque
4b098cc7a2 Better assertion for new tests 2020-09-18 18:17:50 +01:00
Daniel Albuquerque
8119acb40a Remove comment :) 2020-09-18 18:00:38 +01:00
Daniel Albuquerque
013949a9f4 Add tests for when canary analysis is skipped 2020-09-18 17:59:16 +01:00
Stefan Prodan
6d65a2c897 Merge pull request #685 from splkforrest/add-label-value
Derive the label selector value from the target matchLabels
2020-09-17 13:19:49 +03:00
Stefan Prodan
fba16aa1f5 Merge pull request #691 from fpetkovski/newrelic-provider
Add New Relic as a metrics provider
2020-09-17 13:15:00 +03:00
Daniel Albuquerque
2907526452 Do not promote when not ready on skip analysis 2020-09-14 19:46:35 +01:00
Stefan Prodan
04a8759159 Merge pull request #692 from erkannt/patch-1
Add eLife to orgs using flagger
2020-09-10 14:56:54 +03:00
Daniel Haarhoff
d62e7f678f Add eLife to orgs using flagger 2020-09-10 12:22:05 +01:00
Filip Petkovski
8b3296c065 Apply suggestions from code review
Co-authored-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-09-10 09:19:36 +02:00
Filip Petkovski
563b1cd88d Add New Relic provider to the documentation 2020-09-10 09:11:33 +02:00
Filip Petkovski
c81e19c48a Add newrelic to the provider type enum 2020-09-09 18:12:18 +02:00
Filip Petkovski
68e4e1cc68 Apply suggestions from code review
Co-authored-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-09-09 13:51:27 +02:00
Filip Petkovski
2c249e2a92 Add New Relic as a metrics provider 2020-09-09 12:10:53 +02:00
Forrest Thomas
6c35f7611b address PR review comments and remove unnecessary configuration from Canary CR in e2e tests 2020-09-04 09:35:11 -07:00
Forrest Thomas
7793f0b29d add e2e nginx tests for inconsistent naming between service name and selector 2020-09-02 12:46:02 -07:00
Forrest Thomas
930eb8919d add e2e linkerd tests for inconsistent naming between service name and selector 2020-09-02 12:46:02 -07:00
Forrest Thomas
7ade97790e update e2e istio test to query the canary service instead of the apex service 2020-09-02 12:46:02 -07:00
Forrest Thomas
29c3056940 add e2e gloo tests for inconsistent naming between service name and selector 2020-09-02 12:46:02 -07:00
Forrest Thomas
2abfec05c9 add e2e contour tests for inconsistent naming between service name and selector 2020-09-02 12:46:02 -07:00
Forrest Thomas
621150cce6 add e2e istio tests for inconsistent naming between service name and selector 2020-09-02 12:46:02 -07:00
Forrest Thomas
ef57dcf75d add a small test for verifying the label selector is named as expected for daemonsets 2020-09-02 12:46:02 -07:00
Forrest Thomas
1bd7ce4eed add a small test for verifying the label selector is named as expected for deployments 2020-09-02 12:46:02 -07:00
Forrest Thomas
364fd0db65 setup daemonset tests to allow configurable name, label and selector 2020-09-02 12:46:02 -07:00
Forrest Thomas
b378b3eb5d setup deployment tests to allow configurable name, label and selector 2020-09-02 12:46:02 -07:00
Forrest Thomas
0db82b64f7 correct formatting 2020-09-02 12:46:02 -07:00
Forrest Thomas
c9dc5c5936 fix incorrect primary label value during promotion 2020-09-02 12:46:02 -07:00
Forrest Thomas
6f372d787d fix the incorrect primary label value 2020-09-02 12:46:02 -07:00
Forrest Thomas
f70f43bb3d use the existing labelSelector value instead of using the service name as the value 2020-09-02 12:46:02 -07:00
Stefan Prodan
c6f3a87bb3 Merge pull request #684 from xichengliudui/master
add istio 1.7 install command
2020-09-02 12:01:05 +03:00
xichengliudui
8e7aa29ef1 add istio 1.7 install command 2020-09-02 01:30:53 -07:00
Stefan Prodan
fb66cd3d94 Merge pull request #681 from o11n/preservePredicates
Skipper: preserve Predicates
2020-08-29 11:34:18 +03:00
Samuel Lang
e7da8c3f35 Skipper: preserve Predicates
The current implementation overwrote any existing Predicates.

We face situations where we need to add further Predicates, which must be kept in order to have a proper route setup.
2020-08-26 12:00:36 +02:00
Robert Kwolek
a6a38c6a7a fix: go fixes 2020-08-25 12:22:57 +02:00
Robert Kwolek
0ccf97bec1 fix: max weight for steps fixed 2020-08-25 10:34:59 +02:00
Robert Kwolek
ab80bcde44 doc: tutorial link added 2020-08-21 09:01:35 +02:00
Robert Kwolek
a58c0ac2c9 doc: rollout weights moved out of Linkerd 2020-08-21 08:59:44 +02:00
Robert Kwolek
c55fd94b67 doc: weighted rollout doc added 2020-08-20 21:38:11 +02:00
Robert Kwolek
16a6df59ab Merge remote-tracking branch 'upstream/master' 2020-08-20 21:03:43 +02:00
Robert Kwolek
906103daa5 feat: weighted deployments 2020-08-20 20:56:10 +02:00
Takeshi Yoneda
ce69a180d8 Merge pull request #679 from weaveworks/feature/optimized-config-disabled
pkg/canary: add unit test of configIsDisabled and its optimization
2020-08-20 21:33:05 +09:00
mathetake
87c090ad8c pkg/canary: add unit test of configIsDisabled and its optimization 2020-08-20 21:15:27 +09:00
Stefan Prodan
b6d6f32c7f Merge pull request #674 from weaveworks/prep-release-1.1.0
Release v1.1.0
2020-08-19 18:37:38 +03:00
stefanprodan
b6c98799d1 Release v1.1.0 2020-08-19 12:07:39 +03:00
stefanprodan
06dab2e137 Docs tidy up
Split feature comparison into two tables: service mesh and ingress.
2020-08-19 11:29:08 +03:00
Stefan Prodan
6494893812 Merge pull request #671 from stealthybox/per-config-tracker-disable
Support per-config configTracker disable via ConfigMap/Secret annotation
2020-08-19 10:48:09 +03:00
Stefan Prodan
11b82dbcc7 Merge pull request #670 from o11n/feature-Skipper
Skipper Ingress Controller support
2020-08-19 10:47:53 +03:00
David Hohengaßner
e09f44df77 📝 add documentation about Skipper Ingress (#15)
Skipper Ingress Controller support is added with
https://github.com/weaveworks/flagger/pull/670.

This commit adds the documentation and links to mention that
Skipper is now an available option.

Currently only Canary deployments are supported.
2020-08-18 17:02:53 +02:00
Samuel Lang
ad8233cf46 👷 Add high-level E2E test steps for Skipper
Add e2e-skipper* files for test setup

It does the following things:
* install Skipper ingress with Kustomize
* load Flagger image onto the local cluster
* install Flagger and Prometheus in the flagger-system namespace
2020-08-18 17:02:45 +02:00
leigh capili
dad70a6876 Support per-config configTracker disable via ConfigMap/Secret annotation
This allows a user to annotate a specific ConfigMap or Secret to be disabled/ignored by the
config tracking logic that tracks config changes and makes configuration copies for the primary Deployment.

Closes #435
2020-08-17 16:24:56 -06:00
Samuel Lang
39e55daa04 📈 Skipper Metrics Observer
To be able to distinguish Skipper routes we need to combine the Canary data to generate the Skipper metric label.

The "request-success-rate" and "request-duration" queries are implemented and tested to provide those observations from Skipper metrics.

* Takes into account how Skipper renders the paths and reformats the queries accordingly.
2020-08-17 08:23:38 +02:00
Samuel Lang
a9ad6c92a6 adding CircleCI tests 2020-08-17 08:23:38 +02:00
Samuel Lang
ca14a08f9c Skipper Router Implementation
Router implementation for zalan.do/Skipper Ingress -
An HTTP router and reverse proxy for service composition, including use cases like Kubernetes Ingress

https://github.com/zalando/skipper/

* The concept is to define routes with specific weights via the Skipper-specific annotation predicate "zalando.org/backend-weights".
* A new "canary ingress" is created with a higher "weight", thus receiving all traffic, which is then distributed progressively.
* After the canary process is finished, this ingress is disabled via the "False()" annotation predicate to route traffic back to the apex Ingress.
There are certain Skipper principles which are taken into account:

```
Skipper Principles:
* if only one backend has a weight, only one backend will get 100% traffic
* if two of three or more backends have a weight, only those two should get traffic.
* if two backends don't have any weight, it's undefined and right now they get equal amount of traffic.
* weights can be int or float, but always treated as a ratio.

Implementation:
* apex Ingress is immutable
* new canary Ingress contains two paths for primary and canary service
* canary Ingress manages weights on primary & canary service, hence no traffic to apex service
```
2020-08-17 08:23:38 +02:00
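To make the weight mechanism described in the commit above concrete, here is a minimal Go sketch (not Flagger's actual router code) that builds the "zalando.org/backend-weights" annotation for a canary ingress from a canary weight; the service names are hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// canaryIngressAnnotations builds the Skipper traffic-splitting annotation
// mentioned in the commit above: a JSON map of backend (service) names to
// weights under "zalando.org/backend-weights". Illustrative sketch only.
func canaryIngressAnnotations(primarySvc, canarySvc string, canaryWeight int) (map[string]string, error) {
	weights := map[string]int{
		primarySvc: 100 - canaryWeight,
		canarySvc:  canaryWeight,
	}
	b, err := json.Marshal(weights)
	if err != nil {
		return nil, err
	}
	annotations := map[string]string{
		"zalando.org/backend-weights": string(b),
	}
	// Once the canary run is finished, the ingress can be taken out of routing
	// with a predicate that never matches, e.g.:
	// annotations["zalando.org/skipper-predicate"] = "False()"
	return annotations, nil
}

func main() {
	a, _ := canaryIngressAnnotations("podinfo-primary", "podinfo-canary", 10)
	fmt.Println(a["zalando.org/backend-weights"]) // {"podinfo-canary":10,"podinfo-primary":90}
}
```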
Stefan Prodan
be16bd8768 Merge pull request #668 from timricese/master
Add securityContext parameter to loadtester chart
2020-08-17 08:33:45 +03:00
Stefan Prodan
47d00857bc Merge pull request #672 from weaveworks/kube-1.18.8
Update Kubernetes packages to v1.18.8
2020-08-15 10:17:19 +03:00
stefanprodan
7c3cb5c5a3 Install kustomize in CI 2020-08-15 09:25:27 +03:00
stefanprodan
f12fe4254a Add license to Flagger Helm chart 2020-08-15 09:16:47 +03:00
stefanprodan
bb627779d9 Update Kubernetes packages to v1.18.8 2020-08-15 09:16:11 +03:00
Tim Rice
eba066e044 Add securityContext parameter to loadtester chart
Default to `enabled: false` to avoid changing default behavior.

Allows using the chart on clusters with runAsNonRoot security policy
2020-08-13 08:11:32 +02:00
Stefan Prodan
34f0273c34 Merge pull request #667 from snahelou/master
Fix(grafana): metrics change since 1.16
2020-08-12 17:34:01 +03:00
Sebastien Nahelou
394c9545ce Fix(grafana): metrics change since 1.16 2020-08-11 11:13:58 +02:00
Stefan Prodan
a6f0481b27 Merge pull request #661 from weaveworks/e2e-test-suite-updates
Update Istio, Linkerd and Contour e2e to latest version
2020-08-06 10:22:23 +03:00
Stefan Prodan
4d2664b57e Merge pull request #663 from stealthybox/mapfix-658
Fix O(log n) bug over network in GetTargetConfigs() when using `--enable-config-tracking`
2020-08-06 08:56:16 +03:00
leigh capili
1242825c42 Fix O(log n) bug over network in GetTargetConfigs() when using --enable-config-tracking
Read for more details:
https://github.com/weaveworks/flagger/issues/658#issuecomment-669389203
2020-08-05 13:16:50 -06:00
stefanprodan
fd34614c84 Update Istio, Linkerd and Contour e2e to latest version 2020-08-05 11:47:46 +03:00
Takeshi Yoneda
68312570b6 Merge pull request #654 from weaveworks/docs-fix-typo-prometheus
fix typo in docs: promethues -> prometheus
2020-07-27 16:18:07 +09:00
Stefan Prodan
fa9de7d8f9 Merge pull request #652 from imrenagi/feature/pod-priority
Add priorityClassName to flagger and loadtester chart
2020-07-27 09:16:32 +03:00
mathetake
a04bb3d3c0 fix typo in docs: promethues -> prometheus 2020-07-27 15:14:55 +09:00
Imre Nagi
23e805965e Update readme for podPriorityClassName
Signed-off-by: Imre Nagi <imre.nagi2812@gmail.com>
2020-07-23 16:37:37 +07:00
Imre Nagi
9aa775f409 Add priorityClassName to loadtester chart
Signed-off-by: Imre Nagi <imre.nagi2812@gmail.com>
2020-07-23 07:33:29 +07:00
Imre Nagi
9655ed652f Add pod priorityClassName to flagger deployment template
Signed-off-by: Imre Nagi <imre.nagi2812@gmail.com>
2020-07-23 07:27:27 +07:00
Stefan Prodan
744b83253a Merge pull request #651 from weaveworks/release-v1.0.1
Release v1.0.1
2020-07-18 09:44:37 +03:00
stefanprodan
74db314288 Release v1.0.1 2020-07-18 09:21:37 +03:00
Stefan Prodan
f8e68a2dad Merge pull request #649 from weaveworks/docs-appmesh-v1beta2
Update App Mesh docs to v1beta2 API
2020-07-18 08:44:18 +03:00
stefanprodan
1c35524b13 Update App Mesh docs to v1beta2 API 2020-07-16 10:14:56 +03:00
Hasindu Malala Achichige
7352237fa9 chart: add Istio virtual service into loadtester chart (#647)
Add Istio virtual service to loadtester chart
2020-07-09 13:31:17 +03:00
Stefan Prodan
997e7be8af Merge pull request #643 from mvollman/helm_threadiness
Add threadiness to helm chart
2020-07-09 13:01:25 +03:00
Stefan Prodan
0e2858d311 Merge pull request #646 from weaveworks/fix-kustomize
Fix installers for kustomize >= 3.6.0
2020-07-09 13:00:07 +03:00
Stefan Prodan
d7790ad5b1 Fix kustomize URL
Co-authored-by: Takeshi Yoneda <cz.rk.t0415y.g@gmail.com>
2020-07-09 10:24:07 +03:00
stefanprodan
96234c1d6c Fix installers for kustomize >= 3.6.0 2020-07-09 09:47:29 +03:00
Michael Vollman
0f1a42a5cc Add threadiness to helm chart 2020-07-06 11:17:20 -04:00
Takeshi Yoneda
8a5a0538fd Merge pull request #641 from jqlu/patch-1
fix typo in status.go
2020-07-03 18:44:09 +09:00
jqlu
7fd8251a06 fix typo in status.go 2020-07-03 10:47:40 +08:00
Stefan Prodan
72c7a103f9 Merge pull request #632 from rafaelgaspar/fix-multiple-paths-per-rule
Fix multiple paths per rule on canary ingress
2020-06-22 17:45:09 +03:00
Rafael Gaspar
b890b79234 Fix multiple paths per rule on canary ingress 2020-06-22 15:59:27 +02:00
Stefan Prodan
1a65937278 Merge pull request #625 from sergioteix/patch-1
add jumia as an organization using flagger
2020-06-17 18:14:58 +03:00
Sergio Teixeira
a490cde692 Update README.md 2020-06-17 15:20:49 +01:00
Stefan Prodan
682a1bf5ae Merge pull request #624 from weaveworks/release-v1.0.0
Release v1.0.0
2020-06-17 11:44:59 +03:00
stefanprodan
de3aeab702 Release v1.0.0 2020-06-17 11:11:06 +03:00
Stefan Prodan
fa25872ceb Merge pull request #623 from weaveworks/istio-latency
Change builtin metrics to work with Istio >= 1.5
2020-06-17 10:32:12 +03:00
stefanprodan
e8ca5f270b Change builtin metrics to work with Istio >= 1.5 2020-06-17 10:13:29 +03:00
Stefan Prodan
6f65f6096d Merge pull request #607 from justinabrahms/master
Support for specifying additional arguments to concord tasks
2020-06-16 09:26:37 +03:00
Stefan Prodan
f2eca79a1f Merge pull request #621 from weaveworks/charts-update
Remove Flagger's Gateway and update Prometheus
2020-06-15 14:17:30 +03:00
stefanprodan
8e9c326561 Remove App Mesh Gateway chart
Replaced by https://github.com/aws/eks-charts/pull/168
2020-06-15 13:53:36 +03:00
stefanprodan
9c89346a22 Update Prometheus to v2.19.0 2020-06-15 13:50:54 +03:00
Stefan Prodan
2827ecbc31 Merge pull request #617 from weaveworks/tester-updates
Release load tester v0.17.0
2020-06-11 14:07:27 +03:00
stefanprodan
8a02195ac2 Release load tester v0.17.0 2020-06-11 13:48:22 +03:00
stefanprodan
93e76e5050 Add AppMesh v1beta2 virtual node to load tester chart 2020-06-11 13:34:46 +03:00
stefanprodan
b2fd6f994c Update load tester Helm binaries 2020-06-11 13:34:03 +03:00
Stefan Prodan
95dcc17bc2 Merge pull request #615 from weaveworks/e2e-updates
Update e2e components
2020-06-10 14:30:20 +03:00
stefanprodan
71725c4771 Update e2e components
- istio 1.6.1
- linkerd 2.8.0
- contour 1.5.0
- gloo 1.3.28
- nginx-ingress 0.32.0
2020-06-10 14:13:08 +03:00
Stefan Prodan
bebfac8b9f Merge pull request #611 from weaveworks/appmesh-timeout
Implement App Mesh v1beta2 timeout
2020-06-08 11:19:28 +03:00
Takeshi Yoneda
45d4d1ff55 Merge pull request #612 from kingdonb/patch-1
Update zero-downtime-deployments.md
2020-06-05 22:45:26 +09:00
Kingdon Barrett
bf27aed2e4 Update zero-downtime-deployments.md
Fix typo "exists" -> "exits"
2020-06-04 20:06:29 -04:00
stefanprodan
0715e1ca37 Add AppMesh timeout unit test 2020-06-04 13:48:52 +03:00
stefanprodan
37ec07d2ec Set timeout for virtual nodes and routers 2020-06-04 12:37:34 +03:00
stefanprodan
7a18bfaac5 Add timeout fields to AppMesh client 2020-06-04 12:15:44 +03:00
Stefan Prodan
c367e65672 Merge pull request #609 from sledigabel/master
Rephrasing Canary Progressing message
2020-06-02 17:10:23 +03:00
Sebastien Le Digabel
8c55bb222d Rephrasing Canary Progressing message
Fixes #606.

Also fixed the alert message to keep it consistent with the message,
along with the documentation.
2020-06-02 14:35:55 +01:00
Stefan Prodan
a74ae1f4a2 Merge pull request #608 from justinabrahms/go-version-update
go 1.14 is required due to the change in `go fmt`
2020-06-02 16:27:57 +03:00
Justin Abrahms
8376623839 go 1.14 is required due to the change in go fmt 2020-06-02 05:53:27 -07:00
Justin Abrahms
dcab2d518f Support for specifying additional arguments to concord tasks 2020-06-02 04:28:54 -07:00
Sergio Teixeira
9afd741dc1 Update Alpine to v3.12 (#605)
Update Alpine to v3.12

use default nobody user from alpine, and update to the stable version
2020-06-02 11:24:47 +03:00
Ciaran Moran
9ba78031e2 Revert "Use multi-stage build to slim image"
This reverts commit ce6ae8d511.
2020-06-01 16:00:42 +01:00
Ciaran Moran
ce6ae8d511 Use multi-stage build to slim image 2020-06-01 14:36:15 +01:00
Stefan Prodan
33076941b9 Merge pull request #604 from weaveworks/cors-allow-origins
Add allow origins field to CORS spec
2020-06-01 15:22:31 +03:00
stefanprodan
6db5b5c417 Add allow origins field to CORS spec 2020-06-01 14:58:08 +03:00
Stefan Prodan
d2fe182e2d Merge pull request #601 from weaveworks/istio-1.6.0-e2e
Update Istio e2e to v1.6.0
2020-06-01 14:26:56 +03:00
stefanprodan
8740f41a3a Update Istio e2e to v1.6.0 2020-06-01 14:06:59 +03:00
Stefan Prodan
b6b6633692 Merge pull request #598 from cosmin-mogos/update-helm-test-documentation
Add example RBAC for `helm test`
2020-05-31 00:55:19 +03:00
Cosmin Mogos
fe58b32d9b Add --debug to helm command 2020-05-30 18:07:40 +02:00
Cosmin Mogos
df50c32c09 Update docs/gitbook/usage/webhooks.md
Co-authored-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-05-30 13:58:25 +02:00
Cosmin Mogos
ada9288f88 Update docs/gitbook/usage/webhooks.md
Co-authored-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-05-30 13:57:57 +02:00
Cosmin Mogos
df103fb257 Add example RBAC for helm test 2020-05-27 18:45:43 +02:00
Stefan Prodan
3dd5dfa6aa Merge pull request #594 from weaveworks/istio-source-labels
istio: Add source labels to analysis matching rules
2020-05-27 12:02:31 +03:00
Takeshi Yoneda
44cee4210d Merge pull request #596 from weaveworks/fix/doc-promql
update README: custom metric instead of custom promql
2020-05-27 17:50:45 +09:00
mathetake
893a53234b update README: custom metric instead of custom promql 2020-05-27 17:31:40 +09:00
stefanprodan
4f299e5696 Add source labels to A/B testing docs 2020-05-18 14:41:44 +03:00
stefanprodan
3cf6400092 Add source labels to analysis matching rules 2020-05-18 13:16:03 +03:00
stefanprodan
476eb8c185 Update Istio e2e to v1.5.4 2020-05-18 13:15:12 +03:00
Stefan Prodan
f5a3b9df24 Merge pull request #593 from weaveworks/progressive-promotion
Implement progressive promotion
2020-05-18 12:37:55 +03:00
stefanprodan
be96a11479 Add promotion step weight to docs 2020-05-18 11:07:40 +03:00
stefanprodan
2e75dbb170 Add progressive promotion test 2020-05-18 10:44:23 +03:00
stefanprodan
eaa5b14be6 Implement progressive traffic shifting on promotion 2020-05-18 10:34:10 +03:00
stefanprodan
f3b444ab49 Add promotion step weight to Canary CRD 2020-05-18 10:34:10 +03:00
Takeshi Yoneda
0056b99309 Merge pull request #592 from weaveworks/check-metrics-server-availability
Check metrics server availability during canary initialization
2020-05-16 15:15:42 +09:00
mathetake
e0de9d0afa pkg/controller: add unit test for checkMetricProviderAvailability 2020-05-16 11:12:45 +09:00
mathetake
a17e8b4794 not return even if checkMetricProviderAvailability fails 2020-05-15 21:44:35 +09:00
mathetake
ad73643e4a pkg/metrics/providers: delete fake value 2020-05-15 19:36:11 +09:00
mathetake
5d84596bc0 pkg/controller: check metrics server's availability during initialization 2020-05-15 19:35:40 +09:00
Stefan Prodan
0b0c49bd2a Merge pull request #589 from weaveworks/release-1.0.0-rc.5
Release v1.0.0-rc.5
2020-05-14 15:33:34 +03:00
stefanprodan
99bc7040a3 Release v1.0.0-rc.5 2020-05-14 14:00:23 +03:00
Stefan Prodan
30073f2a8d Merge pull request #588 from weaveworks/ingress-class
Add ingress class support for Contour
2020-05-14 13:06:37 +03:00
Stefan Prodan
3e19ef0f01 Make Contour annotation const
Co-authored-by: Takeshi Yoneda <cz.rk.t0415y.g@gmail.com>
2020-05-14 12:48:12 +03:00
stefanprodan
68ccbc4817 Add ingress class e2e test 2020-05-14 12:29:54 +03:00
stefanprodan
fbaf8fedc7 Set ingress class in factory 2020-05-14 12:27:11 +03:00
stefanprodan
ff94e14d5a Update Contour e2e to v1.4 2020-05-14 12:21:56 +03:00
stefanprodan
5c7fd5d4db Add ingress class option to Helm chart 2020-05-14 12:17:03 +03:00
stefanprodan
48467eb8b3 Add ingress class support for Contour
Add `-ingress-class` command flag. When set, the specified class is used to annotate the generated HTTPProxy objects.
2020-05-14 12:17:03 +03:00
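A minimal sketch of the flag-to-annotation behavior described above, assuming a generic annotation key (the commit does not spell out the exact key Flagger writes on the generated HTTPProxy objects):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// When -ingress-class is set, its value is stamped onto the generated
	// objects as an annotation. The annotation key below is illustrative.
	ingressClass := flag.String("ingress-class", "", "ingress class to annotate generated HTTPProxy objects with")
	flag.Parse()

	annotations := map[string]string{}
	if *ingressClass != "" {
		annotations["kubernetes.io/ingress.class"] = *ingressClass
	}
	fmt.Println(annotations)
}
```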
Takeshi Yoneda
5bd6906c32 Merge pull request #587 from weaveworks/redundant-assertion
pkg/metrics/providers: remove redundant assertion in prometheus test
2020-05-14 18:07:50 +09:00
mathetake
772099f073 pkg/metrics/providers: remove redundant assertion in prometheus test 2020-05-14 17:48:14 +09:00
Stefan Prodan
a6b8d19629 Merge pull request #586 from weaveworks/retry-initializing-status
Retry canary initialization on conflict
2020-05-14 11:46:30 +03:00
stefanprodan
e7f2d22505 Fix svc update conflict 2020-05-14 11:08:32 +03:00
stefanprodan
6cfa432834 Retry canary initialization on conflict 2020-05-14 11:03:32 +03:00
Stefan Prodan
474a5a20be Merge pull request #584 from weaveworks/appmesh-v1beta2
Implement AppMesh v1beta2 router
2020-05-14 09:38:08 +03:00
Stefan Prodan
af02ed46a5 Merge pull request #585 from tr-srij/patch-1
Fix typo in loadtester chart readme
2020-05-14 09:36:13 +03:00
tr-srij
972596f443 Update README.md 2020-05-13 23:30:51 -04:00
stefanprodan
6498cccb85 Use target port for virtual routers
AppMesh does not support port mappings
2020-05-13 22:50:26 +03:00
stefanprodan
6c9847ae14 Use FQDN for virtual nodes DNS 2020-05-13 22:43:10 +03:00
stefanprodan
b0b0cedde1 Map target port for virtual node listeners 2020-05-13 17:26:36 +03:00
Stefan Prodan
d96672eec1 Merge pull request #581 from edtan/fix-dev-link
Fix broken link to Flagger Development Guide
2020-05-12 09:46:22 +03:00
Ed
fed6948dab Fix broken link to Flagger Development Guide 2020-05-11 23:35:56 -04:00
stefanprodan
86939d9dce Register AppMesh VirtualNodes before Deployment init 2020-05-08 13:16:19 +03:00
stefanprodan
854d7665f0 Add AppMesh v1beta2 to factories 2020-05-08 13:15:37 +03:00
stefanprodan
52c757250a Fix annotations diff 2020-05-08 13:14:50 +03:00
stefanprodan
fe1d85b0ce Add AppMesh v1beta2 router tests 2020-05-08 13:13:27 +03:00
stefanprodan
0aac94b782 Implement AppMesh v1beta2 router 2020-05-08 13:12:58 +03:00
Stefan Prodan
e55af2ff19 Merge pull request #579 from jlbutler/fix-link
broken link in tutorials section of gitbook
2020-05-07 11:37:27 +03:00
Jesse Butler
2e388fceee fix broken link in tutorials section of gitbook
Signed-off-by: Jesse Butler <butlerjl@amazon.com>
2020-05-06 19:55:14 -04:00
stefanprodan
2d1c4a9d84 Use API providers in observer factory 2020-05-06 01:14:41 +03:00
stefanprodan
004eb88962 Add Envoy metric templates to docs 2020-05-06 01:13:42 +03:00
stefanprodan
eba6478729 Add providers to API 2020-05-06 01:12:41 +03:00
stefanprodan
7686b4b01a Generate AppMesh v1beta2 client 2020-05-05 19:31:03 +03:00
stefanprodan
55c89770d7 Add AppMesh v1beta2 clientset and RBAC 2020-05-04 22:22:51 +03:00
Stefan Prodan
d6f3a2453b Merge pull request #576 from weaveworks/deps-update
Update packages and e2e to Kubernetes v1.18.2
2020-05-02 12:12:09 +03:00
stefanprodan
d320b558d0 e2e: Update Kind, Istio and Linkerd
- Kind v0.8.1 (Kubernetes 1.18.2)
- Istio v1.5.2
- Linkerd stable-2.7.1
2020-05-02 09:39:55 +03:00
stefanprodan
66203c0916 build: Update Kubernetes client-go to 1.18.2 2020-05-02 08:54:22 +03:00
Stefan Prodan
d97a8cbc01 Merge pull request #575 from heubeck/master
Add MediaMarktSaturn to list of users
2020-05-02 08:42:25 +03:00
Florian Heubeck
6bb47f2e5d Add MediaMarktSaturn to list of users
Signed-off-by: Florian Heubeck <heubeck@mediamarktsaturn.com>
2020-05-01 23:30:34 +02:00
Stefan Prodan
f89f0d6515 Merge pull request #571 from edtan/fix-deployment-alert-test
controller: fix deployment alerts unit test
2020-04-30 11:02:37 +03:00
Ed
f46eaa8d05 This makes the primary ready in the TestScheduler_DeploymentAlerts test
in order to send out an alert. Previously, it did not reach a state
where an alert would be sent.
2020-04-30 00:20:25 -04:00
Takeshi Yoneda
e3f18b3d7e Merge pull request #565 from GijsvanDulmen/fix-rocket-test
Fix rocket tests naming but keep structs
2020-04-27 20:42:59 +09:00
Gijs van Dulmen
2b7a95fee5 Fix slack naming as well in tests 2020-04-27 13:13:46 +02:00
Gijs van Dulmen
647eb81021 Fix naming but keep structs 2020-04-27 09:46:13 +02:00
Stefan Prodan
bda620aae9 Merge pull request #560 from tariq1890/fix_lint
fix issues reported by the linter
2020-04-18 11:12:38 +03:00
Tariq Ibrahim
d41ed43ef9 fix issues reported by the linter 2020-04-17 11:45:44 -07:00
Stefan Prodan
86d3b498b6 Merge pull request #561 from tariq1890/no_kutils
remove unnecessary dependency on k/utils
2020-04-17 09:28:10 +03:00
Tariq Ibrahim
e473d4b2fb remove unnecessary dependency on k/utils 2020-04-16 16:14:08 -07:00
Stefan Prodan
fcac3380d7 Merge pull request #559 from weaveworks/e2e-istio-1.5.1
ci: Update end-to-end test to Istio 1.5.1
2020-04-16 13:06:05 +03:00
stefanprodan
f5fd57f3df Use Kubernetes v1.16 in Istio e2e 2020-04-16 12:05:58 +03:00
stefanprodan
2c6259495b Update end-to-end tests to Istio 1.5.1 2020-04-16 11:50:46 +03:00
Takeshi Yoneda
0d1e41504c Merge pull request #557 from n0rad/master
Check prometheus is online with simple query
2020-04-15 22:27:42 +09:00
Arnaud Lemaire
4a4f8555df Check prometheus is online with simple query 2020-04-15 10:29:05 +02:00
Stefan Prodan
4890a71283 Merge pull request #538 from splunk/feature/user-specified-labels-annotations
Add user-specified labels/annotations to Canary for generated Services
2020-04-09 20:41:07 +03:00
stefanprodan
84dd0006ca Add service metadata update unit test 2020-04-04 17:16:49 +03:00
stefanprodan
8d37b7b20b Update service metadata if ownerRef kind is canary 2020-04-04 16:52:12 +03:00
Finn Herzfeld
3f961ae73f Handle annotations/labels update 2020-04-04 16:52:12 +03:00
Finn Herzfeld
4460cb7385 Fix CRD indentation 2020-04-04 16:48:35 +03:00
stefanprodan
37527854d2 Add e2e test for apex service custom metadata 2020-04-04 16:48:35 +03:00
stefanprodan
c609a90959 Add unit tests for service custom metadata 2020-04-04 16:48:35 +03:00
Finn Herzfeld
2657e135b8 Use pointers for metadata because it is optional
and metadata parameter is nil on finalize.

in response to PR feedback
2020-04-04 16:48:35 +03:00
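For context on the commit above, a small Go sketch of why optional metadata is held behind a pointer; the type and function names here are illustrative, not Flagger's.

```go
package main

import "fmt"

// Metadata is a pared-down stand-in for optional service metadata
// (labels/annotations). Holding it behind a pointer lets callers distinguish
// "not specified" (nil) from "specified but empty", which matters on finalize
// when no metadata is passed at all.
type Metadata struct {
	Labels      map[string]string
	Annotations map[string]string
}

// apexLabels merges user-supplied labels over a base set, tolerating nil.
func apexLabels(meta *Metadata) map[string]string {
	labels := map[string]string{"app": "podinfo"}
	if meta == nil { // finalize path: nothing to merge
		return labels
	}
	for k, v := range meta.Labels {
		labels[k] = v
	}
	return labels
}

func main() {
	fmt.Println(apexLabels(nil))                                               // map[app:podinfo]
	fmt.Println(apexLabels(&Metadata{Labels: map[string]string{"team": "x"}})) // map[app:podinfo team:x]
}
```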
Finn Herzfeld
b7441a7ce7 Fix call to reconcileService 2020-04-04 16:48:35 +03:00
Finn Herzfeld
0aee385145 log canary.spec.service for debugging purposes 2020-04-04 16:48:35 +03:00
Finn Herzfeld
5c48430ed2 Initialize the label and annotation maps if they are nil 2020-04-04 16:48:35 +03:00
Finn Herzfeld
9d907deece Initial support for custom labels and annotations 2020-04-04 16:48:35 +03:00
Stefan Prodan
b564a2fda2 Merge pull request #549 from sayboras/feature/kube-1-18
Update Kubernetes packages to 1.18.0
2020-04-04 16:44:13 +03:00
sayboras
c0515fc6ff Upgrade to kube 1.18 2020-04-04 23:46:38 +11:00
Takeshi Yoneda
adae6afc91 Merge pull request #548 from weaveworks/deployment-is-ready-test
pkg/canary: add unit test of isDeploymentReady
2020-04-04 15:09:07 +09:00
mathetake
bbdac24ed3 pkg/canary: add unit test of isDeploymentReady 2020-04-04 10:48:29 +09:00
Stefan Prodan
e0b3b7134b Merge pull request #547 from weaveworks/prep-1.0.0-rc.4
Release v1.0.0-rc.4
2020-04-03 12:43:49 +03:00
stefanprodan
af2ef409b4 Release v1.0.0-rc.4 2020-04-03 12:22:44 +03:00
Stefan Prodan
740b46e818 Merge pull request #543 from richardcase/oc
chart: allow security context to be disabled on OpenShift
2020-04-03 11:34:06 +03:00
Stefan Prodan
db4a15e21d Merge pull request #546 from weaveworks/nginx-header-regex
Implement NGINX ingress header regex matching
2020-04-03 09:40:12 +03:00
stefanprodan
e6901467f2 Add Prometheus Operator to docs index 2020-04-02 16:43:15 +03:00
stefanprodan
14e9c7f466 Add e2e tests for ingress A/B Testing 2020-04-02 15:07:37 +03:00
stefanprodan
b8e9f57e1e Add unit tests for ingress A/B Testing 2020-04-02 01:35:15 +03:00
stefanprodan
38ef4ef4d8 Implement NGINX ingress header regex match 2020-04-02 01:34:29 +03:00
Stefan Prodan
e99e8f22a2 Merge pull request #544 from weaveworks/avoid-status-retry
pkg/controller: avoid status conflicts on initialization
2020-04-01 21:57:21 +03:00
stefanprodan
cbcd6ab03b pkg/controller: avoid status conflicts on initialization 2020-04-01 20:47:31 +03:00
Richard Case
f199923a3c feat: started to add a flag to disable security context 2020-04-01 16:57:30 +01:00
Stefan Prodan
7c00e5bbd8 Merge pull request #541 from weaveworks/retry-fixes
pkg/canary: fix status retry
2020-04-01 14:12:22 +03:00
stefanprodan
f6baba271a pkg/canary: fix status retry 2020-04-01 13:39:32 +03:00
Stefan Prodan
3b04f12b65 Merge pull request #540 from weaveworks/events-fix
logs: reduce log verbosity
2020-03-31 16:39:30 +03:00
stefanprodan
686de4bf06 logs: trim down log prefix 2020-03-31 15:31:07 +03:00
Stefan Prodan
9150816ec6 Merge pull request #539 from weaveworks/loadtest-0.16.0
loadtester: release v0.16.0
2020-03-31 13:08:41 +03:00
stefanprodan
3c11749f80 loadtester: release v0.16.0
- update helm to 2.16.5
- update helmv3 to 3.1.2
- remove http server timeouts
2020-03-31 12:48:16 +03:00
Stefan Prodan
b14bcc4a43 Merge pull request #535 from jacobsin/300-loadtester-return-cmd-output-optionally
loadtester: add return cmd output option
2020-03-30 20:56:02 +03:00
Jacob Sin
29c31e56bd loadtester: return cmd output optionally 2020-03-31 01:22:47 +08:00
Stefan Prodan
26c18a3385 Merge pull request #537 from weaveworks/openshift-rbac
rbac: add finalizers to RBAC rules
2020-03-30 12:48:11 +03:00
stefanprodan
6bbf99dbc5 rbac: add finalizers to RBAC rules 2020-03-30 12:24:27 +03:00
Stefan Prodan
4be2a0c4e1 Merge pull request #536 from weaveworks/loadtester-timeout
loadtester: set write timeout
2020-03-30 12:01:51 +03:00
Stefan Prodan
ccc17b080f Merge pull request #534 from weaveworks/ingress-update
pkg/router: update ingress API to networking.k8s.io/v1beta1
2020-03-30 11:17:49 +03:00
stefanprodan
a5986987b7 loadtester: release v0.15.0 2020-03-30 11:14:27 +03:00
stefanprodan
3807a5019d loadtester: set server write timeout 2020-03-30 11:13:10 +03:00
stefanprodan
6d2c172fca docs: update NGINX tutorial to networking.k8s.io 2020-03-29 13:38:59 +03:00
stefanprodan
017ca70807 e2e: update tests to ingress networking.k8s.io 2020-03-29 13:37:29 +03:00
stefanprodan
4ff00958ef artifacts: add networking.k8s.io to RBAC 2020-03-29 13:27:33 +03:00
stefanprodan
4ba27f018d router: update ingress API to networking/v1beta1 2020-03-29 13:15:00 +03:00
Takeshi Yoneda
388ad0400d Merge pull request #531 from weaveworks/refactor-finalizer
finalizer: refactoring
2020-03-29 17:50:19 +09:00
mathetake
ef8b6fe9b8 finalizer: refactoring 2020-03-29 17:07:18 +09:00
Takeshi Yoneda
7676918184 Merge pull request #530 from weaveworks/remove-skipLivenessChecks-param
pkg/{canary,controller}: remove unused skipLivenessChecks
2020-03-28 19:11:13 +09:00
mathetake
65d4b28b58 pkg/{canary,controller}: remove unused skipLivenessChecks 2020-03-28 18:13:00 +09:00
Stefan Prodan
9c46be131e Merge pull request #524 from Wihrt/prometheus-operator-docs
docs: Add prometheus-operator tutorial
2020-03-27 17:07:45 +02:00
Takeshi Yoneda
61fd505179 Merge pull request #529 from weaveworks/fix-daemonset-ready-cond
pkg/canary/daemonset: fix ready condition according to kubectl
2020-03-27 19:48:43 +09:00
mathetake
4242bf0f07 pkg/canary/daemonset: fix ready condition according to kubectl 2020-03-27 19:31:12 +09:00
Stefan Prodan
7821bc66d6 Merge pull request #528 from sayboras/bugfix/typo-struct
Remove extra space in json tags
2020-03-27 10:24:04 +02:00
sayboras
df5a2d8266 Remove extra space in json tags 2020-03-27 18:52:57 +11:00
Arnaud Hatzenbuhler
42c3080c19 Commit requested changes and fix some typos 2020-03-26 17:54:04 +01:00
Arnaud Hatzenbuhler
a548bfd8e6 Fix typo 2020-03-26 17:08:10 +01:00
Arnaud Hatzenbuhler
37c7ddc9a4 Rewrite documentation to use podinfo as example 2020-03-26 17:03:38 +01:00
Takeshi Yoneda
70475d475b Merge pull request #526 from weaveworks/fix-typo-daemon
fix typos in error messages
2020-03-27 00:55:34 +09:00
mathetake
6a16dc0c7c fix typos in error messages 2020-03-27 00:03:43 +09:00
Arnaud Hatzenbuhler
2b1aacc8e3 Add documentation for prometheus-operator 2020-03-26 11:15:32 +01:00
Stefan Prodan
658dec2693 Merge pull request #521 from sayboras/feature/service-accounts-annotations
chart: Add annotations for service account
2020-03-25 09:58:35 +02:00
sayboras
99366b4960 Clarify more details for serviceAccount.name 2020-03-25 16:37:38 +11:00
sayboras
562467765a Add annotations for service account 2020-03-25 16:30:10 +11:00
Stefan Prodan
423424cb3d Merge pull request #520 from weaveworks/release-v1.0.0-rc.3
Release v1.0.0-rc.3
2020-03-23 13:19:29 +02:00
stefanprodan
79759243d4 build: post report only if coverage changes 2020-03-23 12:51:21 +02:00
stefanprodan
1d43447994 Release Flagger v1.0.0-RC.3 2020-03-23 12:15:20 +02:00
stefanprodan
ce426b50e3 install: Update Prometheus to v2.16.0 2020-03-23 12:03:28 +02:00
stefanprodan
683bc0b5ff docs: Update install docs for Kustomize 3.5.0 2020-03-23 12:03:28 +02:00
Stefan Prodan
41af740798 Merge pull request #519 from weaveworks/e2e-up-ingress
e2e: Update Contour and Gloo
2020-03-23 12:02:22 +02:00
stefanprodan
3458757c35 e2e: Update Gloo to v1.3.14 2020-03-23 11:42:00 +02:00
stefanprodan
4e24ad53bd e2e: Update Contour to v1.3 2020-03-23 11:40:36 +02:00
Stefan Prodan
0bdaa008aa Merge pull request #516 from tariq1890/upd_dep
clean up and update dependencies of flagger
2020-03-23 09:35:54 +02:00
Stefan Prodan
16ecb4bed7 Merge pull request #514 from weaveworks/preserve-nodeports
Preserve node ports on service reconciliation
2020-03-23 09:35:39 +02:00
Stefan Prodan
dbdf198b74 Merge pull request #495 from ta924/addFinalize
Add canary finalizers
2020-03-23 09:34:17 +02:00
Tanner Altares
7cf836e982 support for daemonset finalize
fmt

update e2e test typo

finalizer return if not found

fix typo
2020-03-20 21:41:43 -05:00
Tariq Ibrahim
d8d2345359 clean up and update dependencies of flagger
Signed-off-by: Tariq Ibrahim <tariq181290@gmail.com>
2020-03-20 17:11:02 -07:00
Tanner Altares
a3e3567f1e finalize docs 2020-03-20 15:13:51 -05:00
Tanner Altares
c9a07cec87 add e2e tests istio
add e2e tests istio

clean up comment from review

clean up logging statement

add log statement on e2e iteration

extend timeout for finalizing

add phase to kustomize crd

revert timeout on circleci

vs and svc checks for istio e2e tests

fix fmt errors and tests

add get statement in e2e test

add namespace to e2e

use only selector for service revert 2020-03-20 15:13:51 -05:00
2020-03-20 15:13:51 -05:00
Tanner Altares
92937a8f48 kubectl annotation support
rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

introduction of finalizer

rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

add unit tests for finalizing

introduction of finalizer

rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

run fmt to clean up formatting

review changes

add kubectl annotation

add kubectl annotation support

introduction of finalizer

introduction of finalizer

rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

introduction of finalizer

rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

add unit tests for finalizing

introduction of finalizer

rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

run fmt to clean up formatting

review changes

introduction of finalizer

introduction of finalizer

rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

introduction of finalizer

rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

add unit tests for finalizing

introduction of finalizer

rebase and squash

fix fmt issues

revert Dockerfile

revert go.mod and go.sum

introduction of finalizer

introduction of finalizer

remove test for finalizer add istio tests

fix fmt issues

revert go.mod and go.sum

revert Dockerfile and main.go

fmt deployment controller

run fmt to clean up formatting

review changes
2020-03-20 15:13:51 -05:00
Tanner Altares
b39add9ee6 introduction of finalizer 2020-03-20 15:13:51 -05:00
stefanprodan
a193d13e37 Preserve node ports on service reconciliation
Allow taking over Kubernetes services of type LoadBalancer
2020-03-20 10:10:30 +02:00
Stefan Prodan
52d9951eb9 Merge pull request #512 from weaveworks/rc.2
Release Flagger v1.0.0-rc.2
2020-03-19 15:08:19 +02:00
stefanprodan
8684a074aa Add changelog for v1.0.0-rc.2 2020-03-19 13:53:16 +02:00
stefanprodan
1d618a9945 Release loadtester v0.14.0 2020-03-19 13:25:13 +02:00
stefanprodan
0def28d7c3 Release Flagger v1.0.0-rc.2 2020-03-19 12:55:35 +02:00
Stefan Prodan
b146163130 Merge pull request #511 from sayboras/debug/nginx-metrics
e2e: Upgrade nginx helm chart to 1.34.2
2020-03-19 12:29:25 +02:00
sayboras
28c98f0793 Update nginx docs 2020-03-19 20:58:27 +11:00
sayboras
2954317982 Remove obsolete stats configuration 2020-03-19 20:53:39 +11:00
sayboras
830f3ac18f Upgrade nginx helm chart to 1.34.2 2020-03-19 20:36:09 +11:00
Stefan Prodan
d6d65e20e8 Merge pull request #507 from ernst01/loadtester-concord-support
loadtester: add concord test support
2020-03-19 09:31:22 +02:00
Stefan Prodan
061c17971e Merge pull request #509 from stealthybox/grpc-typos
Fix gRPC typos
2020-03-19 08:18:40 +02:00
leigh capili
bbbcfd6cde Fix gRPC typos 2020-03-18 20:16:37 -06:00
Thiebaud Ernstberger
7f590b0701 Issue #477: added concord test support to loadtester + documentation 2020-03-18 14:58:46 -07:00
Stefan Prodan
4d90abf581 Merge pull request #506 from sayboras/feature/logging-helm
Add the logLevel configuration in helm charts
2020-03-17 13:59:01 +02:00
sayboras
f2ef8339d3 Update README.md 2020-03-17 22:29:11 +11:00
sayboras
b02370102f Revert chart version upgrade
Remove if/else for logLevel
2020-03-17 22:26:48 +11:00
sayboras
3aa3ae2de4 Add the logLevel configuration in helm charts 2020-03-17 21:39:50 +11:00
Stefan Prodan
e76a6792b9 Merge pull request #502 from weaveworks/fix-istio-examples
docs: Update A/B testing docs for Istio 1.5
2020-03-16 09:29:43 +02:00
Takeshi Yoneda
ebfbe1b535 Merge pull request #504 from weaveworks/nop-notifier
fix: nil pointer on notifier
2020-03-15 21:39:53 +09:00
mathetake
f0bd307d3c fix: nil pointer on notifier 2020-03-15 18:44:05 +09:00
stefanprodan
bc4e0d69a2 docs: Update A/B testing docs for Istio 1.5
- change the header match condition to a regex expression compatible with google re2
2020-03-14 14:51:37 +02:00
Stefan Prodan
b8682ccfd4 Merge pull request #500 from weaveworks/fix-docs-links
docs: fix wrong links
2020-03-14 09:51:00 +02:00
mathetake
51adfc1f60 docs: fix wrong links 2020-03-14 16:33:46 +09:00
Stefan Prodan
38299fd947 Merge pull request #494 from weaveworks/docs-faq
docs: How to retry a failed release
2020-03-11 08:21:36 +02:00
stefanprodan
3a1e66ec03 docs: How to retry a failed release 2020-03-10 16:30:20 +02:00
Takeshi Yoneda
9a5328b507 Merge pull request #492 from weaveworks/mirror-percentage
istio router: make mirrorPercentage configurable for traffic mirroring
2020-03-10 16:44:41 +09:00
Stefan Prodan
99103840a1 Merge pull request #493 from weaveworks/query-deprecation
docs: Add metric.query deprecation notice
2020-03-10 09:42:01 +02:00
Stefan Prodan
7a24cee6d5 Merge pull request #490 from staceypotter/patch-1
docs: Fix typo in changelog
2020-03-10 09:29:26 +02:00
stefanprodan
523903e0af docs: Add metric.query deprecation notice 2020-03-10 09:22:25 +02:00
mathetake
7380dbb8ab make MirrorWeight's type int, not float64 2020-03-10 15:47:46 +09:00
mathetake
3425d6e965 pkg/router/istio: use canary.GetAnalysis to prevent nil pointer 2020-03-10 15:28:20 +09:00
mathetake
d911e1ddc5 docs: add mirrorWeight example 2020-03-10 15:16:03 +09:00
mathetake
aec0010b14 ignore MirrorPercentage in reconcileVirtualService 2020-03-10 14:54:57 +09:00
mathetake
3a887afa38 fix json key of mirrorWeight 2020-03-10 14:15:18 +09:00
mathetake
adff6989f5 pkg/router/istio: add test for mirrorWeight 2020-03-10 11:42:44 +09:00
mathetake
1f6160148c change CanaryAnalysis to Analysis left in tests 2020-03-10 11:25:57 +09:00
mathetake
8242e7691a pkg/router/istio: set mirrorWeight if provided 2020-03-10 09:53:53 +09:00
mathetake
cb130d3239 add api changes for making mirrorPercentage configurable 2020-03-10 09:43:23 +09:00
Stacey Potter
246e5f8c13 fixed typo
"where" to "were"
2020-03-09 10:19:50 -07:00
Stefan Prodan
c5dffbaa3f Merge pull request #489 from weaveworks/e2e-nginx-0.30.0
e2e: Update NGINX ingress to v0.30.0
2020-03-09 19:19:47 +02:00
stefanprodan
31090d08b6 e2e: Update NGINX ingress to v0.30.0 2020-03-09 18:44:49 +02:00
Stefan Prodan
010852edd1 Merge pull request #486 from weaveworks/docs-istio-telemetry-v2
docs: Add Istio telemetry v2 to upgrade guide
2020-03-09 12:39:30 +02:00
stefanprodan
2e54ef4a31 docs: Add Istio telemetry v2 to upgrade guide 2020-03-09 12:11:56 +02:00
Takeshi Yoneda
39c5968606 Merge pull request #485 from weaveworks/fix-doc-canaryAnalysis-analysis
docs: change canaryAnalysis left in docs to analysis
2020-03-09 18:36:42 +09:00
mathetake
951386392d docs: change canaryAnalysis to analysis as it's deprecated 2020-03-09 18:19:58 +09:00
Stefan Prodan
714dd86cd4 Merge pull request #484 from weaveworks/scheduler-refactoring
pkg/controller: Refactor scheduler
2020-03-09 11:12:48 +02:00
stefanprodan
d35290dd6e pkg/controller: Refactor scheduler
- move scheduler metrics and hooks to dedicated files
- remove weight params from shouldSkipAnalysis
2020-03-09 10:40:16 +02:00
Stefan Prodan
fbc886794e Merge pull request #483 from weaveworks/kube-e2e
e2e: Consolidate Kubernetes e2e tests
2020-03-09 10:32:33 +02:00
stefanprodan
5cd78bfd40 e2e: Consolidate Kubernetes e2e tests
- run both Deployment and DaemonSet tests on the same Kubernetes Kind cluster
- add cleanup script that deletes the test namespace before running the DaemonSet tests
- set Kubernetes version to 1.17.2
2020-03-09 10:10:37 +02:00
Takeshi Yoneda
862dfbde94 Merge pull request #481 from weaveworks/daemonset-service-router
pkg/router: renamed KubernetesDeploymentRouter to KubernetesDefaultRouter
2020-03-08 17:57:53 +09:00
mathetake
ea42f704f0 pkg/router: rename KubernetesDeploymentRouter to KubernetesDefaultRouter 2020-03-08 17:38:38 +09:00
Stefan Prodan
30dc29b689 Merge pull request #479 from weaveworks/istio-1.5
e2e: Istio 1.5
2020-03-08 10:22:41 +02:00
Takeshi Yoneda
ffbbc2ca33 Merge pull request #480 from weaveworks/refactor-error-handling
refactor error handling: organize messages, wrap with %w and use errors.Is
2020-03-08 17:03:23 +09:00
mathetake
a32bd63eda pkg/metrics/providers/datadog: improve request failure error message 2020-03-08 16:30:50 +09:00
mathetake
22f860a3a3 refactor pkg/controller 2020-03-08 16:06:53 +09:00
mathetake
ce89a24947 pkg/canary: refactor error handling 2020-03-08 13:38:27 +09:00
mathetake
f34d94a912 pkg/loadtester: improve error handling messages 2020-03-08 13:12:56 +09:00
mathetake
0be72ab981 pkg/notifier: improve error handling messages 2020-03-08 11:53:03 +09:00
mathetake
23ab1bdb4b pkg/router: improve error handling messages 2020-03-08 11:45:09 +09:00
mathetake
64efd56ce9 pkg/metrics/observers: wrap errors 2020-03-08 10:56:35 +09:00
mathetake
5843b02931 pkg/canary: refactor error messages 2020-03-08 10:42:51 +09:00
mathetake
2ec24bb17d pkg/metrics/providers: wrap ErrNoValuesFound and modify controller accordingly 2020-03-08 00:17:52 +09:00
mathetake
7fb675e8aa make deployment tests aligned with daemonset 2020-03-07 23:28:50 +09:00
mathetake
8e9b9a358f pkg/canary: refactor error handling and enhance messages 2020-03-07 23:10:07 +09:00
stefanprodan
e76e718967 e2e: Use custom latency check for Istio 1.5 2020-03-07 11:57:46 +02:00
stefanprodan
438a9839d2 e2e: Update Istio to v1.5.0 2020-03-07 10:34:51 +02:00
Stefan Prodan
e59acc7bae Merge pull request #476 from weaveworks/fix-release-notes
build: generate release notes on disk
2020-03-04 20:48:55 +02:00
stefanprodan
2fb36d58b1 build: generate release notes on disk 2020-03-04 20:48:06 +02:00
Stefan Prodan
c0dbef37c6 Merge pull request #472 from weaveworks/release-1.0.0-rc.1
Release Flagger v1.0.0-RC.1
2020-03-04 19:27:44 +02:00
stefanprodan
cfd2ff92bf Add Ingress v2 to roadmap 2020-03-04 16:13:49 +02:00
stefanprodan
8d9dde2dc7 docs: update Flux tutorial to latest version 2020-03-04 16:13:49 +02:00
stefanprodan
f164eac58e docs: add API changes section to dev guide 2020-03-04 16:13:49 +02:00
stefanprodan
a0a9b7d29a e2e: use kustomize to install the load tester 2020-03-04 16:13:49 +02:00
stefanprodan
6d4db45d6c build: update Go to v1.14 and Alpine to v3.11 2020-03-04 16:13:49 +02:00
stefanprodan
4f0f7ff9db Update examples to v1beta1 API 2020-03-04 16:13:49 +02:00
stefanprodan
e8924a7e27 Update podinfo chart to v1beta1 API 2020-03-04 16:13:49 +02:00
stefanprodan
eced0f45c6 Update roadmap and readme 2020-03-04 16:13:49 +02:00
stefanprodan
23e6209789 Release Flagger 1.0.0-rc.1 2020-03-04 16:13:49 +02:00
stefanprodan
3d2817dd0d Add changelog for v1.0.0-rc.1 2020-03-04 16:13:49 +02:00
Stefan Prodan
5fecefe3b4 Merge pull request #475 from weaveworks/refactor/test-assertion
refactor tests: simplify assertion
2020-03-04 15:40:35 +02:00
mathetake
a616199b81 refactor tests: simplify assertion 2020-03-04 21:46:08 +09:00
Stefan Prodan
c42c624763 Merge pull request #474 from weaveworks/user-Chick-fil-A
Add Chick-fil-A to user list
2020-03-04 10:06:42 +02:00
stefanprodan
ccd27b4614 Add Chick-fil-A to user list 2020-03-04 10:05:51 +02:00
Stefan Prodan
9258cbeecb Merge pull request #471 from weaveworks/flagger-users-dmm-com
add Flagger user: dmm.com
2020-03-03 09:18:44 +02:00
mathetake
c66ef8f935 add Flagger user: dmm.com 2020-03-03 16:17:54 +09:00
Stefan Prodan
cf1f8f4140 Merge pull request #469 from weaveworks/flagger-users
Add Flagger users to readme
2020-03-03 09:12:39 +02:00
stefanprodan
ac3492a7b4 Add Flagger users to readme 2020-03-03 09:11:25 +02:00
Stefan Prodan
b47cfb62b2 Merge pull request #464 from weaveworks/cloud-watch-metrics
pkg/metrics/providers: add AWS CloudWatch provider
2020-03-03 09:05:37 +02:00
mathetake
ecb8207488 pkg/metrics/provider: fix a typo in cloudwatch permission name in comments 2020-03-02 22:26:55 +09:00
mathetake
5faf63ed24 docs/gitbooks/usage/metrics: add cloud watch metrics example 2020-03-02 22:21:47 +09:00
mathetake
7b6a5f96a1 pkg/metrics/providers: add region field for MetricTemplate.Provider
and make Address not required
2020-03-02 21:58:58 +09:00
mathetake
4ff28d7bd5 pkg/metrics/providers: add AWS CloudWatch metrics provider 2020-03-02 21:58:58 +09:00
Stefan Prodan
62f2851dfd Merge pull request #457 from weaveworks/docs-v1beta1
Update docs for Flagger v1beta1 API
2020-03-02 13:30:47 +02:00
stefanprodan
34c9fecf8c docs: add prerequisites to tutorials 2020-03-02 13:05:33 +02:00
stefanprodan
b6958733e1 docs: replace threshold with thresholdRange 2020-03-01 22:36:08 +02:00
stefanprodan
0c998c36cf docs: add upgrade guide for v1beta1 2020-03-01 12:13:55 +02:00
stefanprodan
bf0499e8a6 docs: use metric providers in tutorials 2020-02-29 11:56:13 +02:00
stefanprodan
c4a9712b81 docs: add getting started section 2020-02-29 11:37:18 +02:00
stefanprodan
49c088595e Add code changes section to dev docs 2020-02-28 18:46:52 +02:00
stefanprodan
be4c67540d build: make release compatible with go mod 2020-02-28 18:46:26 +02:00
stefanprodan
a9fba0a1f2 docs: rename canaryAnalysis to analysis 2020-02-28 18:20:57 +02:00
stefanprodan
98ecae93e1 Set API version to v1beta1 in docs examples 2020-02-28 18:20:57 +02:00
stefanprodan
5aa9dd154c Add datadog metric provider to docs
Ref: #460
2020-02-28 18:20:57 +02:00
stefanprodan
e4da4a34a6 Add dev guides section to docs 2020-02-28 18:20:57 +02:00
stefanprodan
2837d4407e Split the CRD docs into canary target, service, status, analysis 2020-02-28 18:20:57 +02:00
stefanprodan
0e81b5f4d2 Update docs for Flagger v1beta1 API 2020-02-28 18:20:57 +02:00
Stefan Prodan
8f12128aaf Merge pull request #467 from weaveworks/build-v1beta1
Release loadtester v0.13.0
2020-02-28 18:19:38 +02:00
stefanprodan
dd7a045542 Release loadtester v0.13.0 2020-02-28 17:58:39 +02:00
stefanprodan
baadc19a42 build: cleanup makefile 2020-02-28 17:46:43 +02:00
stefanprodan
981abdbc85 tester: Update Helm binaries and bash 2020-02-28 17:17:05 +02:00
stefanprodan
273f84b374 tester: Fix health.proto permissions 2020-02-28 13:43:28 +02:00
Stefan Prodan
793e998a39 Merge pull request #463 from weaveworks/analysis-v1beta1
Rename spec.canaryAnalysis to spec.analysis
2020-02-28 13:18:48 +02:00
stefanprodan
9a44c5baac build: add goimports to CI 2020-02-28 12:58:49 +02:00
stefanprodan
a30f688450 fmt: fix imports formatting
- run gofmt and goimports
2020-02-28 12:52:23 +02:00
stefanprodan
19faf67523 e2e: update istio to 1.4.5 and NGINX to 1.33.0 2020-02-28 12:28:09 +02:00
stefanprodan
3e0867040f Add unit tests for canary phases 2020-02-28 11:31:47 +02:00
stefanprodan
82660e23da Update e2e tests to v1beta1
- set Canary API version to flagger.app/v1beta1
- rename canaryAnalysis to analysis
2020-02-28 11:31:27 +02:00
stefanprodan
43662582b8 Replace spec.canaryAnalysis with spec.analysis in CRD
- rename spec.canaryAnalysis to spec.analysis
- required fields: spec.analysis.interval and spec.analysis.threshold
2020-02-28 11:30:06 +02:00
stefanprodan
287977c2b5 Deprecate spec.canaryAnalysis replaced by spec.analysis
- add analysis field to Canary spec
- deprecate canaryAnalysis field (to be removed in the next API version)
- maintain backwards compatibility with v1alpha3 by using spec.canaryAnalysis if spec.analysis is nil
- set analysis threshold default value to 1
2020-02-28 11:24:38 +02:00
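For reference, a minimal sketch of the rename (hypothetical values): new manifests should use `spec.analysis`, while `spec.canaryAnalysis` keeps working for v1alpha3 compatibility.

```
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  # previously spec.canaryAnalysis (deprecated)
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
```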
Stefan Prodan
41f644ab8c Merge pull request #461 from weaveworks/add-maintainer/mathetake
docs: add maintainer: @mathetake
2020-02-27 13:41:37 +02:00
mathetake
74618a9016 docs: add maintainer: @mathetake 2020-02-27 13:22:59 +02:00
Stefan Prodan
450dcd692e Merge pull request #462 from weaveworks/ci-push
ci: push container for master only
2020-02-27 13:22:18 +02:00
stefanprodan
e2b4a3de32 ci: push container for master only 2020-02-27 13:03:58 +02:00
Stefan Prodan
c17c69ec1b Merge pull request #460 from mathetake/datadog-metrics-provider
feature: add datadog metrics provider
2020-02-27 11:41:55 +02:00
mathetake
a157824130 metrics/provider: add datadog metrics provider
implement metrics provider interface for datadog, tested against
the actual datadog account

resolves #284
2020-02-27 17:35:03 +09:00
Stefan Prodan
6c398c246f Merge pull request #455 from mathetake/feature/daemonset-target
feat: Support daemonset target kind
2020-02-26 12:02:54 +02:00
Stefan Prodan
31f38a4f43 GitBook: [master] 24 pages modified 2020-02-26 08:43:28 +00:00
mathetake
9c8b887d30 use LastTransitionTime for deadline calculation
and run goimports on several files
2020-02-26 12:57:26 +09:00
mathetake
eec343f3aa prevent infinite loop 2020-02-25 18:14:43 +09:00
mathetake
4fe19be9b7 allow flagger to access apps.daemonsets resources 2020-02-25 13:25:37 +09:00
mathetake
cc07c2891e add DaemonSet targetKind in crd and change label selector
and ignore daemonSetScaleDownNodeSelector in target spec change detection
2020-02-25 13:00:36 +09:00
mathetake
336344720c add e2e test for daemonset target type 2020-02-23 13:55:45 +09:00
mathetake
5af1665ef8 pkg/controller: add unit test for daemonset target 2020-02-23 13:39:11 +09:00
mathetake
a828b43463 pkg/canary: add daemonset target controller 2020-02-23 12:25:38 +09:00
Stefan Prodan
bf1089b204 Merge pull request #454 from weaveworks/contour-1.2
Contour 1.2
2020-02-22 08:55:32 +02:00
stefanprodan
0c2d7da136 Use Contour 1.2 in e2e tests 2020-02-22 00:50:38 +02:00
stefanprodan
3968e84efd Fix Contour header override for Linkerd 2020-02-22 00:46:00 +02:00
Stefan Prodan
54d4df5751 Merge pull request #450 from weaveworks/istio-multi-cluster
Add docs for Istio multi-cluster setup
2020-02-21 10:28:02 +02:00
stefanprodan
65c7fd1cf8 Add links to ingress controllers installers 2020-02-20 18:15:42 +02:00
stefanprodan
c3cb9e394d Add docs for Istio multi-cluster setup
- add istio.kubeconfig options to Helm chart
- rename command flag to kubeconfig-service-mesh
2020-02-20 17:57:01 +02:00
Stefan Prodan
ab00a0099c Merge pull request #447 from viditganpi/canary-for-multi-cluster-istio
Add support for Istio multi-cluster
2020-02-20 17:16:08 +02:00
Vidit Mathur
0d493f658d removed non-required binary file 2020-02-20 20:12:56 +05:30
Vidit Mathur
13342e5e7f corrected formatting 2020-02-20 19:55:47 +05:30
Vidit Mathur
1f0a4d9f35 Changes
1. Modified deployment.yaml to remove source config
2. Modified values to change default kubeconfigHost values
3. Removed debugging logs
2020-02-20 19:44:57 +05:30
Vidit Mathur
8e996b61ae removed mode, fallback to default mode for secret file 2020-02-19 17:37:21 +05:30
Vidit Mathur
5462db7c11 1. Modified cmd/main to consume host kubeconfig and service kubeconfig
2. Modified deployment to pass and mount secrets for configs.
2020-02-19 17:23:33 +05:30
Stefan Prodan
91ef81201e Merge pull request #449 from ta924/loadtesterRollbackSupport
Add support for rollback gating in tester API
2020-02-19 10:13:28 +02:00
Tanner Altares
34ed690416 update the docs 2020-02-18 15:00:03 -06:00
Tanner Altares
0598e4b51e add support for rollback gating for loadtester 2020-02-18 12:33:52 -06:00
Stefan Prodan
7f9cc30b07 Merge pull request #448 from weaveworks/docs-dev
docs: Add development and release guide
2020-02-18 18:47:31 +02:00
stefanprodan
52ee018ffd Add dev guide to readme 2020-02-18 18:27:15 +02:00
stefanprodan
3b8c285870 Move examples to tutorials docs 2020-02-18 18:13:20 +02:00
stefanprodan
77aef5591d Add deployment strategies to usage docs 2020-02-18 18:12:53 +02:00
stefanprodan
143397c45e Add development guide to docs 2020-02-18 18:12:23 +02:00
Stefan Prodan
c6884fb5b4 Merge pull request #446 from weaveworks/fix-hashing
Fix spec changes detection
2020-02-18 11:54:59 +02:00
Vidit Mathur
2cb6ce4697 Canary support for Istio multi-cluster
If applied, this commit will resolve the following reported issue: https://github.com/weaveworks/flagger/issues/437
Adds support for consuming the kubeconfig of the Istio host cluster where the Istio resources will be created.
2020-02-18 12:07:59 +05:30
stefanprodan
c3b1ee6dae Add test for last promoted hash 2020-02-18 08:32:56 +02:00
stefanprodan
f5182061ef Compute spec hash with spew instead of hashstructure 2020-02-17 20:00:35 +02:00
stefanprodan
890365c189 ci: List PRs in release notes 2020-02-17 16:48:53 +02:00
Stefan Prodan
0801576dcf Merge pull request #442 from weaveworks/header-ops
Use header operations in Istio router
2020-02-15 10:15:41 +02:00
stefanprodan
3a5a0faa4f build: List PRs in release notes 2020-02-15 00:45:50 +02:00
stefanprodan
172c4f56dd Use header operations in Istio router
- remove deprecated appendHeaders from Istio client
- propagate header operations from canary service headers to Istio virtual service
- add Istio router tests for request/response header removal
- update header operations examples in docs
2020-02-14 13:59:36 +02:00
Stefan Prodan
e1bb8e741e Merge pull request #441 from weaveworks/istio-v1alpha3
Extend Istio traffic policy
2020-02-14 13:02:05 +02:00
stefanprodan
b4753f68b5 Disable CRD creation for Helm v2 2020-02-14 12:45:31 +02:00
stefanprodan
33d57af233 e2e: Install CRDs with Helm v3 2020-02-14 12:43:21 +02:00
stefanprodan
0106dff2d7 Update packages to Kubernetes v1.17.2 2020-02-14 12:35:33 +02:00
stefanprodan
37a2bf966a Sync CRDs from artifacts dir 2020-02-14 12:35:03 +02:00
stefanprodan
57b1732b67 Add crds dir to Helm chart
Allow installing the CRDs with Helm v3
2020-02-14 12:34:18 +02:00
stefanprodan
acce3a9c13 Add Istio traffic policy validation schema to CRD 2020-02-14 12:31:59 +02:00
stefanprodan
05050c950a Add missing fields to Istio destination rule
- add ConsecutiveGatewayErrors, Consecutive5xxErrors and MinHealthPercent to OutlierDetection
- add H2UpgradePolicy and IdleTimeout to ConnectionPool HTTPSettings
2020-02-14 12:30:15 +02:00
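A sketch of how these fields could appear in a canary's Istio traffic policy (hypothetical values, mirroring the Istio DestinationRule API):

```
# Canary spec excerpt (sketch)
  service:
    port: 9898
    trafficPolicy:
      connectionPool:
        http:
          idleTimeout: 30s
          h2UpgradePolicy: UPGRADE
      outlierDetection:
        consecutiveGatewayErrors: 5
        consecutive5xxErrors: 5
        minHealthPercent: 50
```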
Stefan Prodan
8c1166fa5b Merge pull request #440 from weaveworks/smi-v1alpha2-client
SMI TrafficSplit v1alpha2 client
2020-02-14 11:20:03 +02:00
stefanprodan
68c6d302b7 Add SMI router tests 2020-02-14 00:40:01 +02:00
stefanprodan
951a4435eb Add SMI v1alpha1 to v1alpha2 conversion 2020-02-14 00:39:49 +02:00
stefanprodan
98bd8696f2 Refactor router test fixture 2020-02-14 00:38:51 +02:00
stefanprodan
41f535191e Add SMI TrafficSplit v1alpha2 client 2020-02-14 00:37:25 +02:00
Stefan Prodan
0697343b7c Merge pull request #438 from weaveworks/e2e-updates
Update e2e tests and docs
2020-02-13 18:14:36 +02:00
stefanprodan
39e44e6a7a e2e: Update Istio to v1.4.4 2020-02-13 17:42:58 +02:00
stefanprodan
67ba14e438 e2e: Update Linkerd to v2.7.0 2020-02-13 17:22:30 +02:00
Stefan Prodan
1a9cec9cb7 Merge pull request #436 from weaveworks/istio-gateway-port
Set destination port for Istio ingress gateways
2020-02-13 17:11:28 +02:00
stefanprodan
6347861fda Update docs to Helm v3 2020-02-13 12:59:13 +02:00
stefanprodan
bd3435b82d Update Gloo docs to v1.3.5 2020-02-13 12:45:47 +02:00
stefanprodan
0bd66f4603 e2e: Update Gloo gateway proxy address 2020-02-13 12:19:36 +02:00
stefanprodan
78dacc98fa e2e: Fix NGINX helm uninstall 2020-02-13 12:17:30 +02:00
stefanprodan
71a220d432 e2e: Fix Gloo routes 2020-02-13 11:54:36 +02:00
stefanprodan
089aa1fe22 e2e: Create namespaces for Helm v3 2020-02-13 11:23:10 +02:00
stefanprodan
c88fa5d882 e2e: Update Gloo to v1.3.5 2020-02-13 11:14:26 +02:00
stefanprodan
14214bc2fe e2e: Update Helm to v3.0.3 2020-02-13 11:14:07 +02:00
stefanprodan
4c5b226b4c Add tests for Istio gateways 2020-02-12 11:21:52 +02:00
Stefan Prodan
4d8b153cf9 Merge pull request #433 from weaveworks/projected-configs
Track projected configmaps and secrets
2020-02-12 09:37:56 +02:00
stefanprodan
ea4d9ba58d Set destination port for Istio ingress gateways 2020-02-11 17:07:10 +02:00
stefanprodan
c181eb464c Track projected configmaps and secrets
- scan volumes with projected configmaps and secrets
- update primary volumes with configmaps and/or secrets projections
- add tests for configmaps and secrets projections
2020-02-11 11:36:16 +02:00
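For context, a projected volume of the kind now tracked might look like this (standard Kubernetes API, hypothetical names):

```
# pod template excerpt (sketch)
  volumes:
    - name: config
      projected:
        sources:
          - configMap:
              name: podinfo-config
          - secret:
              name: podinfo-secrets
```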
Stefan Prodan
ad68ca3a4a Merge pull request #429 from weaveworks/alerts
Implement canary alerts and alert providers
2020-02-11 11:21:30 +02:00
Stefan Prodan
963a9afd09 Merge pull request #430 from heubeck/eventhookUrlFromEnv
Add webhookUrl env parameter
2020-02-11 08:48:36 +02:00
Florian Heubeck
4e8b7d4cb4 Add webhookUrl env parameter
The environment variable 'EVENT_WEBHOOK_URL' can be used to override/set the eventWebhook argument, which serves as the default value
2020-02-10 18:31:08 +01:00
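A sketch of setting the variable on the Flagger container (hypothetical URL):

```
# Flagger container env excerpt (sketch)
  env:
    - name: EVENT_WEBHOOK_URL
      value: "https://events.example.com/flagger"
```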
stefanprodan
fd85a3426a Implement Rocket chat notifier 2020-02-10 18:23:34 +02:00
stefanprodan
e257d48262 Add unit tests for canary alerts 2020-02-10 17:24:06 +02:00
stefanprodan
1a87a9be45 Implement notifications based on alert providers and severity 2020-02-10 15:25:56 +02:00
stefanprodan
35cf634d89 Implement Discord notifier with Slack formatting 2020-02-10 11:39:57 +02:00
stefanprodan
86e813f6b7 Add alert providers to RBAC 2020-02-10 11:05:53 +02:00
stefanprodan
c4c3342eb9 Add alert provider to CRD yamls 2020-02-10 10:36:31 +02:00
stefanprodan
898edee67e Refactor Flagger APIs and tests
- add CrossNamespaceObjectReference type
- add informers collection to controller
- use the informer cache to query for metric templates
- rename mock to fixture
- regenerate clientset
2020-02-10 10:36:31 +02:00
stefanprodan
0673b54092 Implement AlertProvider CRD 2020-02-10 10:36:31 +02:00
Stefan Prodan
3cfb1fbb65 Merge pull request #425 from weaveworks/nop-tracker
Allow disabling secrets/configmaps tracking
2020-02-10 09:44:51 +02:00
Stefan Prodan
ea4a84991e Merge pull request #424 from weaveworks/threshold-range
Implement metric range validation
2020-02-09 10:19:34 +02:00
stefanprodan
d5ba46965f Allow config tracking option to chart 2020-02-08 22:52:05 +02:00
stefanprodan
7c0e3d9a0b Allow config tracking toggling
- Add enable-config-tracking command arg (true by default)
- Add no-operation tracker
- Add tests for nop tracker
2020-02-08 22:24:46 +02:00
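A sketch of turning the tracker off via the new command arg (container args excerpt; the flag name is taken from the commit above, the args layout is assumed):

```
# Flagger container args excerpt (sketch)
  args:
    - -enable-config-tracking=false
```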
stefanprodan
5c479d9d80 Add metric templates to controller tests 2020-02-08 20:09:33 +02:00
stefanprodan
8f99e589a6 Add metrics to controller tests
Fix: #387
2020-02-08 19:08:36 +02:00
stefanprodan
e4e92b3353 Add metric threshold range to e2e tests 2020-02-08 15:14:52 +02:00
stefanprodan
228954b5db Improve Canary CRD schema validation
- add thresholdRange validation
- add Kubernetes Kind validation for target, autoscaler and ingress
- add validation for webhook metadata map[string]string
- add missing Istio types to schema validation
2020-02-08 15:11:11 +02:00
stefanprodan
de03d49f55 Implement metric threshold range
- add CanaryThresholdRange type to Canary API
- add optional thresholdRange field to the analysis metric object
- implement min/max metric result validation
- thresholdRange takes precedence over threshold when both are specified
2020-02-08 15:04:03 +02:00
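A sketch of analysis metrics using the new field (hypothetical values); when both are set, `thresholdRange` takes precedence over `threshold`:

```
# Canary analysis excerpt (sketch)
  analysis:
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500
        interval: 1m
```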
Stefan Prodan
b7c2dcda0e Merge pull request #423 from weaveworks/crd-v1beta1
crd: Release flagger.app/v1beta1
2020-02-07 13:10:11 +02:00
stefanprodan
f2f2a9fc58 Add provider type to metric template printer 2020-02-07 12:48:39 +02:00
stefanprodan
22589265ce Keep serving v1alpha3 for backwards compat 2020-02-07 12:35:56 +02:00
stefanprodan
3f83f306a5 Upgrade canary resources in-cluster 2020-02-07 12:35:56 +02:00
stefanprodan
448c210324 Release API version v1beta1
- bump Canary and MetricTemplate version to v1beta1
- regenerate clientset and CRD
2020-02-07 12:35:56 +02:00
Stefan Prodan
ea39041b24 Merge pull request #419 from weaveworks/metric-template
Implement metric templates for Prometheus
2020-02-07 12:32:14 +02:00
stefanprodan
eec287a501 Check if CRDs are registered before starting informers 2020-02-06 15:07:53 +02:00
stefanprodan
54c03f4d07 Add metric templates to RBAC 2020-02-06 15:07:53 +02:00
stefanprodan
95b389a8fa Add e2e tests for metric templates 2020-02-06 15:07:53 +02:00
stefanprodan
b17d84a39d Run the metric checks defined in templates 2020-02-06 15:07:53 +02:00
stefanprodan
d7d9d1eabe Migrate the builtin Prometheus checks to metric templates 2020-02-06 15:07:53 +02:00
stefanprodan
d154c63ac3 Implement Prometheus provider 2020-02-06 15:07:53 +02:00
stefanprodan
d9252748d2 Add MetricTemplate CRD and clientset 2020-02-06 15:07:53 +02:00
Stefan Prodan
1cca5a455b Merge pull request #422 from weaveworks/prep-0.23.0
Release v0.23.0
2020-02-06 15:06:23 +02:00
stefanprodan
1b651500a1 Release v0.23.0 2020-02-06 14:49:04 +02:00
Stefan Prodan
e457b6d35c Merge pull request #420 from ta924/manualrollback
Add support for gated rollback
2020-02-06 13:48:32 +02:00
Tanner Altares
402dda71e6 manual push to trigger build 2020-02-05 19:17:45 -06:00
Tanner Altares
69e969ac51 modify the hook name 2020-02-05 14:49:35 -06:00
Tanner Altares
edbc373109 add docs for manual rollback 2020-02-05 14:14:13 -06:00
Tanner Altares
1d23c0f0a2 update CRD manifest to add rollback enum to webhook validation 2020-02-05 10:29:32 -06:00
Tanner Altares
fa950e1a48 support gated rollback 2020-01-30 15:11:59 -06:00
Stefan Prodan
e31ecbedf0 Merge pull request #416 from weaveworks/service-name
Implement service name override
2020-01-28 21:22:41 +02:00
stefanprodan
b982c9e2ae Fix service pod selector 2020-01-26 18:52:15 +02:00
stefanprodan
3766c843fe Add service name field to docs 2020-01-26 13:00:07 +02:00
stefanprodan
e00d9962d6 Use service name override in Kubernetes e2e tests 2020-01-26 12:59:51 +02:00
stefanprodan
940e547e88 Implement service name override
Use targetRef.name as the Kubernetes service name prefix only if service name is not specified
Warn about routing conflicts when service name changes
2020-01-26 12:48:49 +02:00
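A sketch of the override (hypothetical names): when `service.name` is set it becomes the generated service name prefix, otherwise `targetRef.name` is used.

```
# Canary spec excerpt (sketch)
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo-v2
  service:
    # generates the podinfo, podinfo-primary and podinfo-canary services
    name: podinfo
    port: 9898
```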
stefanprodan
e3ecebc9ae Add service name field to Canary CRD 2020-01-26 12:46:08 +02:00
stefanprodan
c38bd144e4 Update Kubernetes packages to v1.17.1 2020-01-25 12:51:44 +02:00
Stefan Prodan
2be6f3d678 Merge pull request #412 from weaveworks/prep-release-0.22.0
Release v0.22.0
2020-01-16 19:50:25 +02:00
stefanprodan
3d7091a56b Use Kubernetes v1.17.0 in e2e tests 2020-01-16 19:33:17 +02:00
stefanprodan
1f0305949e Update Prometheus to v2.15.2 2020-01-16 14:48:06 +02:00
stefanprodan
1332db85c5 Add selector-labels example to docs
Fix: #403
2020-01-16 14:38:50 +02:00
stefanprodan
1f06ec838d Release Flagger v0.22.0 2020-01-16 14:32:33 +02:00
Stefan Prodan
308351918c Merge pull request #411 from weaveworks/contour-up
Update Contour to v1.1 and add Linkerd header
2020-01-16 14:22:51 +02:00
stefanprodan
558a1fc6e6 Add Linkerd l5d-dst-override header to Contour routes 2020-01-16 11:26:02 +02:00
stefanprodan
bc3256e1c5 Update Contour to v1.1 2020-01-16 11:08:55 +02:00
Stefan Prodan
6eaf421f98 Merge pull request #409 from weaveworks/event-webhook
Implement event dispatching webhook
2020-01-16 11:02:32 +02:00
stefanprodan
1271f12d3f Add the event webhook type to docs 2020-01-15 14:29:51 +02:00
stefanprodan
4776b1d285 Implement events dispatching for the event webhook type 2020-01-15 14:12:22 +02:00
stefanprodan
e4dc923299 Add event webhook type to CRD 2020-01-15 14:10:38 +02:00
Stefan Prodan
98ba38d436 Merge pull request #408 from weaveworks/e2e-updates
e2e: Update Kubernetes Kind to v0.7.0
2020-01-15 13:27:14 +02:00
stefanprodan
9d765feb38 Remove deprecated Kind command from e2e 2020-01-14 13:12:54 +02:00
stefanprodan
7e6a70bdbf Update Kubernetes Kind to v0.7.0 2020-01-14 12:55:20 +02:00
Stefan Prodan
455ec1b6e7 Merge pull request #407 from weaveworks/istio-1.4
Update Istio e2e to v1.4.3
2020-01-14 12:48:12 +02:00
Stefan Prodan
3b152a370f Merge pull request #406 from weaveworks/kube-1.17
Update Kubernetes packages to 1.17
2020-01-13 16:03:40 +02:00
stefanprodan
8d7d5e6810 Update Istio e2e to v1.4.3 2020-01-11 20:59:00 +02:00
stefanprodan
8dc4c03258 Update Kubernetes packages to 1.17 2020-01-11 18:24:31 +02:00
Stefan Prodan
0082b3307b Merge pull request #401 from mrparkers/event-webhook
adds general purpose event webhook
2020-01-11 17:54:32 +02:00
Michael Parker
b1a9c33d36 add docs 2020-01-09 16:11:03 -06:00
Michael Parker
6e06cf1074 use unix timestamp ms 2020-01-09 16:10:56 -06:00
Michael Parker
8d61e6f893 rename 2020-01-09 14:26:53 -06:00
Michael Parker
9c71e70a0a webhook tests 2020-01-09 14:25:43 -06:00
Michael Parker
91395ea1ab deepcopy canary for failed notification 2020-01-09 11:05:22 -06:00
Michael Parker
0894304fce use canary copy for new revision notification 2020-01-09 10:45:13 -06:00
Michael Parker
9cfa0ac43f update event payload schema 2020-01-07 11:11:52 -06:00
Michael Parker
1d5029d607 Merge branch 'event-webhook' of github.com:mrparkers/flagger into event-webhook 2020-01-07 09:39:13 -06:00
Michael Parker
e6d1880c93 use correct event type 2020-01-07 09:38:14 -06:00
Michael Parker
6da533090a Update controller.go 2020-01-06 19:12:39 -06:00
Michael Parker
17efcaa6d1 update helm chart 2020-01-06 16:35:52 -06:00
Michael Parker
38dfda9d8f add event-webhook command line flag 2020-01-06 16:35:42 -06:00
stefanprodan
0abc254ef2 Add Contour TLS guide to docs 2020-01-06 16:29:04 +02:00
Stefan Prodan
db427b5e54 Merge pull request #400 from weaveworks/release-0.21.0
Release 0.21.0
2020-01-06 10:23:46 +00:00
stefanprodan
b49d63bdfe Update e2e tests to Linkerd 2.6.1 2020-01-06 12:02:53 +02:00
stefanprodan
c84f7addff Release 0.21.0 2020-01-06 11:43:48 +02:00
Stefan Prodan
5d72398925 Merge pull request #397 from weaveworks/contour
Add support for Contour ingress controller
2020-01-06 08:08:47 +00:00
stefanprodan
11d16468c9 Add Contour TLS guide link to docs 2019-12-29 13:36:55 +02:00
Stefan Prodan
82b61d69b7 Merge pull request #399 from int128/pod-monitor
Add PodMonitor template to flagger chart
2019-12-24 14:35:39 +02:00
Hidetake Iwata
824391321f Add PodMonitor template to flagger chart 2019-12-24 12:55:40 +09:00
stefanprodan
a7c242e437 Add user agent match examples to Contour docs 2019-12-20 18:26:18 +02:00
stefanprodan
1544610203 Add Contour e2e test for canary rollback 2019-12-20 14:38:06 +02:00
stefanprodan
14ca775ed9 Set Contour namespace in kustomization 2019-12-20 14:33:03 +02:00
stefanprodan
f1d29f5951 Set Contour idle timeout to 5m 2019-12-20 14:32:24 +02:00
stefanprodan
ad0a66ffcc Add Contour usage docs and diagrams 2019-12-20 11:47:44 +02:00
stefanprodan
4288fa261c Add Contour reference to docs 2019-12-20 11:47:00 +02:00
stefanprodan
a537637dc9 Add Flagger Kustomize installer for Contour 2019-12-20 11:46:23 +02:00
stefanprodan
851c6701b3 Add unit tests for Contour prefix, timeout and retries 2019-12-19 19:06:47 +02:00
stefanprodan
bb4591106a Add Contour URL prefix 2019-12-19 18:48:31 +02:00
stefanprodan
7641190ecb Add Contour timeout and retry policies 2019-12-19 18:27:35 +02:00
stefanprodan
02b579f128 Add unit tests for Contour routes 2019-12-19 15:30:53 +02:00
stefanprodan
9cf6b407f1 Add unit tests for Contour router reconciliation 2019-12-19 15:15:02 +02:00
stefanprodan
c3564176f8 Add unit tests for Contour observer 2019-12-19 12:41:39 +02:00
stefanprodan
ae9cf57fd5 Add e2e tests for Contour header routing 2019-12-19 12:22:57 +02:00
stefanprodan
ae63b01373 Implement Contour A/B testing 2019-12-19 12:02:20 +02:00
stefanprodan
c066a9163b Set HTTPProxy status on init 2019-12-19 09:58:32 +02:00
stefanprodan
38b04f2690 Add Contour canary e2e tests 2019-12-19 09:38:23 +02:00
stefanprodan
ee0e7b091a Implement Contour router for traffic shifting 2019-12-18 19:29:17 +02:00
stefanprodan
e922c3e9d9 Add Contour metrics 2019-12-18 19:29:17 +02:00
stefanprodan
2c31a4bf90 Add Contour CRD to Flagger RBAC 2019-12-18 19:29:17 +02:00
stefanprodan
7332e6b173 Add Contour HTTPProxy CRD and clientset 2019-12-18 19:29:17 +02:00
Stefan Prodan
968d67a7c3 Merge pull request #386 from mumoshu/envoy-canary-analysis
feat: Support for canary analysis on deployments and services behind Envoy
2019-12-18 19:22:18 +02:00
Yusuke Kuoka
266b957fc6 Fix CrossoverServiceObserver's ID 2019-12-18 22:11:21 +09:00
Yusuke Kuoka
357ef86c8b Differentiate AppMesh observer vs Crossover observer
To not break AppMesh integration.
2019-12-18 22:03:30 +09:00
Yusuke Kuoka
d75ade5e8c Fix envoy dashboard, scheduler, and envoy metrics provider to correctly pass canary analysis and show graphs 2019-12-18 10:55:49 +09:00
Yusuke Kuoka
806b95c8ce Do send http requests only to canary for canary analysis 2019-12-18 09:06:22 +09:00
Yusuke Kuoka
bf58cd763f Do use correct envoy metrics for canary analysis 2019-12-18 09:05:37 +09:00
Yusuke Kuoka
52856177e3 Fix trafficsplits api version for envoy+crossover 2019-12-18 09:03:41 +09:00
Yusuke Kuoka
58c3cebaac Fix the dashboard and the steps to browse it 2019-12-17 20:18:33 +09:00
Yusuke Kuoka
1e5d05c3fc Improve Envoy/Crossover installation experience with the chart registry 2019-12-17 17:02:50 +09:00
Yusuke Kuoka
020129bf5c Fix misconfiguration 2019-12-17 15:45:16 +09:00
Stefan Prodan
3ff0786e1f Merge pull request #394 from weaveworks/helm-tester-v3.0.1
Update Helm tester to Helm v3.0.1
2019-12-17 08:21:57 +02:00
stefanprodan
a60dc55dad Update Helm tester to Helm v3.0.1 2019-12-17 00:10:11 +02:00
Stefan Prodan
ff6acae544 Merge pull request #391 from weaveworks/appmesh-docs-fix
App Mesh docs fixes
2019-12-06 00:13:34 +07:00
stefanprodan
09b5295c85 Fix App Mesh gateway namespace 2019-12-05 23:39:13 +07:00
stefanprodan
9e423a6f71 Fix metrics-server install for EKS 2019-12-05 23:36:58 +07:00
Stefan Prodan
0ef05edf1e Merge pull request #390 from weaveworks/e2e-kube-1.16
Update e2e tests to Kubernetes v1.16
2019-12-05 18:06:39 +07:00
stefanprodan
a59901aaa9 Update e2e tests to Kubernetes 1.16 2019-12-04 15:35:36 +07:00
Stefan Prodan
53be3e07d2 Merge pull request #389 from weaveworks/release-0.20.4
Release 0.20.4
2019-12-03 14:56:40 +07:00
stefanprodan
2eb2ae52cd Release v0.20.4 2019-12-03 14:31:07 +07:00
stefanprodan
7bcc76eca0 Update Grafana to 6.5.1 2019-12-03 14:30:03 +07:00
Yusuke Kuoka
0d531e7bd1 Fix loadtester config in the envoy doc 2019-12-01 23:29:21 +09:00
Yusuke Kuoka
08851f83c7 Make envoy + crossover installation a bit more understandable 2019-12-01 23:25:29 +09:00
Stefan Prodan
295f5d7b39 Merge pull request #384 from weaveworks/svc-init
Add initialization phase to Kubernetes router
2019-12-01 10:08:18 +07:00
Yusuke Kuoka
a828524957 Add the guide for using Envoy and Crossover for Deployment targets
Ref #385
2019-11-30 13:03:01 +09:00
Yusuke Kuoka
6661406b75 Metrics provider for deployments and services behind Envoy
Assumes `envoy:smi` as the mesh provider name, since I've successfully tested progressive delivery for Envoy + Crossover with it.

This enhances Flagger to translate it to the metrics provider name of `envoy` for deployment targets, or `envoy:service` for service targets.

The `envoy` metrics provider is equivalent to `appmesh`, as both rely on the same set of standard metrics exposed by Envoy itself.

The `envoy:service` provider is almost the same as the `envoy` provider, but removes the condition on the pod name, as we only need to filter on the backing service name = envoy_cluster_name. We don't consider other Envoy xDS implementations that use anything different from the original service names as `envoy_cluster_name`, for now.

Ref #385
2019-11-30 13:03:01 +09:00
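A sketch of a Canary using the provider name from the commit above (hypothetical target); Flagger translates it internally to the `envoy` or `envoy:service` metrics provider depending on the target kind:

```
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
spec:
  # mesh provider name assumed from the commit above
  provider: "envoy:smi"
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
```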
stefanprodan
8766523279 Add initialization phase to Kubernetes router
Create Kubernetes services before deployments because Envoy's readiness depends on existing ClusterIPs
2019-11-27 22:15:04 +02:00
Stefan Prodan
b02a6da614 Merge pull request #383 from weaveworks/e2e-ups
Update nginx-ingress to 1.26.0
2019-11-27 18:51:27 +02:00
stefanprodan
89d7cb1b04 Update nginx-ingress to 1.26.0 2019-11-27 17:48:37 +02:00
Stefan Prodan
59d18de753 Merge pull request #372 from mumoshu/svc-support
feat: Canary-release anything behind K8s service
2019-11-27 16:44:56 +02:00
Yusuke Kuoka
e1d8703a15 Refactor to merge KubernetesServiceRouter into ServiceController
The current design is that everything related to managing the targeted resource should go into the respective implementation of `canary.Controller`. In the service-canary use-case our target is a Service, so rather than splitting and scattering the logic over Controller and Router, everything should naturally go into `ServiceController`. Maybe at the time of writing the first implementation, I was confusing the target service with the router.
2019-11-27 22:40:40 +09:00
Yusuke Kuoka
1ba595bc6f feat: Canary-release anything behind K8s service
Resolves #371

---

This adds the support for `corev1.Service` as the `targetRef.kind`, so that we can use Flagger just for canary analysis and traffic-shifting on existing and pre-created services. Flagger doesn't touch deployments and HPAs in this mode.

This is useful for keeping full control over the resources backing the service to be canary-released, including pods (behind a ClusterIP service) and external services (behind an ExternalName service).

Major use-cases in my mind are:

- Canary-release a K8s cluster. You create two clusters and a master cluster. In the master cluster, you create two `ExternalName` services, each pointing to (the hostname of the load balancer of the targeted app instance in) one of the clusters. Flagger runs on the master cluster and helps safely roll out a new K8s cluster by doing a canary release on the `ExternalName` service.
- You want annotations and labels added to the service for integrating with things like external LBs (without extending Flagger to support customizing any aspect of the K8s service it manages).

**Design**:

A canary release on a K8s service is almost the same as one on a K8s deployment. The only fundamental difference is that it operates only on a set of K8s services.

For example, one may start by creating two Helm releases for `podinfo-blue` and `podinfo-green`, and a K8s service `podinfo`. The `podinfo` service should initially have the same `Spec` as that of  `podinfo-blue`.

On a new release, you update `podinfo-green`, then trigger Flagger by updating the K8s service `podinfo` so that it points to the pods or `externalName` declared in `podinfo-green`. Flagger does the rest. The end result is that the traffic to `podinfo` is gradually and safely shifted from `podinfo-blue` to `podinfo-green`.

**How it works**:

Under the hood, Flagger maintains two K8s services, `podinfo-primary` and `podinfo-canary`. Compared to canaries on K8s deployments, it doesn't create the service named `podinfo`, as it is already provided by YOU.

Once Flagger detects the change in the `podinfo` service, it updates the `podinfo-canary` service and the routes, then analyzes the canary. On successful analysis, it promotes the canary service to the `podinfo-primary` service. You expose the `podinfo` service via any L7 ingress solution or a service mesh so that the traffic is managed by Flagger for safe deployments.

**Giving it a try**:

To give it a try, create a `Canary` as usual, but with its `targetRef` pointing to a K8s service:

```
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
spec:
  provider: kubernetes
  targetRef:
    apiVersion: core/v1
    kind: Service
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed checks before rollback
    threshold: 2
    # number of checks to run before rollback
    iterations: 2
    # Prometheus checks based on
    # http_request_duration_seconds histogram
    metrics: []
```

Create a K8s service named `podinfo`, and update it. Now watch for the services `podinfo`, `podinfo-primary`, `podinfo-canary`.

Flagger tracks the `podinfo` service for changes. Upon any change, it reconciles the `podinfo-primary` and `podinfo-canary` services. `podinfo-canary` always replicates the latest `podinfo`. In contrast, `podinfo-primary` replicates the latest successful `podinfo-canary`.

**Notes**:

- For the canary cluster use-case, we would need to write a K8s operator to, e.g. for App Mesh, sync `ExternalName` services to AppMesh `VirtualNode`s. But that's another story!
2019-11-27 09:07:29 +09:00
Stefan Prodan
446a2b976c Merge pull request #380 from weaveworks/skip-primary-check
Skip primary check on skip analysis
2019-11-26 14:25:57 +02:00
stefanprodan
9af6ade54d Skip primary check on skip analysis 2019-11-25 23:48:22 +02:00
Stefan Prodan
3fbe62aa47 Merge pull request #378 from weaveworks/refac-deployer
Refactor canary package
2019-11-25 21:03:16 +02:00
stefanprodan
4454c9b5b5 Add canary factory for Kubernetes targets
- extract Kubernetes operations to controller interface
- implement controller interface for kind Deployment
2019-11-25 18:45:19 +02:00
Stefan Prodan
c2cf9bf4b1 Merge pull request #373 from sfxworks/deployment-fix
Upgrade deployment spec to apps v1
2019-11-23 16:55:14 +00:00
Samuel Walker
3afc7978bd upgrade deployment spec to apps v1 2019-11-18 11:10:15 -05:00
stefanprodan
7a0ba8b477 Update v0.20.3 changelog 2019-11-13 14:06:14 +02:00
Stefan Prodan
0eb21a98a5 Merge pull request #368 from weaveworks/wrk
Add wrk to load tester tools
2019-11-13 13:59:28 +02:00
stefanprodan
2876092912 Update flagger-appmesh-gateway to 1.1.0 2019-11-13 13:07:59 +02:00
stefanprodan
3dbfa34a53 Add wrk to load tester tools
- add wrk v4.0.2
- update Helm v2 to 2.16.1
- update Helm v3 to 3.0.0-rc.3
2019-11-13 12:54:47 +02:00
Stefan Prodan
656f81787c Merge pull request #367 from andrew-demb/patch-1
Fixed readiness/liveness probe example in docs
2019-11-13 12:10:19 +02:00
Andrii Dembitskyi
920d558fde Fixed readiness/liveness probe example in docs 2019-11-13 09:24:12 +02:00
stefanprodan
638a9f1c93 Fix App Mesh gateway deployment 2019-11-12 13:18:45 +02:00
stefanprodan
f1c3ee7a82 Release v0.20.3 2019-11-11 19:14:05 +02:00
Stefan Prodan
878f106573 Merge pull request #365 from weaveworks/appmesh-gateway-chart
Add App Mesh gateway chart
2019-11-08 21:40:21 +02:00
stefanprodan
945eded6bf Add the App Mesh Gateway to docs 2019-11-08 21:02:51 +02:00
stefanprodan
f94f9c23d6 Patch cluster role bindings in kustomization 2019-11-08 12:40:14 +02:00
stefanprodan
527b73e8ef Use App Mesh Prometheus in kustomization 2019-11-08 12:39:45 +02:00
stefanprodan
d4555c5919 Use weaveworks logo in Helm charts 2019-11-08 12:38:47 +02:00
stefanprodan
560bb93e3d Add App Mesh gateway Helm chart 2019-11-08 12:38:06 +02:00
Stefan Prodan
e7fc72e6b5 Merge pull request #364 from weaveworks/release-0.20.2
Release v0.20.2
2019-11-07 12:08:18 +02:00
stefanprodan
4203232b05 Release v0.20.2 2019-11-07 11:34:25 +02:00
stefanprodan
a06aa05201 Add canary namespace to Linkerd webhooks example 2019-11-07 11:34:00 +02:00
Stefan Prodan
8e582e9b73 Merge pull request #363 from weaveworks/no-hpa
Use the specified replicas when scaling up the canary
2019-11-07 10:44:31 +02:00
stefanprodan
0e9fe8a446 Remove the traffic mention from the custom metrics error log
Fix: #361
2019-11-07 09:36:38 +02:00
stefanprodan
27b4bcc648 Use the specified replicas when scaling up the canary 2019-11-07 09:34:53 +02:00
Stefan Prodan
614b7c74c4 Merge pull request #358 from weaveworks/appmesh-gateway
Expose canaries on public domains with App Mesh Gateway
2019-11-06 13:21:20 +02:00
Stefan Prodan
5901129ec6 Merge pull request #359 from KeisukeYamashita/fix-typo-in-how-it-works
Fix typo in section "Webhook" of how-it-works.md
2019-11-06 13:20:53 +02:00
KeisukeYamashita
ded14345b4 doc(how-it-works): fix typo ca to can in how it works doc 2019-11-05 17:39:45 +09:00
stefanprodan
dd272c6870 Expose canaries on public domains with App Mesh Gateway
- map canary service hosts to domain gateway annotation
- map canary retries and timeout to gateway annotations
2019-11-04 18:26:28 +02:00
Stefan Prodan
b31c7c6230 Merge pull request #356 from weaveworks/docs-cleanup
Docs cleanup
2019-11-04 00:52:47 +02:00
stefanprodan
b0297213c3 Use kustomize in Istio docs 2019-11-04 00:35:28 +02:00
stefanprodan
d0fba2d111 Update Istio SMI tutorial 2019-11-04 00:13:19 +02:00
stefanprodan
9924cc2152 Update NGINX usage docs 2019-11-04 00:12:51 +02:00
Stefan Prodan
008a74f86c Merge pull request #354 from weaveworks/prep-0.20.1
Release v0.20.1
2019-11-03 12:29:14 +02:00
stefanprodan
4ca110292f Add v0.20.1 changelog 2019-11-03 11:57:58 +02:00
stefanprodan
55b4c19670 Release v0.20.1 2019-11-03 11:47:16 +02:00
stefanprodan
8349dd1cda Release load tester v0.11.0
- tools updates: Helm v2.15.1, Helm v3.0.0-rc.2, rimusz helm-tiller v0.9.3, gRPC probe v0.3.1
- add hey test during build
2019-11-03 11:46:18 +02:00
Stefan Prodan
402fb66b2a Merge pull request #353 from weaveworks/fix-promql
Fix Prometheus query escape
2019-11-03 11:04:43 +02:00
stefanprodan
f991274b97 Fix Prometheus query escape
Remove whitespace without trimming spaces
2019-11-03 00:01:32 +02:00
Stefan Prodan
0d94a49b6a Merge pull request #350 from laszlocph/update-hey-link
Updating hey release link
2019-10-30 09:01:56 +02:00
Laszlo Fogas
7c14225442 Updating hey release link 2019-10-30 06:40:57 +01:00
stefanprodan
2af0a050bc Fix Prometheus URL in EKS install docs 2019-10-29 18:32:15 +02:00
Stefan Prodan
582f8d6abd Merge pull request #346 from weaveworks/e2e-up
e2e testing: update providers
2019-10-28 16:26:06 +02:00
stefanprodan
eeea3123ac Update e2e NGINX ingress to v1.24.4 2019-10-28 16:08:00 +02:00
stefanprodan
51fe43e169 Update e2e Helm to v2.15.1 2019-10-28 15:32:02 +02:00
stefanprodan
6e6b127092 Update loadtester Helm to v3.0.0-beta.5 2019-10-28 15:31:17 +02:00
stefanprodan
c9bacdfe05 Update Istio to v1.3.3 2019-10-28 15:19:17 +02:00
stefanprodan
f56a69770c Update Linkerd to v2.6.0 2019-10-28 14:42:16 +02:00
Stefan Prodan
0196124c9f Merge pull request #343 from weaveworks/prep-0.20.0
Release v0.20.0
2019-10-22 19:11:59 +03:00
stefanprodan
63756d9d5f Add changelog for v0.20.0 2019-10-22 17:54:18 +03:00
stefanprodan
8e346960ac Add blue/green service mesh docs 2019-10-22 16:57:49 +03:00
stefanprodan
1b485b3459 Release v0.20.0 2019-10-22 09:39:14 +03:00
Stefan Prodan
ee05108279 Merge pull request #344 from weaveworks/gloo-refactoring
Gloo integration refactoring
2019-10-22 09:38:19 +03:00
stefanprodan
dfaa039c9c Update Gloo docs 2019-10-22 00:48:15 +03:00
stefanprodan
46579d2ee6 Refactor Gloo integration
- build Gloo UpstreamGroup clientset
- drop solo-io, envoyproxy, hcl, consul, opencensus, apiextensions deps
- use the native routers with supergloo
2019-10-21 16:33:47 +03:00
Stefan Prodan
f372523fb8 Merge pull request #342 from weaveworks/prom-config
Implement metrics server override
2019-10-17 17:24:24 +03:00
stefanprodan
5e434df6ea Exclude high cardinality cAdvisor metrics 2019-10-17 13:02:18 +03:00
stefanprodan
d6c5bdd241 Implement metrics server override 2019-10-17 11:37:54 +03:00
stefanprodan
cdcd97244c Add the metrics server field to CRD 2019-10-17 11:36:25 +03:00
Stefan Prodan
60c4bba263 Merge pull request #340 from weaveworks/appmesh-ab-testing
Implement App Mesh A/B testing
2019-10-17 10:54:31 +03:00
stefanprodan
2b73bc5e38 Fix A/B testing examples 2019-10-17 09:12:39 +03:00
stefanprodan
03652dc631 Add App Mesh http match headers tests 2019-10-16 15:43:26 +03:00
stefanprodan
00155aff37 Add App Mesh A/B testing example to docs 2019-10-16 10:49:33 +03:00
stefanprodan
206c3e6d7a Implement App Mesh A/B testing 2019-10-15 16:39:54 +03:00
Stefan Prodan
8345fea812 Merge pull request #338 from weaveworks/appmesh-up
Implement App Mesh HTTP retry policy
2019-10-15 08:45:49 +03:00
stefanprodan
c11dba1e05 Add retry policy to docs and examples 2019-10-14 21:03:57 +03:00
stefanprodan
7d4c3c5814 Implement App Mesh HTTP retry policy 2019-10-14 20:27:48 +03:00
stefanprodan
9b36794c9d Update App Mesh CRD 2019-10-14 20:26:46 +03:00
Stefan Prodan
1f34c656e9 Merge pull request #336 from weaveworks/appmesh-router-fix
Generate unique names for App Mesh virtual routers and routes
2019-10-14 19:25:08 +03:00
stefanprodan
9982dc9c83 Generate unique names for App Mesh virtual routers and routes 2019-10-14 19:07:10 +03:00
Stefan Prodan
780f3d2ab9 Merge pull request #334 from weaveworks/env-vars
Allow setting Slack and Teams URLs with env vars
2019-10-10 09:05:04 +03:00
stefanprodan
1cb09890fb Add env to chart options to be used for Slack and Teams URLs 2019-10-09 16:53:34 +03:00
stefanprodan
faae6a7c3b Add env vars for Slack and Teams URLs 2019-10-09 16:03:30 +03:00
Stefan Prodan
d4250f3248 Merge pull request #333 from weaveworks/default-labels
Add the app/name label to services and primary deployment
2019-10-09 13:45:14 +03:00
stefanprodan
a8ee477b62 Add selector labels option to Helm chart 2019-10-09 13:22:10 +03:00
stefanprodan
673b6102a7 Add the name label to ClusterIP services and primary deployment 2019-10-09 13:01:15 +03:00
494 changed files with 45143 additions and 14206 deletions


@@ -3,7 +3,7 @@ jobs:
build-binary:
docker:
- image: circleci/golang:1.13
- image: circleci/golang:1.14
working_directory: ~/build
steps:
- checkout
@@ -11,8 +11,11 @@ jobs:
keys:
- go-mod-v3-{{ checksum "go.sum" }}
- run:
name: Run go fmt
command: make test-fmt
name: Run go mod download
command: go mod download
- run:
name: Check code formatting
command: go install golang.org/x/tools/cmd/goimports && make test-fmt
- run:
name: Build Flagger
command: |
@@ -44,7 +47,7 @@ jobs:
push-container:
docker:
- image: circleci/golang:1.13
- image: circleci/golang:1.14
steps:
- checkout
- setup_remote_docker:
@@ -56,7 +59,7 @@ jobs:
push-binary:
docker:
- image: circleci/golang:1.13
- image: circleci/golang:1.14
working_directory: ~/build
steps:
- checkout
@@ -65,19 +68,10 @@ jobs:
- restore_cache:
keys:
- go-mod-v3-{{ checksum "go.sum" }}
- run: make release-notes
- run: github-release-notes -org weaveworks -repo flagger -since-latest-release -include-author > /tmp/release.txt
- run: test/goreleaser.sh
e2e-istio-testing:
machine: true
steps:
- checkout
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh
- run: test/e2e-istio.sh
- run: test/e2e-tests.sh
e2e-kubernetes-testing:
machine: true
steps:
@@ -85,31 +79,27 @@ jobs:
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh
- run: test/e2e-kind.sh v1.18.2
- run: test/e2e-kubernetes.sh
- run: test/e2e-kubernetes-tests.sh
- run: test/e2e-kubernetes-tests-deployment.sh
- run: test/e2e-kubernetes-cleanup.sh
- run: test/e2e-kubernetes-tests-daemonset.sh
e2e-smi-istio-testing:
e2e-istio-testing:
machine: true
steps:
- checkout
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh
- run: test/e2e-smi-istio.sh
- run: test/e2e-tests.sh canary
e2e-supergloo-testing:
machine: true
steps:
- checkout
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh 0.2.1
- run: test/e2e-supergloo.sh
- run: test/e2e-tests.sh canary
- run: test/e2e-kind.sh v1.18.2
- run: test/e2e-istio.sh
- run: test/e2e-istio-dependencies.sh
- run: test/e2e-istio-tests.sh
- run: test/e2e-istio-tests-skip-analysis.sh
- run: test/e2e-kubernetes-cleanup.sh
- run: test/e2e-istio-dependencies.sh
- run: test/e2e-istio-tests-delegate.sh
e2e-gloo-testing:
machine: true
@@ -147,9 +137,44 @@ jobs:
- run: test/e2e-linkerd.sh
- run: test/e2e-linkerd-tests.sh
e2e-contour-testing:
machine: true
steps:
- checkout
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh
- run: test/e2e-contour.sh
- run: test/e2e-contour-tests.sh
e2e-skipper-testing:
machine: true
steps:
- checkout
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh
- run: test/e2e-skipper.sh
- run: test/e2e-skipper-tests.sh
- run: test/e2e-skipper-cleanup.sh
e2e-traefik-testing:
machine: true
steps:
- checkout
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh
- run: test/e2e-traefik.sh
- run: test/e2e-traefik-tests.sh
- run: test/e2e-skipper-cleanup.sh
push-helm-charts:
docker:
- image: circleci/golang:1.13
- image: circleci/golang:1.14
steps:
- checkout
- run:
@@ -173,7 +198,7 @@ jobs:
- run:
name: Publish charts
command: |
if echo "${CIRCLE_TAG}" | grep -Eq "[0-9]+(\.[0-9]+)*(-[a-z]+)?$"; then
if echo "${CIRCLE_TAG}" | grep v; then
REPOSITORY="https://weaveworksbot:${GITHUB_TOKEN}@github.com/weaveworks/flagger.git"
git config user.email weaveworksbot@users.noreply.github.com
git config user.name weaveworksbot
@@ -197,15 +222,13 @@ workflows:
branches:
ignore:
- gh-pages
- e2e-istio-testing:
requires:
- build-binary
- /^user-.*/
- e2e-kubernetes-testing:
requires:
- build-binary
# - e2e-supergloo-testing:
# requires:
# - build-binary
- e2e-istio-testing:
requires:
- build-binary
- e2e-gloo-testing:
requires:
- build-binary
@@ -215,15 +238,29 @@ workflows:
- e2e-linkerd-testing:
requires:
- build-binary
- e2e-contour-testing:
requires:
- build-binary
- e2e-skipper-testing:
requires:
- build-binary
- e2e-traefik-testing:
requires:
- build-binary
- push-container:
requires:
- build-binary
- e2e-istio-testing
- e2e-kubernetes-testing
#- e2e-supergloo-testing
- e2e-istio-testing
- e2e-gloo-testing
- e2e-nginx-testing
- e2e-linkerd-testing
- e2e-skipper-testing
- e2e-traefik-testing
filters:
branches:
only:
- master
release:
jobs:
@@ -256,4 +293,4 @@ workflows:
branches:
ignore: /.*/
tags:
ignore: /^chart.*/
ignore: /^chart.*/


@@ -8,4 +8,7 @@ coverage:
patch: off
comment:
require_changes: yes
require_changes: true
branches:
- "!docs"
- "!release"


@@ -1 +1,15 @@
root: ./docs/gitbook
root: ./docs/gitbook
redirects:
how-it-works: usage/how-it-works.md
usage/progressive-delivery: tutorials/istio-progressive-delivery.md
usage/ab-testing: tutorials/istio-ab-testing.md
usage/blue-green: tutorials/kubernetes-blue-green.md
usage/appmesh-progressive-delivery: tutorials/appmesh-progressive-delivery.md
usage/linkerd-progressive-delivery: tutorials/linkerd-progressive-delivery.md
usage/contour-progressive-delivery: tutorials/contour-progressive-delivery.md
usage/gloo-progressive-delivery: tutorials/gloo-progressive-delivery.md
usage/nginx-progressive-delivery: tutorials/nginx-progressive-delivery.md
usage/skipper-progressive-delivery: tutorials/skipper-progressive-delivery.md
usage/crossover-progressive-delivery: tutorials/crossover-progressive-delivery.md
usage/traefik-progressive-delivery: tutorials/traefik-progressive-delivery.md

.gitignore vendored

@@ -16,4 +16,7 @@ bin/
_tmp/
artifacts/gcloud/
.idea
.idea
Makefile.dev
vendor


@@ -2,6 +2,440 @@
All notable changes to this project are documented in this file.
## 1.4.2 (2020-12-09)
Fix Istio virtual service delegation
#### Improvements
- Add Prometheus basic-auth config to docs
[#746](https://github.com/weaveworks/flagger/pull/746)
- Update Prometheus to 2.23.0 and Grafana to 7.3.4
[#747](https://github.com/weaveworks/flagger/pull/747)
#### Fixes
- Fix for VirtualService delegation when analysis is enabled
[#745](https://github.com/weaveworks/flagger/pull/745)
## 1.4.1 (2020-12-08)
Prevent primary ConfigMaps and Secrets from being pruned by Flux
#### Improvements
- Apply label prefix rules for ConfigMaps and Secrets
[#743](https://github.com/weaveworks/flagger/pull/743)
## 1.4.0 (2020-12-07)
Add support for Traefik ingress controller
#### Features
- Add Traefik support for progressive traffic shifting with `TraefikService`
[#736](https://github.com/weaveworks/flagger/pull/736)
- Add support for HPA v2beta2 behaviors
[#740](https://github.com/weaveworks/flagger/pull/740)
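As a sketch, the HPA v2beta2 behaviors now supported on canary autoscaler targets look like this (hypothetical values, standard Kubernetes API):

```
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 4
  behavior:
    # limit how fast replicas are removed
    scaleDown:
      stabilizationWindowSeconds: 300
```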
## 1.3.0 (2020-11-23)
Add support for custom weights when configuring traffic shifting
#### Features
- Support AWS App Mesh backends ARN
[#715](https://github.com/weaveworks/flagger/pull/715)
- Add support for Istio VirtualService delegation
[#715](https://github.com/weaveworks/flagger/pull/715)
- Copy labels from canary to primary workloads based on prefix rules
[#709](https://github.com/weaveworks/flagger/pull/709)
#### Improvements
- Add QPS and Burst configs for kubernetes client
[#725](https://github.com/weaveworks/flagger/pull/725)
- Update Istio to v1.8.0
[#733](https://github.com/weaveworks/flagger/pull/733)
## 1.2.0 (2020-09-29)
Add support for New Relic metrics
#### Features
- Add New Relic as a metrics provider
[#691](https://github.com/weaveworks/flagger/pull/691)
#### Improvements
- Derive the label selector value from the target matchLabel
[#685](https://github.com/weaveworks/flagger/pull/685)
- Preserve Skipper predicates
[#681](https://github.com/weaveworks/flagger/pull/681)
#### Fixes
- Do not promote when not ready on skip analysis
[#695](https://github.com/weaveworks/flagger/pull/695)
## 1.1.0 (2020-08-18)
Add support for Skipper ingress controller
#### Features
- Skipper Ingress Controller support
[#670](https://github.com/weaveworks/flagger/pull/670)
- Support per-config configTracker disable via ConfigMap/Secret annotation
[#671](https://github.com/weaveworks/flagger/pull/671)
#### Improvements
- Add priorityClassName and securityContext to Helm charts
[#652](https://github.com/weaveworks/flagger/pull/652)
[#668](https://github.com/weaveworks/flagger/pull/668)
- Update Kubernetes packages to v1.18.8
[#672](https://github.com/weaveworks/flagger/pull/672)
- Update Istio, Linkerd and Contour e2e tests
[#661](https://github.com/weaveworks/flagger/pull/661)
#### Fixes
- Fix O(log n) bug over network in GetTargetConfigs
[#663](https://github.com/weaveworks/flagger/pull/663)
- Fix(grafana): metrics change since Kubernetes 1.16
[#663](https://github.com/weaveworks/flagger/pull/663)
## 1.0.1 (2020-07-18)
Add support for App Mesh Gateway GA
#### Improvements
- Update App Mesh docs to v1beta2 API
[#649](https://github.com/weaveworks/flagger/pull/649)
- Add threadiness to Flagger helm chart
[#643](https://github.com/weaveworks/flagger/pull/643)
- Add Istio virtual service to loadtester helm chart
[#643](https://github.com/weaveworks/flagger/pull/643)
#### Fixes
- Fix multiple paths per rule on canary ingress
[#632](https://github.com/weaveworks/flagger/pull/632)
- Fix installers for kustomize >= 3.6.0
[#646](https://github.com/weaveworks/flagger/pull/646)
## 1.0.0 (2020-06-17)
This is the GA release for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
The analysis can reference [metric templates](https://docs.flagger.app/usage/metrics#custom-metrics)
to query Prometheus, Datadog and AWS CloudWatch.
[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
canary basis for Slack, MS Teams, Discord and Rocket.
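For illustration only, a minimal metric template and its reference from a canary analysis might look like this (hypothetical query and names):

```
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: not-found-percentage
  namespace: test
spec:
  provider:
    type: prometheus
    address: http://prometheus.monitoring:9090
  query: |
    sum(rate(http_requests_total{status="404"}[{{ interval }}])) /
    sum(rate(http_requests_total[{{ interval }}])) * 100
```

and, referenced from the canary analysis:

```
  analysis:
    metrics:
      - name: "404s percentage"
        templateRef:
          name: not-found-percentage
          namespace: test
        thresholdRange:
          max: 5
        interval: 1m
```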
#### Features
- Implement progressive promotion
[#593](https://github.com/weaveworks/flagger/pull/593)
#### Improvements
- istio: Add source labels to analysis matching rules
[#594](https://github.com/weaveworks/flagger/pull/594)
- istio: Add allow origins field to CORS spec
[#604](https://github.com/weaveworks/flagger/pull/604)
- istio: Change builtin metrics to work with Istio telemetry v2
[#623](https://github.com/weaveworks/flagger/pull/623)
- appmesh: Implement App Mesh v1beta2 timeout
[#611](https://github.com/weaveworks/flagger/pull/611)
- metrics: Check metrics server availability during canary initialization
[#592](https://github.com/weaveworks/flagger/pull/592)
## 1.0.0-rc.5 (2020-05-14)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
#### Features
- Add support for AWS AppMesh v1beta2 API
[#584](https://github.com/weaveworks/flagger/pull/584)
- Add support for Contour v1.4 ingress class
[#588](https://github.com/weaveworks/flagger/pull/588)
- Add user-specified labels/annotations to the generated Services
[#538](https://github.com/weaveworks/flagger/pull/538)
#### Improvements
- Support compatible Prometheus service
[#557](https://github.com/weaveworks/flagger/pull/557)
- Update e2e tests and packages to Kubernetes v1.18
[#549](https://github.com/weaveworks/flagger/pull/549)
[#576](https://github.com/weaveworks/flagger/pull/576)
#### Fixes
- pkg/controller: retry canary initialization on conflict
[#586](https://github.com/weaveworks/flagger/pull/586)
## 1.0.0-rc.4 (2020-04-03)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
**Breaking change**: the minimum supported version of Kubernetes is v1.14.0.
#### Features
- Implement NGINX Ingress header regex matching
[#546](https://github.com/weaveworks/flagger/pull/546)
#### Improvements
- pkg/router: update ingress API to networking.k8s.io/v1beta1
[#534](https://github.com/weaveworks/flagger/pull/534)
- loadtester: add return cmd output option
[#535](https://github.com/weaveworks/flagger/pull/535)
- refactoring: finalizer error handling and unit testing
[#531](https://github.com/weaveworks/flagger/pull/531)
[#530](https://github.com/weaveworks/flagger/pull/530)
- chart: add finalizers to RBAC rules for OpenShift
[#537](https://github.com/weaveworks/flagger/pull/537)
- chart: allow security context to be disabled on OpenShift
[#543](https://github.com/weaveworks/flagger/pull/543)
- chart: add annotations for service account
[#521](https://github.com/weaveworks/flagger/pull/521)
- docs: Add Prometheus Operator tutorial
[#524](https://github.com/weaveworks/flagger/pull/524)
#### Fixes
- pkg/controller: avoid status conflicts on initialization
[#544](https://github.com/weaveworks/flagger/pull/544)
- pkg/canary: fix status retry
[#541](https://github.com/weaveworks/flagger/pull/541)
- loadtester: fix timeout errors
[#539](https://github.com/weaveworks/flagger/pull/539)
- pkg/canary/daemonset: fix readiness check
[#529](https://github.com/weaveworks/flagger/pull/529)
- logs: reduce log verbosity and fix typos
[#540](https://github.com/weaveworks/flagger/pull/540)
[#526](https://github.com/weaveworks/flagger/pull/526)
## 1.0.0-rc.3 (2020-03-23)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
#### Features
- Add opt-in finalizers to revert Flagger's mutations on deletion of a canary
[#495](https://github.com/weaveworks/flagger/pull/495)
#### Improvements
- e2e: update end-to-end tests to Contour 1.3.0 and Gloo 1.3.14
[#519](https://github.com/weaveworks/flagger/pull/519)
- build: update Kubernetes packages to 1.17.4
[#516](https://github.com/weaveworks/flagger/pull/516)
#### Fixes
- Preserve node ports on service reconciliation
[#514](https://github.com/weaveworks/flagger/pull/514)
## 1.0.0-rc.2 (2020-03-19)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
#### Features
- Make mirror percentage configurable when using Istio traffic shadowing
[#492](https://github.com/weaveworks/flagger/pull/492)
- Add support for running Concord tests with loadtester webhooks
[#507](https://github.com/weaveworks/flagger/pull/507)
#### Improvements
- docs: add Istio telemetry v2 upgrade guide
[#486](https://github.com/weaveworks/flagger/pull/486),
update A/B testing tutorial for Istio 1.5
[#502](https://github.com/weaveworks/flagger/pull/502),
add how to retry a failed release to FAQ
[#494](https://github.com/weaveworks/flagger/pull/494)
- e2e: update end-to-end tests to
Istio 1.5 [#479](https://github.com/weaveworks/flagger/pull/479) and
NGINX Ingress 0.30
[#489](https://github.com/weaveworks/flagger/pull/489)
[#511](https://github.com/weaveworks/flagger/pull/511)
- refactoring:
error handling [#480](https://github.com/weaveworks/flagger/pull/480),
scheduler [#484](https://github.com/weaveworks/flagger/pull/484) and
unit tests [#475](https://github.com/weaveworks/flagger/pull/475)
- chart: add the log level configuration to Flagger helm chart
[#506](https://github.com/weaveworks/flagger/pull/506)
#### Fixes
- Fix nil pointer for the global notifiers [#504](https://github.com/weaveworks/flagger/pull/504)
## 1.0.0-rc.1 (2020-03-03)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
The analysis can reference [metric templates](https://docs.flagger.app/usage/metrics#custom-metrics)
to query Prometheus, Datadog and AWS CloudWatch.
[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
canary basis for Slack, MS Teams, Discord and Rocket.
#### Features
- Implement metric templates for Prometheus [#419](https://github.com/weaveworks/flagger/pull/419),
Datadog [#460](https://github.com/weaveworks/flagger/pull/460) and
CloudWatch [#464](https://github.com/weaveworks/flagger/pull/464)
- Implement metric range validation [#424](https://github.com/weaveworks/flagger/pull/424)
- Add support for targeting DaemonSets [#455](https://github.com/weaveworks/flagger/pull/455)
- Implement canary alerts and alert providers (Slack, MS Teams, Discord and Rocket)
[#429](https://github.com/weaveworks/flagger/pull/429)
#### Improvements
- Add support for Istio multi-cluster
[#447](https://github.com/weaveworks/flagger/pull/447) [#450](https://github.com/weaveworks/flagger/pull/450)
- Extend Istio traffic policy [#441](https://github.com/weaveworks/flagger/pull/441),
add support for header operations [#442](https://github.com/weaveworks/flagger/pull/442) and
set ingress destination port when multiple ports are discovered [#436](https://github.com/weaveworks/flagger/pull/436)
- Add support for rollback gating [#449](https://github.com/weaveworks/flagger/pull/449)
- Allow disabling ConfigMaps and Secrets tracking [#425](https://github.com/weaveworks/flagger/pull/425)
#### Fixes
- Fix spec changes detection [#446](https://github.com/weaveworks/flagger/pull/446)
- Track projected ConfigMaps and Secrets [#433](https://github.com/weaveworks/flagger/pull/433)
## 0.23.0 (2020-02-06)
Adds support for service name configuration and rollback webhook
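A hedged sketch of how the two features surface in the canary spec (field names per the current docs; the gate URL is illustrative):
```yaml
spec:
  service:
    # override the generated service name (defaults to targetRef.name)
    name: podinfo-svc
    port: 9898
  analysis:
    webhooks:
      # rollback gate: while this endpoint returns HTTP 200,
      # Flagger rolls the canary back regardless of the metric checks
      - name: "rollback gate"
        type: rollback
        url: http://flagger-loadtester.test/rollback/check
```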
#### Features
- Implement service name override [#416](https://github.com/weaveworks/flagger/pull/416)
- Add support for gated rollback [#420](https://github.com/weaveworks/flagger/pull/420)
## 0.22.0 (2020-01-16)
Adds event dispatching through webhooks
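A minimal sketch of an event webhook (assuming the `event` webhook type described in the docs; the receiver URL is a placeholder). Flagger POSTs a JSON payload with the canary name, namespace, phase and event message to this endpoint:
```yaml
  analysis:
    webhooks:
      - name: "send events to audit sink"
        type: event
        url: http://event-receiver.audit/flagger
        timeout: 5s
```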
#### Features
- Implement event dispatching webhook [#409](https://github.com/weaveworks/flagger/pull/409)
- Add general purpose event webhook [#401](https://github.com/weaveworks/flagger/pull/401)
#### Improvements
- Update Contour to v1.1 and add Linkerd header [#411](https://github.com/weaveworks/flagger/pull/411)
- Update Istio e2e to v1.4.3 [#407](https://github.com/weaveworks/flagger/pull/407)
- Update Kubernetes packages to 1.17 [#406](https://github.com/weaveworks/flagger/pull/406)
## 0.21.0 (2020-01-06)
Adds support for Contour ingress controller
#### Features
- Add support for Contour ingress controller [#397](https://github.com/weaveworks/flagger/pull/397)
- Add support for Envoy managed by Crossover via SMI [#386](https://github.com/weaveworks/flagger/pull/386)
- Extend canary target ref to Kubernetes Service kind [#372](https://github.com/weaveworks/flagger/pull/372)
#### Improvements
- Add Prometheus operator PodMonitor template to Helm chart [#399](https://github.com/weaveworks/flagger/pull/399)
- Update e2e tests to Kubernetes v1.16 [#390](https://github.com/weaveworks/flagger/pull/390)
## 0.20.4 (2019-12-03)
Adds support for taking over a running deployment without disruption
#### Improvements
- Add initialization phase to Kubernetes router [#384](https://github.com/weaveworks/flagger/pull/384)
- Add canary controller interface and Kubernetes deployment kind implementation [#378](https://github.com/weaveworks/flagger/pull/378)
#### Fixes
- Skip primary check on skip analysis [#380](https://github.com/weaveworks/flagger/pull/380)
## 0.20.3 (2019-11-13)
Adds wrk to load tester tools and the App Mesh gateway chart to Flagger Helm repository
#### Improvements
- Add wrk to load tester tools [#368](https://github.com/weaveworks/flagger/pull/368)
- Add App Mesh gateway chart [#365](https://github.com/weaveworks/flagger/pull/365)
## 0.20.2 (2019-11-07)
Adds support for exposing canaries outside the cluster using App Mesh Gateway annotations
#### Improvements
- Expose canaries on public domains with App Mesh Gateway [#358](https://github.com/weaveworks/flagger/pull/358)
#### Fixes
- Use the specified replicas when scaling up the canary [#363](https://github.com/weaveworks/flagger/pull/363)
## 0.20.1 (2019-11-03)
Fixes promql execution and updates the load testing tools
#### Improvements
- Update load tester Helm tools [#8349dd1](https://github.com/weaveworks/flagger/commit/8349dd1cda59a741c7bed9a0f67c0fc0fbff4635)
- e2e testing: update providers [#346](https://github.com/weaveworks/flagger/pull/346)
#### Fixes
- Fix Prometheus query escape [#353](https://github.com/weaveworks/flagger/pull/353)
- Updating hey release link [#350](https://github.com/weaveworks/flagger/pull/350)
## 0.20.0 (2019-10-21)
Adds support for [A/B Testing](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring)
and retry policies when using App Mesh
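Roughly, the new options end up in the canary spec like this (a sketch using the current field names and mirroring the App Mesh examples later in this diff; the header name and value are placeholders):
```yaml
spec:
  service:
    port: 80
    meshName: global
    # App Mesh HTTP retry policy
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: "gateway-error,client-error,stream-error"
  analysis:
    # A/B testing: a fixed number of iterations instead of weight stepping
    iterations: 10
    # only requests matching these headers are routed to the canary
    match:
      - headers:
          x-canary:
            exact: "insider"
```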
#### Features
- Implement App Mesh A/B testing based on HTTP headers match conditions [#340](https://github.com/weaveworks/flagger/pull/340)
- Implement App Mesh HTTP retry policy [#338](https://github.com/weaveworks/flagger/pull/338)
- Implement metrics server override [#342](https://github.com/weaveworks/flagger/pull/342)
#### Improvements
- Add the app/name label to services and primary deployment [#333](https://github.com/weaveworks/flagger/pull/333)
- Allow setting Slack and Teams URLs with env vars [#334](https://github.com/weaveworks/flagger/pull/334)
- Refactor Gloo integration [#344](https://github.com/weaveworks/flagger/pull/344)
#### Fixes
- Generate unique names for App Mesh virtual routers and routes [#336](https://github.com/weaveworks/flagger/pull/336)
## 0.19.0 (2019-10-08)
Adds support for canary and blue/green [traffic mirroring](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring)
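Mirroring is enabled in the analysis section; a minimal sketch using the current field names (at the time of this release the section was still called `canaryAnalysis`):
```yaml
  analysis:
    interval: 1m
    threshold: 5
    # Blue/Green iterations
    iterations: 10
    # copy live traffic to the canary while responses
    # are still served to users by the primary
    mirror: true
```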
#### Improvements
- Allow gRPC protocol for App Mesh [#325](https://github.com/weaveworks/flagger/pull/325)
- Enforce blue/green when using Kubernetes networking [#326](https://github.com/weaveworks/flagger/pull/326)
#### Fixes
- Fix port discovery diff [#324](https://github.com/weaveworks/flagger/pull/324)
- Helm chart: Enable Prometheus scraping of Flagger metrics
[#2141d88](https://github.com/weaveworks/flagger/commit/2141d88ce1cc6be220dab34171c215a334ecde24)
## 0.18.6 (2019-10-03)
@@ -37,7 +472,8 @@ Adds support for App Mesh conformance tests and latency metric checks
## 0.18.5 (2019-10-02)
Adds support for [confirm-promotion](https://docs.flagger.app/how-it-works#webhooks)
webhooks and blue/green deployments when using a service mesh
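A hedged sketch of a confirm-promotion gate (the loadtester gate URL is illustrative): Flagger holds the promotion until the webhook returns a successful HTTP status code.
```yaml
    webhooks:
      - name: "promotion gate"
        type: confirm-promotion
        url: http://flagger-loadtester.test/gate/check
```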
#### Features
@@ -62,7 +498,7 @@ Adds support for NGINX custom annotations and Helm v3 acceptance testing
- Add annotations prefix for NGINX ingresses [#293](https://github.com/weaveworks/flagger/pull/293)
- Add wide columns in CRD [#289](https://github.com/weaveworks/flagger/pull/289)
- loadtester: implement Helm v3 test command [#296](https://github.com/weaveworks/flagger/pull/296)
- loadtester: add gRPC health check to load tester image [#295](https://github.com/weaveworks/flagger/pull/295)
#### Fixes
@@ -122,8 +558,10 @@ Adds support for [manual gating](https://docs.flagger.app/how-it-works#manual-ga
#### Breaking changes
- Due to the status sub-resource changes in [#240](https://github.com/weaveworks/flagger/pull/240),
when upgrading Flagger the canaries status phase will be reset to `Initialized`
- Upgrading Flagger with Helm will fail due to Helm's poor support for CRDs,
see [workaround](https://github.com/weaveworks/flagger/issues/223)
## 0.17.0 (2019-07-08)
@@ -137,12 +575,14 @@ Adds support for Linkerd (SMI Traffic Split API), MS Teams notifications and HA
#### Improvements
- Add [Kustomize](https://docs.flagger.app/install/flagger-install-on-kubernetes#install-flagger-with-kustomize)
installer [#232](https://github.com/weaveworks/flagger/pull/232)
- Add Pod Security Policy to Helm chart [#234](https://github.com/weaveworks/flagger/pull/234)
## 0.16.0 (2019-06-23)
Adds support for running [Blue/Green deployments](https://docs.flagger.app/usage/blue-green)
without a service mesh or ingress controller
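A minimal sketch of a Blue/Green canary on plain Kubernetes networking, written with the current `flagger.app/v1beta1` field names (at the time of this release the API was `v1alpha3` and the analysis section was `canaryAnalysis`):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # plain ClusterIP services, no mesh or ingress controller required
  provider: kubernetes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 30s
    threshold: 2
    # number of checks before the new version is promoted
    iterations: 10
    webhooks:
      - name: smoke-test
        type: pre-rollout
        url: http://flagger-loadtester.test/
        timeout: 30s
        metadata:
          type: bash
          cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
      - name: load-test
        type: rollout
        url: http://flagger-loadtester.test/
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```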
#### Features
@@ -174,7 +614,8 @@ Adds support for customising the Istio [traffic policy](https://docs.flagger.app
## 0.14.1 (2019-06-05)
Adds support for running [acceptance/integration tests](https://docs.flagger.app/how-it-works#integration-testing)
with Helm test or Bash Bats using pre-rollout hooks
#### Features
@@ -221,7 +662,8 @@ Adds support for [NGINX](https://docs.flagger.app/usage/nginx-progressive-delive
#### Features
- Add support for nginx ingress controller (weighted traffic and A/B testing) [#170](https://github.com/weaveworks/flagger/pull/170)
- Add Prometheus add-on to Flagger Helm chart for App Mesh and
NGINX [79b3370](https://github.com/weaveworks/flagger/pull/170/commits/79b337089294a92961bc8446fd185b38c50a32df)
#### Fixes
@@ -467,4 +909,4 @@ Initial semver release
- Add OpenAPI v3 schema validation to Canary CRD
- Use CRD status for canary state persistence
- Add Helm charts for Flagger and Grafana
- Add canary analysis Grafana dashboard

View File

@@ -17,12 +17,12 @@ contribution.
## Chat
The project uses Slack: To join the conversation, simply join the
[Weave community](https://slack.weave.works/) Slack workspace, then head to the #flagger channel.
## Getting Started
- Fork the repository on GitHub
- If you want to contribute as a developer, read [Flagger Development Guide](https://docs.flagger.app/dev/dev-guide)
- If you have questions or concerns, get stuck, or need a hand, let us know
on the Slack channel. We are happy to help and look forward to having
you as part of the team, in whatever capacity.
@@ -59,7 +59,7 @@ get asked to resubmit the PR or divide the changes into more than one PR.
### Format of the Commit Message
For Flagger we prefer the following rules for good commit messages:
- Limit the subject to 50 characters and write as the continuation
of the sentence "If applied, this commit will ..."
@@ -69,4 +69,4 @@ For Flux we prefer the following rules for good commit messages:
The [following article](https://chris.beams.io/posts/git-commit/#seven-rules)
has some more helpful advice on documenting your work.
This doc is adapted from [FluxCD](https://github.com/fluxcd/flux/blob/master/CONTRIBUTING.md).

View File

@@ -1,16 +1,9 @@
FROM alpine:3.12
RUN apk --no-cache add ca-certificates
WORKDIR /home/flagger
USER nobody
COPY --chown=nobody:nobody /bin/flagger .
ENTRYPOINT ["./flagger"]

View File

@@ -1,41 +1,69 @@
FROM alpine:3.11 as build
RUN apk --no-cache add alpine-sdk perl curl
WORKDIR /home/app
RUN curl -sSLo hey "https://storage.googleapis.com/jblabs/dist/hey_linux_v0.1.2" && \
RUN curl -sSLo hey "https://storage.googleapis.com/hey-release/hey_linux_amd64" && \
chmod +x hey && mv hey /usr/local/bin/hey
RUN curl -sSL "https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz" | tar xvz && \
RUN HELM2_VERSION=2.16.8 && \
curl -sSL "https://get.helm.sh/helm-v${HELM2_VERSION}-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helm && \
chmod +x linux-amd64/tiller && mv linux-amd64/tiller /usr/local/bin/tiller
RUN curl -sSL "https://get.helm.sh/helm-v3.0.0-beta.3-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helmv3 && \
rm -rf linux-amd64
RUN HELM3_VERSION=3.2.3 && \
curl -sSL "https://get.helm.sh/helm-v${HELM3_VERSION}-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helmv3
RUN GRPC_HEALTH_PROBE_VERSION=v0.3.1 && \
wget -qO /usr/local/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && \
chmod +x /usr/local/bin/grpc_health_probe
RUN curl -sSL "https://github.com/bojand/ghz/releases/download/v0.39.0/ghz_0.39.0_Linux_x86_64.tar.gz" | tar xz -C /tmp && \
mv /tmp/ghz /usr/local/bin && chmod +x /usr/local/bin/ghz && rm -rf /tmp/ghz-web
RUN GHZ_VERSION=0.39.0 && \
curl -sSL "https://github.com/bojand/ghz/releases/download/v${GHZ_VERSION}/ghz_${GHZ_VERSION}_Linux_x86_64.tar.gz" | tar xz -C /tmp && \
mv /tmp/ghz /usr/local/bin && chmod +x /usr/local/bin/ghz
RUN HELM_TILLER_VERSION=0.9.3 && \
curl -sSL "https://github.com/rimusz/helm-tiller/archive/v${HELM_TILLER_VERSION}.tar.gz" | tar xz -C /tmp && \
mv /tmp/helm-tiller-${HELM_TILLER_VERSION} /tmp/helm-tiller
RUN WRK_VERSION=4.0.2 && \
cd /tmp && git clone -b ${WRK_VERSION} https://github.com/wg/wrk
RUN cd /tmp/wrk && make
FROM bash:5.0
RUN addgroup -S app && \
adduser -S -g app app && \
apk --no-cache add ca-certificates curl jq libgcc
WORKDIR /home/app
COPY --from=bats/bats:v1.1.0 /opt/bats/ /opt/bats/
RUN ln -s /opt/bats/bin/bats /usr/local/bin/
COPY --from=build /usr/local/bin/hey /usr/local/bin/
COPY --from=build /tmp/wrk/wrk /usr/local/bin/
COPY --from=build /usr/local/bin/helm /usr/local/bin/
COPY --from=build /usr/local/bin/tiller /usr/local/bin/
COPY --from=build /usr/local/bin/ghz /usr/local/bin/
COPY --from=build /usr/local/bin/helmv3 /usr/local/bin/
COPY --from=build /usr/local/bin/grpc_health_probe /usr/local/bin/
COPY --from=build /tmp/helm-tiller /tmp/helm-tiller
ADD https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/health/v1/health.proto /tmp/ghz/health.proto
RUN ls /tmp
COPY ./bin/loadtester .
RUN chown -R app:app ./
RUN chown -R app:app /tmp/ghz
USER app
RUN curl -sSL "https://github.com/rimusz/helm-tiller/archive/v0.8.3.tar.gz" | tar xvz && \
helm init --client-only && helm plugin install helm-tiller-0.8.3 && helm plugin list
# test load generator tools
RUN hey -n 1 -c 1 https://flagger.app > /dev/null && echo $? | grep 0
RUN wrk -d 1s -c 1 -t 1 https://flagger.app > /dev/null && echo $? | grep 0
# install Helm v2 plugins
RUN helm init --client-only && helm plugin install /tmp/helm-tiller
ENTRYPOINT ["./loadtester"]

View File

@@ -3,3 +3,4 @@ https://weave-community.slack.com/messages/flagger/ (obtain an invitation
at https://slack.weave.works/).
Stefan Prodan, Weaveworks <stefan@weave.works> (Slack: @stefan Twitter: @stefanprodan)
Takeshi Yoneda, DMM.com <cz.rk.t0415y.g@gmail.com> (Slack: @mathetake Twitter: @mathetake)

Makefile
View File

@@ -1,41 +1,11 @@
TAG?=latest
VERSION?=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"')
VERSION_MINOR:=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"' | rev | cut -d'.' -f2- | rev)
PATCH:=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"' | awk -F. '{print $$NF}')
SOURCE_DIRS = cmd pkg/apis pkg/controller pkg/server pkg/canary pkg/metrics pkg/router pkg/notifier
LT_VERSION?=$(shell grep 'VERSION' cmd/loadtester/main.go | awk '{ print $$4 }' | tr -d '"' | head -n1)
TS=$(shell date +%Y-%m-%d_%H-%M-%S)
run:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=istio -namespace=test-istio \
-metrics-server=https://prometheus.istio.flagger.dev
run-appmesh:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=appmesh \
-metrics-server=http://acfc235624ca911e9a94c02c4171f346-1585187926.us-west-2.elb.amazonaws.com:9090
run-nginx:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=nginx -namespace=nginx \
-metrics-server=http://prometheus-weave.istio.weavedx.com
run-smi:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=smi:istio -namespace=smi \
-metrics-server=https://prometheus.istio.weavedx.com
run-gloo:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=gloo -namespace=gloo \
-metrics-server=https://prometheus.istio.weavedx.com
run-nop:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=none -namespace=bg \
-metrics-server=https://prometheus.istio.weavedx.com
run-linkerd:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=linkerd -namespace=dev \
-metrics-server=https://prometheus.linkerd.flagger.dev
build:
GIT_COMMIT=$$(git rev-list -1 HEAD) && CGO_ENABLED=0 GOOS=linux go build \
-ldflags "-s -w -X github.com/weaveworks/flagger/pkg/version.REVISION=$${GIT_COMMIT}" \
-a -installsuffix cgo -o ./bin/flagger ./cmd/flagger/*
docker build -t weaveworks/flagger:$(TAG) . -f Dockerfile
push:
@@ -43,10 +13,15 @@ push:
docker push weaveworks/flagger:$(VERSION)
fmt:
gofmt -l -s -w ./
goimports -l -w ./
test-fmt:
gofmt -l -s ./ | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
goimports -l ./ | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
codegen:
./hack/update-codegen.sh
test-codegen:
./hack/verify-codegen.sh
@@ -54,15 +29,9 @@ test-codegen:
test: test-fmt test-codegen
go test ./...
helm-package:
cd charts/ && helm package ./*
mv charts/*.tgz bin/
curl -s https://raw.githubusercontent.com/weaveworks/flagger/gh-pages/index.yaml > ./bin/index.yaml
helm repo index bin --url https://flagger.app --merge ./bin/index.yaml
helm-up:
helm upgrade --install flagger ./charts/flagger --namespace=istio-system --set crd.create=false
helm upgrade --install flagger-grafana ./charts/grafana --namespace=istio-system
crd:
cat artifacts/flagger/crd.yaml > charts/flagger/crds/crd.yaml
cat artifacts/flagger/crd.yaml > kustomize/base/flagger/crd.yaml
version-set:
@next="$(TAG)" && \
@@ -75,46 +44,17 @@ version-set:
sed -i '' "s/newTag: $$current/newTag: $$next/g" kustomize/base/flagger/kustomization.yaml && \
echo "Version $$next set in code, deployment, chart and kustomize"
version-up:
@next="$(VERSION_MINOR).$$(($(PATCH) + 1))" && \
current="$(VERSION)" && \
sed -i '' "s/$$current/$$next/g" pkg/version/version.go && \
sed -i '' "s/flagger:$$current/flagger:$$next/g" artifacts/flagger/deployment.yaml && \
sed -i '' "s/tag: $$current/tag: $$next/g" charts/flagger/values.yaml && \
sed -i '' "s/appVersion: $$current/appVersion: $$next/g" charts/flagger/Chart.yaml && \
echo "Version $$next set in code, deployment and chart"
dev-up: version-up
@echo "Starting build/push/deploy pipeline for $(VERSION)"
docker build -t quay.io/stefanprodan/flagger:$(VERSION) . -f Dockerfile
docker push quay.io/stefanprodan/flagger:$(VERSION)
kubectl apply -f ./artifacts/flagger/crd.yaml
helm upgrade -i flagger ./charts/flagger --namespace=istio-system --set crd.create=false
release:
git tag "v$(VERSION)"
git push origin "v$(VERSION)"
release-set: fmt version-set helm-package
git add .
git commit -m "Release $(VERSION)"
git push origin master
git tag $(VERSION)
git push origin $(VERSION)
reset-test:
kubectl delete -f ./artifacts/namespaces
kubectl apply -f ./artifacts/namespaces
kubectl apply -f ./artifacts/canaries
loadtester-run: loadtester-build
docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
docker rm -f tester || true
docker run -dp 8888:9090 --name tester weaveworks/flagger-loadtester:$(LT_VERSION)
release-notes:
cd /tmp && GH_REL_URL="https://github.com/buchanae/github-release-notes/releases/download/0.2.0/github-release-notes-linux-amd64-0.2.0.tar.gz" && \
curl -sSL $${GH_REL_URL} | tar xz && sudo mv github-release-notes /usr/local/bin/
loadtester-build:
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o ./bin/loadtester ./cmd/loadtester/*
docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
loadtester-push:
docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
docker push weaveworks/flagger-loadtester:$(LT_VERSION)

README.md
View File

@@ -6,54 +6,59 @@
[![license](https://img.shields.io/github/license/weaveworks/flagger.svg)](https://github.com/weaveworks/flagger/blob/master/LICENSE)
[![release](https://img.shields.io/github/release/weaveworks/flagger/all.svg)](https://github.com/weaveworks/flagger/releases)
Flagger is a progressive delivery tool that automates the release process for applications running on Kubernetes.
It reduces the risk of introducing a new software version in production
by gradually shifting traffic to the new version while measuring metrics and running conformance tests.
![flagger-overview](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-overview.png)
Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring)
using a service mesh (App Mesh, Istio, Linkerd) or an ingress controller (Contour, Gloo, NGINX, Skipper, Traefik) for traffic routing.
For release analysis, Flagger can query Prometheus, Datadog or CloudWatch
and for alerting it uses Slack, MS Teams, Discord and Rocket.
### Documentation
Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app).
* Install
* [Flagger install on Kubernetes](https://docs.flagger.app/install/flagger-install-on-kubernetes)
* [Flagger install on GKE Istio](https://docs.flagger.app/install/flagger-install-on-google-cloud)
* [Flagger install on EKS App Mesh](https://docs.flagger.app/install/flagger-install-on-eks-appmesh)
* [Flagger install with SuperGloo](https://docs.flagger.app/install/flagger-install-with-supergloo)
* How it works
* [Canary custom resource](https://docs.flagger.app/how-it-works#canary-custom-resource)
* [Routing](https://docs.flagger.app/how-it-works#istio-routing)
* [Canary deployment stages](https://docs.flagger.app/how-it-works#canary-deployment)
* [Canary analysis](https://docs.flagger.app/how-it-works#canary-analysis)
* [HTTP metrics](https://docs.flagger.app/how-it-works#http-metrics)
* [Custom metrics](https://docs.flagger.app/how-it-works#custom-metrics)
* [Webhooks](https://docs.flagger.app/how-it-works#webhooks)
* [Load testing](https://docs.flagger.app/how-it-works#load-testing)
* [Manual gating](https://docs.flagger.app/how-it-works#manual-gating)
* [FAQ](https://docs.flagger.app/faq)
* Usage
* [Istio canary deployments](https://docs.flagger.app/usage/progressive-delivery)
* [Istio A/B testing](https://docs.flagger.app/usage/ab-testing)
* [Linkerd canary deployments](https://docs.flagger.app/usage/linkerd-progressive-delivery)
* [App Mesh canary deployments](https://docs.flagger.app/usage/appmesh-progressive-delivery)
* [NGINX ingress controller canary deployments](https://docs.flagger.app/usage/nginx-progressive-delivery)
* [Gloo ingress controller canary deployments](https://docs.flagger.app/usage/gloo-progressive-delivery)
* [Blue/Green deployments](https://docs.flagger.app/usage/blue-green)
* [Monitoring](https://docs.flagger.app/usage/monitoring)
* [How it works](https://docs.flagger.app/usage/how-it-works)
* [Deployment strategies](https://docs.flagger.app/usage/deployment-strategies)
* [Metrics analysis](https://docs.flagger.app/usage/metrics)
* [Webhooks](https://docs.flagger.app/usage/webhooks)
* [Alerting](https://docs.flagger.app/usage/alerting)
* [Monitoring](https://docs.flagger.app/usage/monitoring)
* Tutorials
* [Canary deployments with Helm charts and Weave Flux](https://docs.flagger.app/tutorials/canary-helm-gitops)
* [App Mesh](https://docs.flagger.app/tutorials/appmesh-progressive-delivery)
* [Istio](https://docs.flagger.app/tutorials/istio-progressive-delivery)
* [Linkerd](https://docs.flagger.app/tutorials/linkerd-progressive-delivery)
* [Contour](https://docs.flagger.app/tutorials/contour-progressive-delivery)
* [Gloo](https://docs.flagger.app/tutorials/gloo-progressive-delivery)
* [NGINX Ingress](https://docs.flagger.app/tutorials/nginx-progressive-delivery)
* [Skipper](https://docs.flagger.app/tutorials/skipper-progressive-delivery)
* [Traefik](https://docs.flagger.app/tutorials/traefik-progressive-delivery)
* [Kubernetes Blue/Green](https://docs.flagger.app/tutorials/kubernetes-blue-green)
### Who is using Flagger
List of organizations using Flagger:
* [Chick-fil-A](https://www.chick-fil-a.com)
* [Capra Consulting](https://www.capraconsulting.no)
* [DMM.com](https://dmm-corp.com)
* [MediaMarktSaturn](https://www.mediamarktsaturn.com)
* [Weaveworks](https://weave.works)
* [Jumia Group](https://group.jumia.com)
* [eLife](https://elifesciences.org/)
If you are using Flagger, please submit a PR to add your organization to the list!
### Canary CRD
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services, service mesh or ingress routes).
These objects expose the application on the mesh and drive the canary analysis and promotion.
Flagger keeps track of ConfigMaps and Secrets referenced by a Kubernetes Deployment and triggers a canary analysis if any of those objects change.
@@ -62,14 +67,14 @@ When promoting a workload in production, both code (container images) and config
For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# service mesh provider (optional)
# can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, contour, gloo, supergloo, traefik
provider: istio
# deployment reference
targetRef:
@@ -85,12 +90,17 @@ spec:
kind: HorizontalPodAutoscaler
name: podinfo
service:
# service name (defaults to targetRef.name)
name: podinfo
# ClusterIP port number
port: 9898
# container port name or number (optional)
targetPort: 9898
# port name can be http or grpc (default http)
portName: http
# add all the other container ports
# to the ClusterIP services (default false)
portDiscovery: true
# HTTP match conditions (optional)
match:
- uri:
@@ -103,7 +113,7 @@ spec:
# promote the canary without analysing it (default false)
skipAnalysis: false
# define the canary analysis timing and KPIs
analysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
@@ -114,72 +124,123 @@ spec:
# canary increment step
# percentage (0-100)
stepWeight: 5
# validation (optional)
metrics:
# builtin checks
- name: request-success-rate
# builtin Prometheus check
# minimum req success rate (non 5xx responses)
# percentage (0-100)
thresholdRange:
min: 99
interval: 1m
- name: request-duration
# builtin Prometheus check
# maximum req duration P99
# milliseconds
thresholdRange:
max: 500
interval: 30s
- name: "database connections"
# custom metric check
templateRef:
name: db-connections
thresholdRange:
min: 2
max: 100
interval: 1m
# testing (optional)
webhooks:
- name: "conformance test"
type: pre-rollout
url: http://flagger-helmtester.test/
timeout: 5m
metadata:
type: "helmv3"
cmd: "test run podinfo -n test"
- name: "load test"
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
# alerting (optional)
alerts:
- name: "dev team Slack"
severity: error
providerRef:
name: dev-slack
namespace: flagger
- name: "qa team Discord"
severity: warn
providerRef:
name: qa-discord
- name: "on-call MS Teams"
severity: info
providerRef:
name: on-call-msteams
```
For more details on how the canary analysis and promotion works please [read the docs](https://docs.flagger.app/usage/how-it-works).
### Features
**Service Mesh**
| Feature | App Mesh | Istio | Linkerd | Kubernetes CNI |
| ------------------------------------------ | ------------------ | ------------------ | ------------------ | ----------------- |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Blue/Green deployments (traffic mirroring) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
**Ingress**
| Feature | Contour | Gloo | NGINX | Skipper | Traefik |
| ------------------------------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
### Roadmap
#### [GitOps Toolkit](https://github.com/fluxcd/toolkit) compatibility
* Migrate Flagger to Kubernetes controller-runtime and [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
* Make the Canary status compatible with [kstatus](https://github.com/kubernetes-sigs/cli-utils)
* Make Flagger emit Kubernetes events compatible with Flux v2 notification API
* Migrate CI to GitHub Actions and publish AMD64, ARM64 and ARMv7 container images
* Integrate Flagger into Flux v2 as the progressive delivery component
#### Integrations
* Add support for Kubernetes [Ingress v2](https://github.com/kubernetes-sigs/service-apis)
* Add support for SMI compatible service mesh solutions like Open Service Mesh and Consul Connect
* Add support for ingress controllers like HAProxy and ALB
* Add support for metrics providers like InfluxDB, Stackdriver, SignalFX
### Contributing
Flagger is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
To start contributing please read the [development guide](https://docs.flagger.app/dev/dev-guide).
When submitting bug reports please include as much detail as possible:
* which Flagger version
* which Flagger CRD version
* which Kubernetes version
* what configuration (canary, ingress and workloads definitions)
* what happened (Flagger and Proxy logs)
### Getting Help
If you have any questions about Flagger and progressive delivery:

View File

@@ -1,62 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: abtest
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: abtest
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: abtest
service:
# container port
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
# Istio virtual service host names (optional)
hosts:
- abtest.istio.weavedx.com
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# total number of iterations
iterations: 10
# canary match condition
match:
- headers:
user-agent:
regex: "^(?!.*Chrome)(?=.*\bSafari\b).*$"
- headers:
cookie:
regex: "^(.*?;)?(type=insider)(;.*)?$"
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test:9898/"

View File

@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: abtest
namespace: test
labels:
app: abtest
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: abtest
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: abtest
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: abtest
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: abtest
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,65 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# container port name (optional)
# can be http or grpc
portName: http
# App Mesh reference
meshName: global
# define the canary analysis timing and KPIs
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# App Mesh Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# testing (optional)
webhooks:
- name: acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 30s
metadata:
type: bash
cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

View File

@@ -1,65 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: stefanprodan/podinfo:3.1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi

View File

@@ -1,6 +0,0 @@
apiVersion: appmesh.k8s.aws/v1beta1
kind: Mesh
metadata:
name: global
spec:
serviceDiscoveryType: dns

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,172 +0,0 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-config
namespace: test
labels:
app: ingress
data:
envoy.yaml: |
static_resources:
listeners:
- address:
socket_address:
address: 0.0.0.0
port_value: 8080
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
access_log:
- name: envoy.file_access_log
config:
path: /dev/stdout
codec_type: auto
stat_prefix: ingress_http
http_filters:
- name: envoy.router
config: {}
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match:
prefix: "/"
route:
cluster: podinfo
host_rewrite: podinfo.test
timeout: 15s
retry_policy:
retry_on: "gateway-error,connect-failure,refused-stream"
num_retries: 10
per_try_timeout: 5s
clusters:
- name: podinfo
connect_timeout: 0.30s
type: strict_dns
lb_policy: round_robin
load_assignment:
cluster_name: podinfo
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: podinfo.test
port_value: 9898
admin:
access_log_path: /dev/null
address:
socket_address:
address: 0.0.0.0
port_value: 9999
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress
namespace: test
labels:
app: ingress
spec:
replicas: 1
selector:
matchLabels:
app: ingress
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
app: ingress
annotations:
prometheus.io/path: "/stats/prometheus"
prometheus.io/port: "9999"
prometheus.io/scrape: "true"
# dummy port to exclude ingress from mesh traffic
# only egress should go over the mesh
appmesh.k8s.aws/ports: "444"
spec:
terminationGracePeriodSeconds: 30
containers:
- name: ingress
image: "envoyproxy/envoy-alpine:v1.11.1"
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
command:
- /usr/local/bin/envoy
args:
- -l
- $loglevel
- -c
- /config/envoy.yaml
- --base-id
- "1234"
ports:
- name: admin
containerPort: 9999
protocol: TCP
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
initialDelaySeconds: 5
tcpSocket:
port: admin
readinessProbe:
initialDelaySeconds: 5
tcpSocket:
port: admin
resources:
requests:
cpu: 100m
memory: 64Mi
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: ingress-config
---
kind: Service
apiVersion: v1
metadata:
name: ingress
namespace: test
spec:
selector:
app: ingress
ports:
- protocol: TCP
name: http
port: 80
targetPort: http
type: LoadBalancer
---
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
name: ingress
namespace: test
spec:
meshName: global
listeners:
- portMapping:
port: 80
protocol: http
serviceDiscovery:
dns:
hostName: ingress.test
backends:
- virtualService:
virtualServiceName: podinfo.test

View File

@@ -1,88 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# service mesh provider (default istio)
# can be: kubernetes, istio, appmesh, smi, nginx, gloo, supergloo
# use the kubernetes provider for Blue/Green style deployments
provider: istio
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# port name can be http or grpc (default http)
portName: http
# add all the other container ports
# when generating ClusterIP services (default false)
portDiscovery: false
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
# remove the mesh gateway if the public host is
# shared across multiple virtual services
- mesh
# Istio virtual service host names (optional)
hosts:
- app.istio.weavedx.com
# Istio traffic policy (optional)
trafficPolicy:
tls:
# use ISTIO_MUTUAL when mTLS is enabled
mode: DISABLE
# HTTP match conditions (optional)
match:
- uri:
prefix: /
# HTTP rewrite (optional)
rewrite:
uri: /
# HTTP timeout (optional)
timeout: 30s
# promote the canary without analysing it (default false)
skipAnalysis: false
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
logCmdOutput: "true"

View File

@@ -1,68 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9898"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: stefanprodan/podinfo:3.1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,6 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: test
labels:
istio-injection: enabled

View File

@@ -1,26 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: backend
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: regexp:^1.7.*
spec:
releaseName: backend
chart:
repository: https://flagger.app/
name: podinfo
version: 2.2.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.7.0
httpServer:
timeout: 30s
canary:
enabled: true
istioIngress:
enabled: false
loadtest:
enabled: true

View File

@@ -1,27 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: frontend
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: semver:~1.7
spec:
releaseName: frontend
chart:
repository: https://flagger.app/
name: podinfo
version: 2.2.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.7.0
backend: http://backend-podinfo:9898/echo
canary:
enabled: true
istioIngress:
enabled: true
gateway: public-gateway.istio-system.svc.cluster.local
host: frontend.istio.example.com
loadtest:
enabled: true

View File

@@ -1,18 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: loadtester
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: glob:0.*
spec:
releaseName: flagger-loadtester
chart:
repository: https://flagger.app/
name: loadtester
version: 0.6.0
values:
image:
repository: weaveworks/flagger-loadtester
tag: 0.6.1

View File

@@ -1,264 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus
labels:
app: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus
labels:
app: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: appmesh-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
data:
prometheus.yml: |-
global:
scrape_interval: 5s
scrape_configs:
# Scrape config for AppMesh Envoy sidecar
- job_name: 'appmesh-envoy'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_name]
action: keep
regex: '^envoy$'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: ${1}:9901
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
# Exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
# Scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# Scrape config for nodes
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# scrape config for cAdvisor
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# scrape config for pods
- job_name: kubernetes-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- source_labels: [ __address__ ]
regex: '.*9901.*'
action: drop
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
annotations:
version: "appmesh-v1alpha1"
spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.7.1"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=6h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
memory: 128Mi
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
volumes:
- name: config-volume
configMap:
name: prometheus
---
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: appmesh-system
labels:
name: prometheus
spec:
selector:
app: prometheus
ports:
- name: http
protocol: TCP
port: 9090

View File

@@ -0,0 +1,62 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: appmesh
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
port: 80
targetPort: 9898
meshName: global
retries:
attempts: 3
perTryTimeout: 5s
retryOn: "gateway-error,client-error,stream-error"
timeout: 35s
match:
- uri:
prefix: /
rewrite:
uri: /
analysis:
interval: 15s
threshold: 10
iterations: 10
match:
- headers:
x-canary:
exact: "insider"
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 -H 'X-Canary: insider' http://podinfo-canary.test/"
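
A rough timeline for the A/B analysis above, assuming every metric check and webhook passes: 10 iterations at a 15s interval give roughly 2m30s of analysis, during which only requests carrying the x-canary: insider header are routed to the canary. Sketched step by step:

  # iteration 1   (t ~ 15s)   header-matched traffic -> canary, everything else -> primary
  # iteration 2   (t ~ 30s)   metrics checked against request-success-rate and request-duration
  # ...
  # iteration 10  (t ~ 150s)  last check; if still green, the canary is promoted
  # any point     10 accumulated failed checks (threshold: 10) -> rollback, canary marked Failed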

View File

@@ -0,0 +1,59 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: appmesh
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
port: 80
targetPort: http
meshName: global
retries:
attempts: 3
perTryTimeout: 5s
retryOn: "gateway-error,client-error,stream-error"
timeout: 35s
match:
- uri:
prefix: /
rewrite:
uri: /
analysis:
interval: 15s
threshold: 10
maxWeight: 50
stepWeight: 5
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"

View File

@@ -0,0 +1,70 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: istio
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
name: podinfo
port: 80
targetPort: 9898
portName: http
portDiscovery: true
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- app.example.com
trafficPolicy:
tls:
mode: DISABLE
match:
- uri:
prefix: /
rewrite:
uri: /
timeout: 30s
analysis:
interval: 15s
threshold: 10
iterations: 10
match:
- headers:
cookie:
regex: "^(.*?;)?(type=insider)(;.*)?$"
- headers:
user-agent:
regex: ".*Firefox.*"
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test/"
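
The two header match blocks in the analysis above act as alternatives: during the 10 x 15s analysis window a request reaches the canary if either condition holds, and all other traffic stays on the primary. A sketch of the intended routing (the example header values are illustrative):

  # Cookie: type=insider                     -> canary (matches the cookie regex)
  # User-Agent: Mozilla/5.0 ... Firefox/84.0 -> canary (matches ".*Firefox.*")
  # no matching header                       -> primary for the whole analysis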

View File

@@ -0,0 +1,66 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: istio
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
name: podinfo
port: 80
targetPort: 9898
portName: http
portDiscovery: true
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- app.example.com
trafficPolicy:
tls:
mode: DISABLE
match:
- uri:
prefix: /
rewrite:
uri: /
timeout: 30s
skipAnalysis: false
analysis:
interval: 15s
threshold: 10
maxWeight: 50
stepWeight: 5
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"

View File

@@ -0,0 +1,51 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: linkerd
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
name: podinfo
port: 80
targetPort: 9898
portName: http
portDiscovery: true
skipAnalysis: false
analysis:
interval: 15s
threshold: 10
stepWeights: [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55]
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"
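
With stepWeights the canary traffic follows the explicit list instead of a fixed stepWeight increment, advancing one entry per interval. A sketch of the resulting schedule, assuming all checks pass:

  # t ~ 15s -> 5%   t ~ 30s -> 10%   t ~ 45s -> 15%   ...   t ~ 2m45s -> 55%
  # after the last entry the canary is promoted; 10 failed metric checks
  # (threshold: 10) at any point route all traffic back to the primary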

View File

@@ -0,0 +1,52 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: linkerd
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
name: podinfo
port: 80
targetPort: 9898
portName: http
portDiscovery: true
skipAnalysis: false
analysis:
interval: 15s
threshold: 10
maxWeight: 50
stepWeight: 5
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"
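
For the linkerd provider, Flagger shifts traffic through an SMI TrafficSplit (the trafficsplits permissions appear in the RBAC rules above). Roughly what that object looks like mid-analysis for this canary; the apiVersion and weight notation depend on the SMI CRDs installed, and the weights below are shown as plain percentages for illustration only:

  apiVersion: split.smi-spec.io/v1alpha1
  kind: TrafficSplit
  metadata:
    name: podinfo
    namespace: test
  spec:
    service: podinfo            # apex service generated from .spec.service.name
    backends:
    - service: podinfo-primary
      weight: 95
    - service: podinfo-canary
      weight: 5                 # raised by stepWeight (5) each interval up to maxWeight (50)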

View File

@@ -2,7 +2,7 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: flagger
namespace: istio-system
namespace: default
labels:
app: flagger
---
@@ -18,69 +18,164 @@ rules:
resources:
- events
- configmaps
- configmaps/finalizers
- secrets
- secrets/finalizers
- services
verbs: ["*"]
- services/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
- daemonsets
- daemonsets/finalizers
- deployments
verbs: ["*"]
- deployments/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs: ["*"]
- horizontalpodautoscalers/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- "extensions"
- extensions
- networking.k8s.io
resources:
- ingresses
- ingresses/status
verbs: ["*"]
- ingresses/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
verbs: ["*"]
- metrictemplates
- metrictemplates/status
- alertproviders
- alertproviders/status
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/status
- virtualservices/finalizers
- destinationrules
- destinationrules/status
verbs: ["*"]
- destinationrules/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- appmesh.k8s.aws
resources:
- meshes
- meshes/status
- virtualnodes
- virtualnodes/status
- virtualnodes/finalizers
- virtualrouters
- virtualrouters/finalizers
- virtualservices
- virtualservices/status
verbs: ["*"]
- virtualservices/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- split.smi-spec.io
resources:
- trafficsplits
verbs: ["*"]
- trafficsplits/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- specs.smi-spec.io
resources:
- httproutegroups
- httproutegroups/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gloo.solo.io
resources:
- settings
- upstreams
- upstreams/finalizers
- upstreamgroups
- proxies
- virtualservices
verbs: ["*"]
- upstreamgroups/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gateway.solo.io
- projectcontour.io
resources:
- virtualservices
- gateways
verbs: ["*"]
- httpproxies
- httpproxies/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- nonResourceURLs:
- /version
verbs:
@@ -99,4 +194,4 @@ roleRef:
subjects:
- kind: ServiceAccount
name: flagger
namespace: istio-system
namespace: default

View File

@@ -6,16 +6,19 @@ metadata:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1alpha3
version: v1beta1
versions:
- name: v1alpha3
- name: v1beta1
served: true
storage: true
- name: v1alpha2
- name: v1alpha3
served: true
storage: false
- name: v1alpha2
served: false
storage: false
- name: v1alpha1
served: true
served: false
storage: false
names:
plural: canaries
@@ -39,19 +42,23 @@ spec:
priority: 1
- name: Interval
type: string
JSONPath: .spec.canaryAnalysis.interval
JSONPath: .spec.analysis.interval
priority: 1
- name: Mirror
type: boolean
JSONPath: .spec.canaryAnalysis.mirror
JSONPath: .spec.analysis.mirror
priority: 1
- name: StepWeight
type: string
JSONPath: .spec.canaryAnalysis.stepWeight
JSONPath: .spec.analysis.stepWeight
priority: 1
- name: StepWeights
type: string
JSONPath: .spec.analysis.stepWeights
priority: 1
- name: MaxWeight
type: string
JSONPath: .spec.canaryAnalysis.maxWeight
JSONPath: .spec.analysis.maxWeight
priority: 1
- name: LastTransitionTime
type: string
@@ -63,16 +70,19 @@ spec:
required:
- targetRef
- service
- canaryAnalysis
- analysis
properties:
provider:
description: Traffic management provider
type: string
metricsServer:
description: Prometheus URL
type: string
progressDeadlineSeconds:
description: Deployment progress deadline
type: number
targetRef:
description: Deployment selector
description: Target selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
@@ -80,38 +90,46 @@ spec:
type: string
kind:
type: string
enum:
- DaemonSet
- Deployment
- Service
name:
type: string
autoscalerRef:
description: HPA selector
anyOf:
- type: string
- type: object
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- HorizontalPodAutoscaler
name:
type: string
ingressRef:
description: NGINX ingress selector
anyOf:
- type: string
- type: object
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- Ingress
name:
type: string
service:
description: Kubernetes Service spec
type: object
required: ["port"]
properties:
name:
description: Kubernetes service name
type: string
port:
description: Container port number
type: number
@@ -126,58 +144,420 @@ spec:
portDiscovery:
description: Enable port discovery
type: boolean
timeout:
description: HTTP or gRPC request timeout
type: string
meshName:
description: AppMesh mesh name
type: string
backends:
description: AppMesh backend array
anyOf:
- type: string
- type: array
timeout:
description: Istio HTTP or gRPC request timeout
type: string
type: array
items:
type: string
hosts:
description: The list of host names for this service
type: array
items:
type: string
delegation:
description: Enable behaving as a delegate VirtualService
type: boolean
match:
description: URI match conditions
type: array
items:
type: object
properties:
uri:
type: object
oneOf:
- required: ["exact"]
- required: ["prefix"]
- required: ["suffix"]
- required: ["regex"]
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
suffix:
format: string
type: string
regex:
format: string
type: string
retries:
description: Retry policy for HTTP requests
type: object
properties:
attempts:
description: Number of retries for a given request
format: int32
type: integer
perTryTimeout:
description: Timeout per retry attempt for a given request
type: string
retryOn:
description: Specifies the conditions under which retry takes place
format: string
type: string
rewrite:
description: Rewrite HTTP URIs
type: object
properties:
uri:
format: string
type: string
headers:
description: Headers operations
type: object
properties:
request:
properties:
add:
additionalProperties:
format: string
type: string
type: object
remove:
items:
format: string
type: string
type: array
set:
additionalProperties:
format: string
type: string
type: object
type: object
response:
properties:
add:
additionalProperties:
format: string
type: string
type: object
remove:
items:
format: string
type: string
type: array
set:
additionalProperties:
format: string
type: string
type: object
type: object
gateways:
description: The list of Istio gateways for this virtual service
type: array
items:
type: string
corsPolicy:
description: Istio Cross-Origin Resource Sharing policy (CORS)
type: object
properties:
allowCredentials:
type: boolean
allowHeaders:
items:
format: string
type: string
type: array
allowMethods:
description: List of HTTP methods allowed to access the resource
items:
format: string
type: string
type: array
allowOrigin:
description: The list of origins that are allowed to perform
CORS requests.
items:
format: string
type: string
type: array
allowOrigins:
description: String patterns that match allowed origins
type: array
items:
type: object
oneOf:
- required:
- exact
- required:
- prefix
- required:
- regex
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
regex:
format: string
type: string
exposeHeaders:
items:
format: string
type: string
type: array
maxAge:
type: string
trafficPolicy:
description: Istio traffic policy
anyOf:
- type: string
- type: object
match:
description: Istio URL match conditions
anyOf:
- type: string
- type: array
rewrite:
description: Istio URL rewrite
anyOf:
- type: string
- type: object
headers:
description: Istio headers operations
anyOf:
- type: string
- type: object
corsPolicy:
description: Istio CORS policy
anyOf:
- type: string
- type: object
gateways:
description: Istio gateways list
anyOf:
- type: string
- type: array
hosts:
description: Istio hosts list
anyOf:
- type: string
- type: array
type: object
properties:
connectionPool:
properties:
http:
description: HTTP connection pool settings.
type: object
properties:
h2UpgradePolicy:
description: Specify if http1.1 connection should
be upgraded to http2 for the associated destination.
enum:
- DEFAULT
- DO_NOT_UPGRADE
- UPGRADE
type: string
http1MaxPendingRequests:
description: Maximum number of pending HTTP requests
to a destination.
format: int32
type: integer
http2MaxRequests:
description: Maximum number of requests to a backend.
format: int32
type: integer
idleTimeout:
description: The idle timeout for upstream connection
pool connections.
type: string
maxRequestsPerConnection:
description: Maximum number of requests per connection
to a backend.
format: int32
type: integer
maxRetries:
format: int32
type: integer
loadBalancer:
description: Settings controlling the load balancer algorithms.
type: object
oneOf:
- required:
- simple
- properties:
consistentHash:
oneOf:
- required:
- httpHeaderName
- required:
- httpCookie
- required:
- useSourceIp
- required:
- httpQueryParameterName
required:
- consistentHash
properties:
consistentHash:
properties:
httpCookie:
description: Hash based on HTTP cookie.
properties:
name:
description: Name of the cookie.
format: string
type: string
path:
description: Path to set for the cookie.
format: string
type: string
ttl:
description: Lifetime of the cookie.
type: string
type: object
httpHeaderName:
description: Hash based on a specific HTTP header.
format: string
type: string
httpQueryParameterName:
description: Hash based on a specific HTTP query parameter.
format: string
type: string
minimumRingSize:
type: integer
useSourceIp:
description: Hash based on the source IP address.
type: boolean
type: object
localityLbSetting:
properties:
distribute:
description: 'Optional: only one of distribute or
failover can be set.'
items:
properties:
from:
description: Originating locality, '/' separated,
e.g.
format: string
type: string
to:
additionalProperties:
type: integer
description: Map of upstream localities to traffic
distribution weights.
type: object
type: object
type: array
enabled:
description: enable locality load balancing, this
is DestinationRule-level and will override mesh
wide settings in entirety.
type: boolean
failover:
description: 'Optional: only failover or distribute
can be set.'
items:
properties:
from:
description: Originating region.
format: string
type: string
to:
format: string
type: string
type: object
type: array
type: object
simple:
enum:
- ROUND_ROBIN
- LEAST_CONN
- RANDOM
- PASSTHROUGH
type: string
outlierDetection:
description: Settings controlling eviction of unhealthy hosts from the load balancing pool.
type: object
properties:
baseEjectionTime:
description: Minimum ejection duration.
type: string
consecutive5xxErrors:
description: Number of 5xx errors before a host is ejected
from the connection pool.
type: integer
consecutiveErrors:
format: int32
type: integer
consecutiveGatewayErrors:
description: Number of gateway errors before a host is
ejected from the connection pool.
format: int32
type: integer
interval:
description: Time interval between ejection sweep analysis.
type: string
maxEjectionPercent:
format: int32
type: integer
minHealthPercent:
format: int32
type: integer
tls:
description: Istio TLS related settings for connections to the upstream service
type: object
properties:
caCertificates:
format: string
type: string
clientCertificate:
description: REQUIRED if mode is `MUTUAL`.
format: string
type: string
mode:
enum:
- DISABLE
- SIMPLE
- MUTUAL
- ISTIO_MUTUAL
type: string
privateKey:
description: REQUIRED if mode is `MUTUAL`.
format: string
type: string
sni:
description: SNI string to present to the server
during TLS handshake.
format: string
type: string
subjectAltNames:
items:
format: string
type: string
type: array
apex:
description: Metadata to add to the apex service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
primary:
description: Metadata to add to the primary service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
canary:
description: Metadata to add to the canary service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
skipAnalysis:
description: Skip analysis and promote canary
type: boolean
canaryAnalysis:
revertOnDeletion:
description: Revert mutated resources to original spec on deletion
type: boolean
analysis:
description: Canary analysis for this canary
type: object
oneOf:
- required: ["interval", "threshold", "iterations"]
- required: ["interval", "threshold", "stepWeight"]
- required: ["interval", "threshold", "stepWeights"]
properties:
interval:
description: Canary schedule interval
description: Schedule interval for this canary
type: string
pattern: "^[0-9]+(m|s)"
iterations:
@@ -187,74 +567,136 @@ spec:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic percentage routed to canary
description: Max traffic weight routed to canary
type: number
stepWeight:
description: Canary incremental traffic percentage step
description: Incremental traffic step weight for the analysis phase
type: number
stepWeights:
description: Incremental traffic step weights for the analysis phase
type: array
items:
type: number
stepWeightPromotion:
description: Incremental traffic step weight for the promotion phase
type: number
mirror:
description: Mirror traffic to canary before shifting
description: Mirror traffic to canary
type: boolean
mirrorWeight:
description: Weight of traffic to be mirrored
type: number
match:
description: A/B testing match conditions
anyOf:
- type: string
- type: array
metrics:
description: Prometheus query list for this canary
type: array
properties:
items:
type: object
required: ["name", "threshold"]
properties:
name:
description: Name of the Prometheus metric
type: string
interval:
description: Interval of the promql query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max scalar value accepted for this metric
type: number
query:
description: Prometheus query
items:
type: object
properties:
headers:
type: object
additionalProperties:
oneOf:
- required: ["exact"]
- required: ["prefix"]
- required: ["suffix"]
- required: ["regex"]
type: object
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
suffix:
format: string
type: string
regex:
description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax)
format: string
type: string
sourceLabels:
description: Applicable only when the 'mesh' gateway is included in the service.gateways list
type: object
additionalProperties:
format: string
type: string
metrics:
description: Metric check list for this canary
type: array
items:
type: object
required: ["name"]
properties:
name:
description: Name of the metric
type: string
interval:
description: Interval of the query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max value accepted for this metric
type: number
thresholdRange:
description: Range accepted for this metric
type: object
properties:
min:
description: Min value accepted for this metric
type: number
max:
description: Max value accepted for this metric
type: number
query:
description: Prometheus query
type: string
templateRef:
description: Metric template reference
type: object
required: ["name"]
properties:
name:
description: Name of this metric template
type: string
namespace:
description: Namespace of this metric template
type: string
webhooks:
description: Webhook list for this canary
type: array
properties:
items:
type: object
required: ["name", "url"]
properties:
name:
description: Name of the webhook
items:
type: object
required: ["name", "url"]
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
- confirm-rollout
- pre-rollout
- rollout
- confirm-promotion
- post-rollout
- event
- rollback
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
metadata:
description: Metadata (key-value pairs) for this webhook
type: object
additionalProperties:
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
- confirm-rollout
- pre-rollout
- rollout
- confirm-promotion
- post-rollout
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
metadata:
description: Metadata (key-value pairs) for this webhook
anyOf:
- type: string
- type: object
status:
properties:
phase:
@@ -270,8 +712,10 @@ spec:
- Finalising
- Succeeded
- Failed
- Terminating
- Terminated
canaryWeight:
description: Traffic weight percentage routed to canary
description: Traffic weight routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
@@ -289,28 +733,157 @@ spec:
conditions:
description: Status conditions of this canary
type: array
items:
type: object
required: ["type", "status", "reason"]
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: metrictemplates.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
- name: v1alpha1
served: true
storage: false
names:
plural: metrictemplates
singular: metrictemplate
kind: MetricTemplate
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Provider
type: string
JSONPath: .spec.provider.type
validation:
openAPIV3Schema:
properties:
spec:
required:
- provider
- query
properties:
provider:
description: Provider of this metric template
type: object
required:
- type
properties:
items:
type:
description: Type of this provider
type: string
enum:
- prometheus
- influxdb
- datadog
- cloudwatch
- newrelic
address:
description: API address of this provider
type: string
secretRef:
description: Kubernetes secret reference containing the provider credentials
type: object
required: ["type", "status", "reason"]
required:
- name
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
name:
description: Name of the Kubernetes secret
type: string
region:
description: Region of the provider
type: string
query:
description: Query of this metric template
type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: alertproviders.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
names:
plural: alertproviders
singular: alertprovider
kind: AlertProvider
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Type
type: string
JSONPath: .spec.type
validation:
openAPIV3Schema:
properties:
spec:
oneOf:
- required:
- type
- address
- required:
- type
- secretRef
properties:
type:
description: Type of this provider
type: string
enum:
- slack
- msteams
- discord
- rocket
address:
description: Hook URL address of this provider
type: string
secretRef:
description: Kubernetes secret reference containing the provider address
type: object
required:
- name
properties:
name:
description: Name of the Kubernetes secret
type: string
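
Minimal sketches of objects conforming to the two new CRDs above; the names, namespaces, Prometheus address, metric names and Slack secret are illustrative placeholders, not values taken from this changeset:

  apiVersion: flagger.app/v1beta1
  kind: MetricTemplate
  metadata:
    name: error-rate
    namespace: test
  spec:
    provider:
      type: prometheus
      address: http://prometheus.monitoring:9090   # hypothetical in-cluster address
    query: |
      100 * sum(rate(http_requests_total{status=~"5.."}[1m]))
      /
      sum(rate(http_requests_total[1m]))
  ---
  apiVersion: flagger.app/v1beta1
  kind: AlertProvider
  metadata:
    name: on-call
    namespace: test
  spec:
    type: slack
    secretRef:
      name: on-call-slack   # secret expected to hold the hook address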

View File

@@ -2,7 +2,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
namespace: istio-system
namespace: default
labels:
app: flagger
spec:
@@ -22,7 +22,7 @@ spec:
serviceAccountName: flagger
containers:
- name: flagger
image: weaveworks/flagger:0.19.0
image: weaveworks/flagger:1.4.2
imagePullPolicy: IfNotPresent
ports:
- name: http
@@ -30,9 +30,6 @@ spec:
command:
- ./flagger
- -log-level=info
- -control-loop-interval=10s
- -mesh-provider=$(MESH_PROVIDER)
- -metrics-server=http://prometheus.istio-system.svc.cluster.local:9090
livenessProbe:
exec:
command:

View File

@@ -1,27 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt

View File

@@ -1,834 +0,0 @@
# Source: istio/charts/prometheus/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
data:
prometheus.yml: |-
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'istio-mesh'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;prometheus
# Scrape config for envoy stats
- job_name: 'envoy-stats'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-envoy-prom'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:15090
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
metric_relabel_configs:
# Exclude some of the envoy metrics that have massive cardinality
# This list may need to be pruned further moving forward, as informed
# by performance and scalability testing.
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
- job_name: 'istio-policy'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-policy;http-monitoring
- job_name: 'istio-telemetry'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;http-monitoring
- job_name: 'pilot'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-pilot;http-monitoring
- job_name: 'galley'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-galley;http-monitoring
# scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# scrape config for nodes (kubelet)
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# Scrape config for Kubelet cAdvisor.
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with 'container_') have been removed from the
# Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
# HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
# in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
# the --cadvisor-port=0 Kubelet flag).
#
# This job is not necessary and should be removed in Kubernetes 1.6 and
# earlier versions, or it will cause the metrics to be scraped twice.
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# scrape config for service endpoints.
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs: # If first two labels are present, pod should be scraped by the istio-secure job.
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status]
action: drop
regex: (.+)
- source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
action: drop
regex: (true)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- job_name: 'kubernetes-pods-istio-secure'
scheme: https
tls_config:
ca_file: /etc/istio-certs/root-cert.pem
cert_file: /etc/istio-certs/cert-chain.pem
key_file: /etc/istio-certs/key.pem
insecure_skip_verify: true # prometheus does not support secure naming.
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
# sidecar status annotation is added by sidecar injector and
# istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
- source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
action: keep
regex: (([^;]+);([^;]*))|(([^;]*);(true))
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__] # Only keep address that is host:port
action: keep # otherwise an extra target with ':443' is added for https scheme
regex: ([^:]+):(\d+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
---
# Source: istio/charts/prometheus/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus-istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
# Source: istio/charts/prometheus/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
---
# Source: istio/charts/prometheus/templates/clusterrolebindings.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus-istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-istio-system
subjects:
- kind: ServiceAccount
name: prometheus
namespace: istio-system
---
# Source: istio/charts/prometheus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: istio-system
annotations:
prometheus.io/scrape: 'true'
labels:
name: prometheus
spec:
selector:
app: prometheus
ports:
- name: http-prometheus
protocol: TCP
port: 9090
---
# Source: istio/charts/prometheus/templates/deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
chart: prometheus-1.0.6
heritage: Tiller
release: istio
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.3.1"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=6h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
- mountPath: /etc/istio-certs
name: istio-certs
volumes:
- name: config-volume
configMap:
name: prometheus
- name: istio-certs
secret:
defaultMode: 420
optional: true
secretName: istio.default
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
- weight: 2
preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestcount
namespace: istio-system
spec:
value: "1"
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code | 200
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestduration
namespace: istio-system
spec:
value: response.duration | "0ms"
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code | 200
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: requestsize
namespace: istio-system
spec:
value: request.size | 0
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code | 200
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: responsesize
namespace: istio-system
spec:
value: response.size | 0
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.host | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
request_protocol: api.protocol | context.protocol | "unknown"
response_code: response.code | 200
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: tcpbytesent
namespace: istio-system
spec:
value: connection.sent.bytes | 0
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.name | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: tcpbytereceived
namespace: istio-system
spec:
value: connection.received.bytes | 0
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
source_workload: source.workload.name | "unknown"
source_workload_namespace: source.workload.namespace | "unknown"
source_principal: source.principal | "unknown"
source_app: source.labels["app"] | "unknown"
source_version: source.labels["version"] | "unknown"
destination_workload: destination.workload.name | "unknown"
destination_workload_namespace: destination.workload.namespace | "unknown"
destination_principal: destination.principal | "unknown"
destination_app: destination.labels["app"] | "unknown"
destination_version: destination.labels["version"] | "unknown"
destination_service: destination.service.name | "unknown"
destination_service_name: destination.service.name | "unknown"
destination_service_namespace: destination.service.namespace | "unknown"
connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
name: handler
namespace: istio-system
spec:
metrics:
- name: requests_total
instance_name: requestcount.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
- name: request_duration_seconds
instance_name: requestduration.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
explicit_buckets:
bounds: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]
- name: request_bytes
instance_name: requestsize.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
exponentialBuckets:
numFiniteBuckets: 8
scale: 1
growthFactor: 10
- name: response_bytes
instance_name: responsesize.metric.istio-system
kind: DISTRIBUTION
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- request_protocol
- response_code
- connection_security_policy
buckets:
exponentialBuckets:
numFiniteBuckets: 8
scale: 1
growthFactor: 10
- name: tcp_sent_bytes_total
instance_name: tcpbytesent.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- connection_security_policy
- name: tcp_received_bytes_total
instance_name: tcpbytereceived.metric.istio-system
kind: COUNTER
label_names:
- reporter
- source_app
- source_principal
- source_workload
- source_workload_namespace
- source_version
- destination_app
- destination_principal
- destination_workload
- destination_workload_namespace
- destination_version
- destination_service
- destination_service_name
- destination_service_namespace
- connection_security_policy
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: promhttp
namespace: istio-system
spec:
match: context.protocol == "http" || context.protocol == "grpc"
actions:
- handler: handler.prometheus
instances:
- requestcount.metric
- requestduration.metric
- requestsize.metric
- responsesize.metric
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: promtcp
namespace: istio-system
spec:
match: context.protocol == "tcp"
actions:
- handler: handler.prometheus
instances:
- tcpbytesent.metric
- tcpbytereceived.metric
---

View File

@@ -1,36 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
progressDeadlineSeconds: 60
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
port: 9898
canaryAnalysis:
interval: 10s
threshold: 10
maxWeight: 50
stepWeight: 5
metrics:
- name: request-success-rate
threshold: 99
interval: 1m
- name: request-duration
threshold: 500
interval: 30s
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://gloo.example.com/"

View File

@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 1
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,17 +0,0 @@
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
virtualHost:
domains:
- '*'
name: podinfo.default
routes:
- matcher:
prefix: /
routeAction:
upstreamGroup:
name: podinfo
namespace: gloo

View File

@@ -1,58 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger-helmtester
namespace: kube-system
labels:
app: flagger-helmtester
spec:
selector:
matchLabels:
app: flagger-helmtester
template:
metadata:
labels:
app: flagger-helmtester
annotations:
prometheus.io/scrape: "true"
spec:
serviceAccountName: tiller
containers:
- name: helmtester
image: weaveworks/flagger-loadtester:0.8.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level=info
- -timeout=1h
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"

View File

@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: flagger-helmtester
namespace: kube-system
labels:
app: flagger-helmtester
spec:
type: ClusterIP
selector:
app: flagger-helmtester
ports:
- name: http
port: 80
protocol: TCP
targetPort: http

View File

@@ -1,19 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: flagger-loadtester-bats
data:
tests: |
#!/usr/bin/env bats
@test "check message" {
curl -sS http://${URL} | jq -r .message | {
run cut -d $' ' -f1
[ $output = "greetings" ]
}
}
@test "check headers" {
curl -sS http://${URL}/headers | grep X-Request-Id
}

View File

@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger-loadtester
labels:
app: flagger-loadtester
spec:
selector:
matchLabels:
app: flagger-loadtester
template:
metadata:
labels:
app: flagger-loadtester
annotations:
prometheus.io/scrape: "true"
spec:
containers:
- name: loadtester
image: weaveworks/flagger-loadtester:0.9.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level=info
- -timeout=1h
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001
# volumeMounts:
# - name: tests
# mountPath: /bats
# readOnly: true
# volumes:
# - name: tests
# configMap:
# name: flagger-loadtester-bats

View File

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: flagger-loadtester
labels:
app: flagger-loadtester
spec:
type: ClusterIP
selector:
app: flagger-loadtester
ports:
- name: http
port: 80
protocol: TCP
targetPort: http

View File

@@ -1,7 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: test
labels:
istio-injection: enabled
appmesh.k8s.aws/sidecarInjectorWebhook: enabled

View File

@@ -1,68 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# ingress reference
ingressRef:
apiVersion: extensions/v1beta1
kind: Ingress
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
service:
# container port
port: 9898
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# NGINX Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: "latency"
threshold: 0.5
interval: 1m
query: |
histogram_quantile(0.99,
sum(
rate(
http_request_duration_seconds_bucket{
kubernetes_namespace="test",
kubernetes_pod_name=~"podinfo-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)"
}[1m]
)
) by (le)
)
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://app.example.com/"
logCmdOutput: "true"
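
With the settings above the analysis shifts traffic to the canary in 5% steps every 10 seconds up to 50%, assuming every metric check passes, and rolls the canary back once 10 checks have failed. Under the v1beta1 schema added later in this compare, the same progression could also be spelled out explicitly with `stepWeights`; a minimal sketch:

```yaml
analysis:
  interval: 10s
  threshold: 10
  # explicit equivalent of stepWeight: 5 with maxWeight: 50
  stepWeights: [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
```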

View File

@@ -1,69 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: green
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
failureThreshold: 3
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 2
resources:
limits:
cpu: 1000m
memory: 256Mi
requests:
cpu: 100m
memory: 16Mi

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,17 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: app.example.com
http:
paths:
- backend:
serviceName: podinfo
servicePort: 9898

View File

@@ -1,131 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: trafficsplits.split.smi-spec.io
spec:
additionalPrinterColumns:
- JSONPath: .spec.service
description: The service
name: Service
type: string
group: split.smi-spec.io
names:
kind: TrafficSplit
listKind: TrafficSplitList
plural: trafficsplits
singular: trafficsplit
scope: Namespaced
subresources:
status: {}
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: smi-adapter-istio
namespace: istio-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: smi-adapter-istio
rules:
- apiGroups:
- ""
resources:
- pods
- services
- endpoints
- persistentvolumeclaims
- events
- configmaps
- secrets
verbs:
- '*'
- apiGroups:
- apps
resources:
- deployments
- daemonsets
- replicasets
- statefulsets
verbs:
- '*'
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- get
- create
- apiGroups:
- apps
resourceNames:
- smi-adapter-istio
resources:
- deployments/finalizers
verbs:
- update
- apiGroups:
- split.smi-spec.io
resources:
- '*'
verbs:
- '*'
- apiGroups:
- networking.istio.io
resources:
- '*'
verbs:
- '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: smi-adapter-istio
subjects:
- kind: ServiceAccount
name: smi-adapter-istio
namespace: istio-system
roleRef:
kind: ClusterRole
name: smi-adapter-istio
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: smi-adapter-istio
namespace: istio-system
spec:
replicas: 1
selector:
matchLabels:
name: smi-adapter-istio
template:
metadata:
labels:
name: smi-adapter-istio
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: smi-adapter-istio
containers:
- name: smi-adapter-istio
image: docker.io/stefanprodan/smi-adapter-istio:0.0.2-beta.1
command:
- smi-adapter-istio
imagePullPolicy: Always
env:
- name: WATCH_NAMESPACE
value: ""
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: "smi-adapter-istio"

View File

@@ -1,21 +1,25 @@
apiVersion: v1
name: flagger
version: 0.19.0
appVersion: 0.19.0
version: 1.4.2
appVersion: 1.4.2
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, Gloo or NGINX routing for traffic shifting and Prometheus metrics for canary analysis.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
description: Flagger is a progressive delivery operator for Kubernetes
home: https://flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
- https://github.com/weaveworks/flagger
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- canary
- istio
- appmesh
- linkerd
- gitops
- flagger
- istio
- appmesh
- linkerd
- gloo
- contour
- nginx
- gitops
- canary

201
charts/flagger/LICENSE Normal file
View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 Weaveworks. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,15 +1,18 @@
# Flagger
[Flagger](https://github.com/weaveworks/flagger) is a Kubernetes operator that automates the promotion of
canary deployments using Istio, Linkerd, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators
like HTTP requests success rate, requests average duration and pods health.
Based on the KPIs analysis a canary is promoted or aborted and the analysis result is published to Slack or MS Teams.
[Flagger](https://github.com/weaveworks/flagger) is an operator that automates the release process of applications on Kubernetes.
Flagger can run automated application analysis, testing, promotion and rollback for the following deployment strategies:
* Canary Release (progressive traffic shifting)
* A/B Testing (HTTP headers and cookies traffic routing)
* Blue/Green (traffic switching and mirroring)
Flagger works with service mesh solutions (Istio, Linkerd, AWS App Mesh) and with Kubernetes ingress controllers (NGINX, Skipper, Gloo, Contour, Traefik).
Flagger can be configured to send alerts to various chat platforms such as Slack, Microsoft Teams, Discord and Rocket.
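
As a rough illustration of the A/B testing strategy, a Canary `analysis` block can route traffic by HTTP header instead of weight; a minimal sketch assuming the v1beta1 schema shipped with this chart (the header name and value are illustrative):

```yaml
analysis:
  interval: 1m
  threshold: 5
  # fixed number of checks instead of weight-based traffic shifting
  iterations: 10
  match:
    - headers:
        x-canary:        # hypothetical header used to select test users
          exact: "insider"
```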
## Prerequisites
* Kubernetes >= 1.11
* Prometheus >= 2.6
* Kubernetes >= 1.14
## Installing the Chart
@@ -25,26 +28,70 @@ Install Flagger's custom resource definitions:
$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```
To install the chart with the release name `flagger` for Istio:
To install Flagger for **Istio**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://prometheus:9090
```
To install the chart with the release name `flagger` for Linkerd:
To install Flagger for **Linkerd**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set crd.create=false \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090
```
To install Flagger for **AWS App Mesh**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=appmesh-system \
--set meshProvider=appmesh:v1beta2 \
--set metricsServer=http://appmesh-prometheus:9090
```
To install Flagger and Prometheus for **NGINX** Ingress (requires controller metrics enabled):
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=ingress-nginx \
--set meshProvider=nginx \
--set prometheus.install=true
```
To install Flagger and Prometheus for **Gloo** (requires Gloo discovery enabled):
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=gloo-system \
--set meshProvider=gloo \
--set prometheus.install=true
```
To install Flagger and Prometheus for **Contour**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=projectcontour \
--set meshProvider=contour \
--set ingressClass=contour \
--set prometheus.install=true
```
To install Flagger and Prometheus for **Traefik**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace traefik \
--set prometheus.install=true \
--set meshProvider=traefik
```
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
@@ -52,7 +99,7 @@ The [configuration](#configuration) section lists the parameters that can be con
To uninstall/delete the `flagger` deployment:
```console
$ helm delete --purge flagger
$ helm delete flagger
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
@@ -63,35 +110,53 @@ The following tables lists the configurable parameters of the Flagger chart and
Parameter | Description | Default
--- | --- | ---
`image.repository` | image repository | `weaveworks/flagger`
`image.tag` | image tag | `<VERSION>`
`image.pullPolicy` | image pull policy | `IfNotPresent`
`prometheus.install` | if `true`, installs Prometheus configured to scrape all pods in the cluster including the App Mesh sidecar | `false`
`image.repository` | Image repository | `weaveworks/flagger`
`image.tag` | Image tag | `<VERSION>`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`logLevel` | Log level | `info`
`metricsServer` | Prometheus URL, used when `prometheus.install` is `false` | `http://prometheus.istio-system:9090`
`prometheus.install` | If `true`, installs Prometheus configured to scrape all pods in the cluster | `false`
`prometheus.retention` | Prometheus data retention | `2h`
`selectorLabels` | List of labels that Flagger uses to create pod selectors | `app,name,app.kubernetes.io/name`
`configTracking.enabled` | If `true`, flagger will track changes in Secrets and ConfigMaps referenced in the target deployment | `true`
`eventWebhook` | If set, Flagger will publish events to the given webhook | None
`slack.url` | Slack incoming webhook | None
`slack.channel` | Slack channel | None
`slack.user` | Slack username | `flagger`
`msteams.url` | Microsoft Teams incoming webhook | None
`leaderElection.enabled` | leader election must be enabled when running more than one replica | `false`
`leaderElection.replicaCount` | number of replicas | `1`
`ingressAnnotationsPrefix` | annotations prefix for ingresses | `custom.ingress.kubernetes.io`
`rbac.create` | if `true`, create and use RBAC resources | `true`
`podMonitor.enabled` | If `true`, create a PodMonitor for [monitoring the metrics](https://docs.flagger.app/usage/monitoring#metrics) | `false`
`podMonitor.namespace` | Namespace where the PodMonitor is created | the release namespace
`podMonitor.interval` | Interval at which metrics should be scraped | `15s`
`podMonitor.additionalLabels` | Additional labels to add to the PodMonitor | `{}`
`leaderElection.enabled` | If `true`, Flagger will run in HA mode | `false`
`leaderElection.replicaCount` | Number of replicas | `1`
`serviceAccount.create` | If `true`, Flagger will create a service account | `true`
`serviceAccount.name` | The name of the service account to create or use. If not set and `serviceAccount.create` is `true`, a name is generated using the Flagger fullname | `""`
`serviceAccount.annotations` | Annotations for service account | `{}`
`ingressAnnotationsPrefix` | Annotations prefix for ingresses | `custom.ingress.kubernetes.io`
`includeLabelPrefix` | List of prefixes of labels that are copied when creating primary deployments or daemonsets. Use * to include all | `""`
`rbac.create` | If `true`, create and use RBAC resources | `true`
`rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false`
`crd.create` | if `true`, create Flagger's CRDs | `true`
`resources.requests/cpu` | pod CPU request | `10m`
`resources.requests/memory` | pod memory request | `32Mi`
`resources.limits/cpu` | pod CPU limit | `1000m`
`resources.limits/memory` | pod memory limit | `512Mi`
`affinity` | node/pod affinities | None
`nodeSelector` | node labels for pod assignment | `{}`
`tolerations` | list of node taints to tolerate | `[]`
`crd.create` | If `true`, create Flagger's CRDs (should be enabled for Helm v2 only) | `false`
`resources.requests/cpu` | Pod CPU request | `10m`
`resources.requests/memory` | Pod memory request | `32Mi`
`resources.limits/cpu` | Pod CPU limit | `1000m`
`resources.limits/memory` | Pod memory limit | `512Mi`
`affinity` | Node/pod affinities | None
`nodeSelector` | Node labels for pod assignment | `{}`
`threadiness` | Number of controller workers | `2`
`tolerations` | List of node taints to tolerate | `[]`
`istio.kubeconfig.secretName` | The name of the Kubernetes secret containing the Istio shared control plane kubeconfig | None
`istio.kubeconfig.key` | The name of Kubernetes secret data key that contains the Istio control plane kubeconfig | `kubeconfig`
`ingressAnnotationsPrefix` | Annotations prefix for NGINX ingresses | None
`ingressClass` | Ingress class used for annotating HTTPProxy objects, e.g. `contour` | None
`podPriorityClassName` | PriorityClass name for pod priority configuration | ""
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace istio-system \
--set crd.create=false \
--namespace flagger-system \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general
```

View File

@@ -0,0 +1,889 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
- name: v1alpha3
served: true
storage: false
- name: v1alpha2
served: false
storage: false
- name: v1alpha1
served: false
storage: false
names:
plural: canaries
singular: canary
kind: Canary
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
JSONPath: .status.phase
- name: Weight
type: string
JSONPath: .status.canaryWeight
- name: FailedChecks
type: string
JSONPath: .status.failedChecks
priority: 1
- name: Interval
type: string
JSONPath: .spec.analysis.interval
priority: 1
- name: Mirror
type: boolean
JSONPath: .spec.analysis.mirror
priority: 1
- name: StepWeight
type: string
JSONPath: .spec.analysis.stepWeight
priority: 1
- name: StepWeights
type: string
JSONPath: .spec.analysis.stepWeights
priority: 1
- name: MaxWeight
type: string
JSONPath: .spec.analysis.maxWeight
priority: 1
- name: LastTransitionTime
type: string
JSONPath: .status.lastTransitionTime
validation:
openAPIV3Schema:
properties:
spec:
required:
- targetRef
- service
- analysis
properties:
provider:
description: Traffic management provider
type: string
metricsServer:
description: Prometheus URL
type: string
progressDeadlineSeconds:
description: Deployment progress deadline
type: number
targetRef:
description: Target selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- DaemonSet
- Deployment
- Service
name:
type: string
autoscalerRef:
description: HPA selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- HorizontalPodAutoscaler
name:
type: string
ingressRef:
description: NGINX ingress selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- Ingress
name:
type: string
service:
description: Kubernetes Service spec
type: object
required: ["port"]
properties:
name:
description: Kubernetes service name
type: string
port:
description: Container port number
type: number
portName:
description: Container port name
type: string
targetPort:
description: Container target port name
anyOf:
- type: string
- type: number
portDiscovery:
description: Enable port discovery
type: boolean
timeout:
description: HTTP or gRPC request timeout
type: string
meshName:
description: AppMesh mesh name
type: string
backends:
description: AppMesh backend array
type: array
items:
type: string
hosts:
description: The list of host names for this service
type: array
items:
type: string
delegation:
description: enable behaving as a delegate VirtualService
type: boolean
match:
description: URI match conditions
type: array
items:
type: object
properties:
uri:
type: object
oneOf:
- required: ["exact"]
- required: ["prefix"]
- required: ["suffix"]
- required: ["regex"]
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
suffix:
format: string
type: string
regex:
format: string
type: string
retries:
description: Retry policy for HTTP requests
type: object
properties:
attempts:
description: Number of retries for a given request
format: int32
type: integer
perTryTimeout:
description: Timeout per retry attempt for a given request
type: string
retryOn:
description: Specifies the conditions under which retry takes place
format: string
type: string
rewrite:
description: Rewrite HTTP URIs
type: object
properties:
uri:
format: string
type: string
headers:
description: Headers operations
type: object
properties:
request:
properties:
add:
additionalProperties:
format: string
type: string
type: object
remove:
items:
format: string
type: string
type: array
set:
additionalProperties:
format: string
type: string
type: object
type: object
response:
properties:
add:
additionalProperties:
format: string
type: string
type: object
remove:
items:
format: string
type: string
type: array
set:
additionalProperties:
format: string
type: string
type: object
type: object
gateways:
description: The list of Istio gateway for this virtual service
type: array
items:
type: string
corsPolicy:
description: Istio Cross-Origin Resource Sharing policy (CORS)
type: object
properties:
allowCredentials:
type: boolean
allowHeaders:
items:
format: string
type: string
type: array
allowMethods:
description: List of HTTP methods allowed to access the resource
items:
format: string
type: string
type: array
allowOrigin:
description: The list of origins that are allowed to perform
CORS requests.
items:
format: string
type: string
type: array
allowOrigins:
description: String patterns that match allowed origins
type: array
items:
type: object
oneOf:
- required:
- exact
- required:
- prefix
- required:
- regex
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
regex:
format: string
type: string
exposeHeaders:
items:
format: string
type: string
type: array
maxAge:
type: string
trafficPolicy:
description: Istio traffic policy
type: object
properties:
connectionPool:
properties:
http:
description: HTTP connection pool settings.
type: object
properties:
h2UpgradePolicy:
description: Specify if http1.1 connection should
be upgraded to http2 for the associated destination.
enum:
- DEFAULT
- DO_NOT_UPGRADE
- UPGRADE
type: string
http1MaxPendingRequests:
description: Maximum number of pending HTTP requests
to a destination.
format: int32
type: integer
http2MaxRequests:
description: Maximum number of requests to a backend.
format: int32
type: integer
idleTimeout:
description: The idle timeout for upstream connection
pool connections.
type: string
maxRequestsPerConnection:
description: Maximum number of requests per connection
to a backend.
format: int32
type: integer
maxRetries:
format: int32
type: integer
loadBalancer:
description: Settings controlling the load balancer algorithms.
type: object
oneOf:
- required:
- simple
- properties:
consistentHash:
oneOf:
- required:
- httpHeaderName
- required:
- httpCookie
- required:
- useSourceIp
- required:
- httpQueryParameterName
required:
- consistentHash
properties:
consistentHash:
properties:
httpCookie:
description: Hash based on HTTP cookie.
properties:
name:
description: Name of the cookie.
format: string
type: string
path:
description: Path to set for the cookie.
format: string
type: string
ttl:
description: Lifetime of the cookie.
type: string
type: object
httpHeaderName:
description: Hash based on a specific HTTP header.
format: string
type: string
httpQueryParameterName:
description: Hash based on a specific HTTP query parameter.
format: string
type: string
minimumRingSize:
type: integer
useSourceIp:
description: Hash based on the source IP address.
type: boolean
type: object
localityLbSetting:
properties:
distribute:
description: 'Optional: only one of distribute or
failover can be set.'
items:
properties:
from:
description: Originating locality, '/' separated,
e.g.
format: string
type: string
to:
additionalProperties:
type: integer
description: Map of upstream localities to traffic
distribution weights.
type: object
type: object
type: array
enabled:
description: enable locality load balancing, this
is DestinationRule-level and will override mesh
wide settings in entirety.
type: boolean
failover:
description: 'Optional: only failover or distribute
can be set.'
items:
properties:
from:
description: Originating region.
format: string
type: string
to:
format: string
type: string
type: object
type: array
type: object
simple:
enum:
- ROUND_ROBIN
- LEAST_CONN
- RANDOM
- PASSTHROUGH
type: string
outlierDetection:
description: Settings controlling eviction of unhealthy hosts from the load balancing pool.
type: object
properties:
baseEjectionTime:
description: Minimum ejection duration.
type: string
consecutive5xxErrors:
description: Number of 5xx errors before a host is ejected
from the connection pool.
type: integer
consecutiveErrors:
format: int32
type: integer
consecutiveGatewayErrors:
description: Number of gateway errors before a host is
ejected from the connection pool.
format: int32
type: integer
interval:
description: Time interval between ejection sweep analysis.
type: string
maxEjectionPercent:
format: int32
type: integer
minHealthPercent:
format: int32
type: integer
tls:
description: Istio TLS related settings for connections to the upstream service
type: object
properties:
caCertificates:
format: string
type: string
clientCertificate:
description: REQUIRED if mode is `MUTUAL`.
format: string
type: string
mode:
enum:
- DISABLE
- SIMPLE
- MUTUAL
- ISTIO_MUTUAL
type: string
privateKey:
description: REQUIRED if mode is `MUTUAL`.
format: string
type: string
sni:
description: SNI string to present to the server
during TLS handshake.
format: string
type: string
subjectAltNames:
items:
format: string
type: string
type: array
apex:
description: Metadata to add to the apex service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
primary:
description: Metadata to add to the primary service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
canary:
description: Metadata to add to the canary service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
skipAnalysis:
description: Skip analysis and promote canary
type: boolean
revertOnDeletion:
description: Revert mutated resources to original spec on deletion
type: boolean
analysis:
description: Canary analysis for this canary
type: object
oneOf:
- required: ["interval", "threshold", "iterations"]
- required: ["interval", "threshold", "stepWeight"]
- required: ["interval", "threshold", "stepWeights"]
properties:
interval:
description: Schedule interval for this canary
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic weight routed to canary
type: number
stepWeight:
description: Incremental traffic step weight for the analysis phase
type: number
stepWeights:
description: Incremental traffic step weights for the analysis phase
type: array
items:
type: number
stepWeightPromotion:
description: Incremental traffic step weight for the promotion phase
type: number
mirror:
description: Mirror traffic to canary
type: boolean
mirrorWeight:
description: Weight of traffic to be mirrored
type: number
match:
description: A/B testing match conditions
type: array
items:
type: object
properties:
headers:
type: object
additionalProperties:
oneOf:
- required: ["exact"]
- required: ["prefix"]
- required: ["suffix"]
- required: ["regex"]
type: object
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
suffix:
format: string
type: string
regex:
description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax)
format: string
type: string
sourceLabels:
description: Applicable only when the 'mesh' gateway is included in the service.gateways list
type: object
additionalProperties:
format: string
type: string
metrics:
description: Metric check list for this canary
type: array
items:
type: object
required: ["name"]
properties:
name:
description: Name of the metric
type: string
interval:
description: Interval of the query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max value accepted for this metric
type: number
thresholdRange:
description: Range accepted for this metric
type: object
properties:
min:
description: Min value accepted for this metric
type: number
max:
description: Max value accepted for this metric
type: number
query:
description: Prometheus query
type: string
templateRef:
description: Metric template reference
type: object
required: ["name"]
properties:
name:
description: Name of this metric template
type: string
namespace:
description: Namespace of this metric template
type: string
webhooks:
description: Webhook list for this canary
type: array
items:
type: object
required: ["name", "url"]
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
- confirm-rollout
- pre-rollout
- rollout
- confirm-promotion
- post-rollout
- event
- rollback
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
metadata:
description: Metadata (key-value pairs) for this webhook
type: object
additionalProperties:
type: string
status:
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Promoting
- Finalising
- Succeeded
- Failed
- Terminating
- Terminated
canaryWeight:
description: Traffic weight routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
items:
type: object
required: ["type", "status", "reason"]
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: metrictemplates.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
- name: v1alpha1
served: true
storage: false
names:
plural: metrictemplates
singular: metrictemplate
kind: MetricTemplate
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Provider
type: string
JSONPath: .spec.provider.type
validation:
openAPIV3Schema:
properties:
spec:
required:
- provider
- query
properties:
provider:
description: Provider of this metric template
type: object
required:
- type
properties:
type:
description: Type of this provider
type: string
enum:
- prometheus
- influxdb
- datadog
- cloudwatch
- newrelic
address:
description: API address of this provider
type: string
secretRef:
description: Kubernetes secret reference containing the provider credentials
type: object
required:
- name
properties:
name:
description: Name of the Kubernetes secret
type: string
region:
description: Region of the provider
type: string
query:
description: Query of this metric template
type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: alertproviders.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
names:
plural: alertproviders
singular: alertprovider
kind: AlertProvider
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Type
type: string
JSONPath: .spec.type
validation:
openAPIV3Schema:
properties:
spec:
oneOf:
- required:
- type
- address
- required:
- type
- secretRef
properties:
type:
description: Type of this provider
type: string
enum:
- slack
- msteams
- discord
- rocket
address:
description: Hook URL address of this provider
type: string
secretRef:
description: Kubernetes secret reference containing the provider address
type: object
required:
- name
properties:
name:
description: Name of the Kubernetes secret
type: string
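
Building on the `metrictemplates.flagger.app` schema above, a minimal MetricTemplate sketch (the name, namespace and Prometheus address are assumptions; the query is adapted from the latency check shown earlier in this compare):

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: latency
  namespace: test
spec:
  provider:
    type: prometheus
    address: http://prometheus:9090   # assumed in-cluster Prometheus address
  query: |
    histogram_quantile(0.99,
      sum(
        rate(
          http_request_duration_seconds_bucket{
            kubernetes_namespace="test"
          }[1m]
        )
      ) by (le)
    )
```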

View File

@@ -3,6 +3,10 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "flagger.serviceAccountName" . }}
annotations:
{{- if .Values.serviceAccount.annotations }}
{{ toYaml .Values.serviceAccount.annotations | indent 4 }}
{{- end }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}

View File

@@ -1,318 +1,6 @@
{{- if .Values.crd.create }}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1alpha3
versions:
- name: v1alpha3
served: true
storage: true
- name: v1alpha2
served: true
storage: false
- name: v1alpha1
served: true
storage: false
names:
plural: canaries
singular: canary
kind: Canary
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
JSONPath: .status.phase
- name: Weight
type: string
JSONPath: .status.canaryWeight
- name: FailedChecks
type: string
JSONPath: .status.failedChecks
priority: 1
- name: Interval
type: string
JSONPath: .spec.canaryAnalysis.interval
priority: 1
- name: Mirror
type: boolean
JSONPath: .spec.canaryAnalysis.mirror
priority: 1
- name: StepWeight
type: string
JSONPath: .spec.canaryAnalysis.stepWeight
priority: 1
- name: MaxWeight
type: string
JSONPath: .spec.canaryAnalysis.maxWeight
priority: 1
- name: LastTransitionTime
type: string
JSONPath: .status.lastTransitionTime
validation:
openAPIV3Schema:
properties:
spec:
required:
- targetRef
- service
- canaryAnalysis
properties:
provider:
description: Traffic management provider
type: string
progressDeadlineSeconds:
description: Deployment progress deadline
type: number
targetRef:
description: Deployment selector
type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
autoscalerRef:
description: HPA selector
anyOf:
- type: string
- type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
ingressRef:
description: NGINX ingress selector
anyOf:
- type: string
- type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
service:
type: object
required: ['port']
properties:
port:
description: Container port number
type: number
portName:
description: Container port name
type: string
targetPort:
description: Container target port name
anyOf:
- type: string
- type: number
portDiscovery:
description: Enable port discovery
type: boolean
meshName:
description: AppMesh mesh name
type: string
backends:
description: AppMesh backend array
anyOf:
- type: string
- type: array
timeout:
description: Istio HTTP or gRPC request timeout
type: string
trafficPolicy:
description: Istio traffic policy
anyOf:
- type: string
- type: object
match:
description: Istio URL match conditions
anyOf:
- type: string
- type: array
rewrite:
description: Istio URL rewrite
anyOf:
- type: string
- type: object
headers:
description: Istio headers operations
anyOf:
- type: string
- type: object
corsPolicy:
description: Istio CORS policy
anyOf:
- type: string
- type: object
gateways:
description: Istio gateways list
anyOf:
- type: string
- type: array
hosts:
description: Istio hosts list
anyOf:
- type: string
- type: array
skipAnalysis:
type: boolean
canaryAnalysis:
properties:
interval:
description: Canary schedule interval
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic percentage routed to canary
type: number
stepWeight:
description: Canary incremental traffic percentage step
type: number
mirror:
description: Mirror traffic to canary before shifting
type: boolean
match:
description: A/B testing match conditions
anyOf:
- type: string
- type: array
metrics:
description: Prometheus query list for this canary
type: array
properties:
items:
type: object
required: ['name', 'threshold']
properties:
name:
description: Name of the Prometheus metric
type: string
interval:
description: Interval of the promql query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max scalar value accepted for this metric
type: number
query:
description: Prometheus query
type: string
webhooks:
description: Webhook list for this canary
type: array
properties:
items:
type: object
required: ["name", "url"]
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
- confirm-rollout
- pre-rollout
- rollout
- confirm-promotion
- post-rollout
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
metadata:
description: Metadata (key-value pairs) for this webhook
anyOf:
- type: string
- type: object
status:
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Promoting
- Finalising
- Succeeded
- Failed
canaryWeight:
description: Traffic weight percentage routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
properties:
items:
type: object
required: ['type', 'status', 'reason']
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string
{{- end }}
{{- if .Values.crd.create -}}
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" -}}
{{ $.Files.Get $path }}
---
{{- end -}}
{{- end -}}

View File

@@ -9,8 +9,10 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
replicas: {{ .Values.leaderElection.replicaCount }}
{{- if eq .Values.leaderElection.enabled false }}
strategy:
type: Recreate
{{- end }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
@@ -40,11 +42,26 @@ spec:
imagePullSecrets:
- name: {{ .Values.image.pullSecret }}
{{- end }}
volumes:
{{- if .Values.istio.kubeconfig.secretName }}
- name: kubeconfig
secret:
secretName: "{{ .Values.istio.kubeconfig.secretName }}"
{{- end }}
{{- if .Values.podPriorityClassName }}
priorityClassName: {{ .Values.podPriorityClassName }}
{{- end }}
containers:
- name: flagger
{{- if .Values.securityContext.enabled }}
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001
{{ toYaml .Values.securityContext.context | indent 12 }}
{{- end }}
volumeMounts:
{{- if .Values.istio.kubeconfig.secretName }}
- name: kubeconfig
mountPath: "/tmp/istio-host"
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
@@ -52,7 +69,7 @@ spec:
containerPort: 8080
command:
- ./flagger
- -log-level=info
- -log-level={{ .Values.logLevel }}
{{- if .Values.meshProvider }}
- -mesh-provider={{ .Values.meshProvider }}
{{- end }}
@@ -61,12 +78,22 @@ spec:
{{- else }}
- -metrics-server={{ .Values.metricsServer }}
{{- end }}
{{- if .Values.selectorLabels }}
- -selector-labels={{ .Values.selectorLabels }}
{{- end }}
{{- if .Values.configTracking }}
- -enable-config-tracking={{ .Values.configTracking.enabled }}
{{- end }}
{{- if .Values.namespace }}
- -namespace={{ .Values.namespace }}
{{- end }}
{{- if .Values.slack.url }}
- -slack-url={{ .Values.slack.url }}
{{- end }}
{{- if .Values.slack.user }}
- -slack-user={{ .Values.slack.user }}
{{- end }}
{{- if .Values.slack.channel }}
- -slack-channel={{ .Values.slack.channel }}
{{- end }}
{{- if .Values.msteams.url }}
@@ -79,6 +106,27 @@ spec:
{{- if .Values.ingressAnnotationsPrefix }}
- -ingress-annotations-prefix={{ .Values.ingressAnnotationsPrefix }}
{{- end }}
{{- if .Values.includeLabelPrefix }}
- -include-label-prefix={{ .Values.includeLabelPrefix }}
{{- end }}
{{- if .Values.ingressClass }}
- -ingress-class={{ .Values.ingressClass }}
{{- end }}
{{- if .Values.eventWebhook }}
- -event-webhook={{ .Values.eventWebhook }}
{{- end }}
{{- if .Values.kubeconfigQPS }}
- -kubeconfig-qps={{ .Values.kubeconfigQPS }}
{{- end }}
{{- if .Values.kubeconfigBurst }}
- -kubeconfig-burst={{ .Values.kubeconfigBurst }}
{{- end }}
{{- if .Values.istio.kubeconfig.secretName }}
- -kubeconfig-service-mesh=/tmp/istio-host/{{ .Values.istio.kubeconfig.key }}
{{- end }}
{{- if .Values.threadiness }}
- -threadiness={{ .Values.threadiness }}
{{- end }}
livenessProbe:
exec:
command:
@@ -99,6 +147,10 @@ spec:
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
{{- if .Values.env }}
env:
{{ toYaml .Values.env | indent 12 }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}

View File

@@ -0,0 +1,27 @@
{{- if .Values.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- range $k, $v := .Values.podMonitor.additionalLabels }}
{{ $k }}: {{ $v | quote }}
{{- end }}
name: {{ include "flagger.fullname" . }}
namespace: {{ .Values.podMonitor.namespace | default .Release.Namespace }}
spec:
podMetricsEndpoints:
- interval: {{ .Values.podMonitor.interval }}
path: /metrics
port: http
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
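
To have the chart render the PodMonitor above, the corresponding values need to be enabled; a minimal sketch (the `release` label is a hypothetical selector for a Prometheus Operator instance):

```yaml
podMonitor:
  enabled: true
  namespace: ""      # defaults to the release namespace
  interval: 15s
  additionalLabels:
    release: prometheus-operator
```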

View File

@@ -133,38 +133,22 @@ data:
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# Scrape config for nodes
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# scrape config for cAdvisor
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
@@ -174,6 +158,14 @@ data:
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [__name__]
regex: (container|machine)_(cpu|memory|network|fs)_(.+)
action: keep
- source_labels: [__name__]
regex: container_memory_failures_total
action: drop
# scrape config for pods
- job_name: kubernetes-pods
@@ -238,10 +230,10 @@ spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}-prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.12.0"
image: {{ .Values.prometheus.image }}
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=2h'
- '--storage.tsdb.retention={{ .Values.prometheus.retention }}'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090

View File

@@ -14,69 +14,176 @@ rules:
resources:
- events
- configmaps
- configmaps/finalizers
- secrets
- secrets/finalizers
- services
verbs: ["*"]
- services/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
- daemonsets
- daemonsets/finalizers
- deployments
verbs: ["*"]
- deployments/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs: ["*"]
- horizontalpodautoscalers/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- "extensions"
- extensions
- networking.k8s.io
resources:
- ingresses
- ingresses/status
verbs: ["*"]
- ingresses/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
verbs: ["*"]
- metrictemplates
- metrictemplates/status
- alertproviders
- alertproviders/status
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/status
- virtualservices/finalizers
- destinationrules
- destinationrules/status
verbs: ["*"]
- destinationrules/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- appmesh.k8s.aws
resources:
- meshes
- meshes/status
- virtualnodes
- virtualnodes/status
- virtualnodes/finalizers
- virtualrouters
- virtualrouters/finalizers
- virtualservices
- virtualservices/status
verbs: ["*"]
- virtualservices/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- split.smi-spec.io
resources:
- trafficsplits
verbs: ["*"]
- trafficsplits/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- specs.smi-spec.io
resources:
- httproutegroups
- httproutegroups/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gloo.solo.io
resources:
- settings
- upstreams
- upstreams/finalizers
- upstreamgroups
- proxies
- virtualservices
verbs: ["*"]
- upstreamgroups/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gateway.solo.io
- projectcontour.io
resources:
- virtualservices
- gateways
verbs: ["*"]
- httpproxies
- httpproxies/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- traefik.containo.us
resources:
- traefikservices
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- nonResourceURLs:
- /version
verbs:

View File

@@ -2,22 +2,54 @@
image:
repository: weaveworks/flagger
tag: 0.19.0
tag: 1.4.2
pullPolicy: IfNotPresent
pullSecret:
# accepted values are debug, info, warning, error (defaults to info)
logLevel: info
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
appmesh.k8s.aws/sidecarInjectorWebhook: disabled
# priority class name for pod priority configuration
podPriorityClassName: ""
metricsServer: "http://prometheus:9090"
# accepted values are istio, appmesh, nginx or supergloo:mesh.namespace (defaults to istio)
# accepted values are kubernetes, istio, linkerd, appmesh, contour, nginx, gloo, skipper, traefik
meshProvider: ""
# single namespace restriction
namespace: ""
# list of pod labels that Flagger uses to create pod selectors
# defaults to: app,name,app.kubernetes.io/name
selectorLabels: ""
# when enabled, flagger will track changes in Secrets and ConfigMaps referenced in the target deployment (enabled by default)
configTracking:
enabled: true
# annotations prefix for NGINX ingresses
ingressAnnotationsPrefix: ""
# ingress class used for annotating HTTPProxy objects
ingressClass: ""
# when enabled, it will add a security context for the flagger pod. You may
# need to disable this if you are running flagger on OpenShift
securityContext:
enabled: true
context:
readOnlyRootFilesystem: true
runAsUser: 10001
# when specified, flagger will publish events to the provided webhook
eventWebhook: ""
slack:
user: flagger
channel:
@@ -28,6 +60,30 @@ msteams:
# MS Teams incoming webhook URL
url:
podMonitor:
enabled: false
namespace:
interval: 15s
additionalLabels: {}
#env:
#- name: SLACK_URL
# valueFrom:
# secretKeyRef:
# name: slack
# key: url
#- name: MSTEAMS_URL
# valueFrom:
# secretKeyRef:
# name: msteams
# key: url
#- name: EVENT_WEBHOOK_URL
# valueFrom:
# secretKeyRef:
# name: eventwebhook
# key: url
env: []
leaderElection:
enabled: false
replicaCount: 1
@@ -37,6 +93,8 @@ serviceAccount:
create: true
# serviceAccount.name: The name of the service account to create or use
name: ""
# serviceAccount.annotations: Annotations for service account
annotations: {}
rbac:
# rbac.create: `true` if rbac resources should be created
@@ -46,7 +104,7 @@ rbac:
crd:
# crd.create: `true` if custom resource definitions should be created
create: true
create: false
nameOverride: ""
fullnameOverride: ""
@@ -64,5 +122,19 @@ nodeSelector: {}
tolerations: []
prometheus:
# to be used with AppMesh or nginx ingress
# to be used with ingress controllers
install: false
image: docker.io/prom/prometheus:v2.23.0
retention: 2h
kubeconfigQPS: ""
kubeconfigBurst: ""
# Istio multi-cluster service mesh (shared control plane single-network)
# https://istio.io/docs/setup/install/multicluster/shared-vpn/
istio:
kubeconfig:
# istio.kubeconfig.secretName: The name of the secret containing the Istio control plane kubeconfig
secretName: ""
# istio.kubeconfig.key: The name of secret data key that contains the Istio control plane kubeconfig
key: "kubeconfig"

View File

@@ -1,13 +1,20 @@
apiVersion: v1
name: grafana
version: 1.3.0
appVersion: 6.2.5
version: 1.5.0
appVersion: 7.2.0
description: Grafana dashboards for monitoring Flagger canary deployments
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
home: https://flagger.app
sources:
- https://github.com/weaveworks/flagger
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- flagger
- grafana
- canary
- istio
- appmesh

View File

@@ -1,13 +1,12 @@
# Flagger Grafana
Grafana dashboards for monitoring progressive deployments powered by Istio, Prometheus and Flagger.
Grafana dashboards for monitoring progressive deployments powered by Flagger and Prometheus.
![flagger-grafana](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/screens/grafana-canary-analysis.png)
## Prerequisites
* Kubernetes >= 1.11
* Istio >= 1.0
* Prometheus >= 2.6
## Installing the Chart
@@ -18,14 +17,20 @@ Add Flagger Helm repository:
helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger-grafana`:
To install the chart for Istio run:
```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus:9090 \
--set user=admin \
--set password=admin
--set url=http://prometheus:9090
```
To install the chart for AWS App Mesh run:
```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=appmesh-system \
--set url=http://appmesh-prometheus:9090
```
The command deploys Grafana on the Kubernetes cluster in the default namespace.
@@ -56,10 +61,7 @@ Parameter | Description | Default
`affinity` | node/pod affinities | `node`
`nodeSelector` | node labels for pod assignment | `{}`
`service.type` | type of service | `ClusterIP`
`url` | Prometheus URL, used when Weave Cloud token is empty | `http://prometheus:9090`
`token` | Weave Cloud token | `none`
`user` | Grafana admin username | `admin`
`password` | Grafana admin password | `admin`
`url` | Prometheus URL | `http://prometheus:9090`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
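A minimal sketch, with the release name and Prometheus address as illustrative values:

```console
helm upgrade -i flagger-grafana flagger/grafana \
  --set url=http://prometheus:9090
```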

View File

@@ -602,11 +602,11 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod=~\"$primary.*\", container!~\"POD|istio-proxy\"}[1m])) by (pod)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
"legendFormat": "{{ pod_name }}",
"legendFormat": "{{ pod }}",
"refId": "B"
}
],
@@ -692,11 +692,11 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod=~\"$canary.*\", pod!~\"$primary.*\", container!~\"POD|istio-proxy\"}[1m])) by (pod)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
"legendFormat": "{{ pod_name }}",
"legendFormat": "{{ pod }}",
"refId": "B"
}
],
@@ -782,12 +782,12 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod=~\"$primary.*\", container!~\"POD|istio-proxy\"}) by (pod)",
"format": "time_series",
"hide": false,
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{ pod_name }}",
"legendFormat": "{{ pod }}",
"refId": "B"
}
],
@@ -874,12 +874,12 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod=~\"$canary.*\", pod!~\"$primary.*\", container!~\"POD|istio-proxy\"}) by (pod)",
"format": "time_series",
"hide": false,
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{ pod_name }}",
"legendFormat": "{{ pod }}",
"refId": "B"
}
],
@@ -975,14 +975,14 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m])) ",
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod=~\"$primary.*\"}[1m])) ",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "received",
"refId": "A"
},
{
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m]))",
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod=~\"$primary.*\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "transmited",
@@ -1081,14 +1081,14 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m])) ",
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod=~\"$canary.*\",pod!~\"$primary.*\"}[1m])) ",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "received",
"refId": "A"
},
{
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m]))",
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod=~\"$canary.*\",pod!~\"$primary.*\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "transmited",

File diff suppressed because it is too large

View File

@@ -403,7 +403,7 @@
"steppedLine": false,
"targets": [
{
"expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -411,7 +411,7 @@
"refId": "A"
},
{
"expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@@ -419,7 +419,7 @@
"refId": "B"
},
{
"expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@@ -509,7 +509,7 @@
"steppedLine": false,
"targets": [
{
"expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -517,7 +517,7 @@
"refId": "A"
},
{
"expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@@ -525,7 +525,7 @@
"refId": "B"
},
{
"expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@@ -630,11 +630,11 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod=~\"$primary.*\", container!~\"POD|istio-proxy\"}[1m])) by (pod)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
"legendFormat": "{{ pod_name }}",
"legendFormat": "{{ pod }}",
"refId": "B"
}
],
@@ -720,11 +720,11 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
"expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod=~\"$canary.*\", pod!~\"$primary.*\", container!~\"POD|istio-proxy\"}[1m])) by (pod)",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
"legendFormat": "{{ pod_name }}",
"legendFormat": "{{ pod }}",
"refId": "B"
}
],
@@ -810,12 +810,12 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod=~\"$primary.*\", container!~\"POD|istio-proxy\"}) by (pod)",
"format": "time_series",
"hide": false,
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{ pod_name }}",
"legendFormat": "{{ pod }}",
"refId": "B"
}
],
@@ -902,12 +902,12 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
"expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod=~\"$canary.*\", pod!~\"$primary.*\", container!~\"POD|istio-proxy\"}) by (pod)",
"format": "time_series",
"hide": false,
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{ pod_name }}",
"legendFormat": "{{ pod }}",
"refId": "B"
}
],
@@ -1003,14 +1003,14 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m])) ",
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod=~\"$primary.*\"}[1m])) ",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "received",
"refId": "A"
},
{
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m]))",
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod=~\"$primary.*\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "transmited",
@@ -1109,14 +1109,14 @@
"steppedLine": false,
"targets": [
{
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m])) ",
"expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod=~\"$canary.*\",pod!~\"$primary.*\"}[1m])) ",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "received",
"refId": "A"
},
{
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m]))",
"expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod=~\"$canary.*\",pod!~\"$primary.*\"}[1m]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "transmited",

View File

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "grafana.fullname" . }}

View File

@@ -6,7 +6,7 @@ replicaCount: 1
image:
repository: grafana/grafana
tag: 6.2.5
tag: 7.3.4
pullPolicy: IfNotPresent
podAnnotations: {}
@@ -32,7 +32,7 @@ affinity: {}
user: admin
password:
# Istio Prometheus instance
# Prometheus instance
url: http://prometheus:9090
# Weave Cloud instance token

View File

@@ -1,12 +1,12 @@
apiVersion: v1
name: loadtester
version: 0.9.0
appVersion: 0.9.0
version: 0.18.0
appVersion: 0.18.0
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger's load testing services based on rakyll/hey and bojand/ghz that generates traffic during canary analysis when configured as a webhook.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
- https://github.com/weaveworks/flagger
maintainers:
@@ -14,8 +14,10 @@ maintainers:
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- canary
- flagger
- istio
- appmesh
- linkerd
- gloo
- gitops
- load testing

View File

@@ -1,8 +1,9 @@
# Flagger load testing service
[Flagger's](https://github.com/weaveworks/flagger) load testing service is based on
[rakyll/hey](https://github.com/rakyll/hey)
and can be used to generates traffic during canary analysis when configured as a webhook.
[rakyll/hey](https://github.com/rakyll/hey) and
[bojand/ghz](https://github.com/bojand/ghz).
It can be used to generate HTTP and gRPC traffic during canary analysis when configured as a webhook.
## Prerequisites
@@ -22,9 +23,10 @@ To install the chart with the release name `flagger-loadtester`:
helm upgrade -i flagger-loadtester flagger/loadtester
```
The command deploys Grafana on the Kubernetes cluster in the default namespace.
The command deploys the load tester on the Kubernetes cluster in the default namespace.
> **Tip**: Note that the namespace where you deploy the load tester should have the Istio or App Mesh sidecar injection enabled
> **Tip**: Note that the namespace where you deploy the load tester should
> have the Istio, App Mesh or Linkerd sidecar injection enabled
The [configuration](#configuration) section lists the parameters that can be configured during installation.
@@ -33,7 +35,7 @@ The [configuration](#configuration) section lists the parameters that can be con
To uninstall/delete the `flagger-loadtester` deployment:
```console
helm delete --purge flagger-loadtester
helm delete flagger-loadtester
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
@@ -58,13 +60,24 @@ Parameter | Description | Default
`service.port` | ClusterIP port | `80`
`cmd.timeout` | Command execution timeout | `1h`
`logLevel` | Log level can be debug, info, warning, error or panic | `info`
`meshName` | AWS App Mesh name | `none`
`backends` | AWS App Mesh virtual services | `none`
`appmesh.enabled` | Create AWS App Mesh v1beta2 virtual node | `false`
`appmesh.backends` | AWS App Mesh virtual services | `none`
`istio.enabled` | Create Istio virtual service | `false`
`istio.host` | Loadtester hostname | `flagger-loadtester.flagger`
`istio.gateway.enabled` | Create Istio gateway in namespace | `false`
`istio.tls.enabled` | Enable TLS in gateway ( TLS secrets should be in namespace ) | `false`
`istio.tls.httpsRedirect` | Redirect traffic to TLS port | `false`
`podPriorityClassName` | PriorityClass name for pod priority configuration | ""
`securityContext.enabled` | Add securityContext to container | ""
`securityContext.context` | securityContext to add | ""
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,
```console
helm install flagger/loadtester --name flagger-loadtester
helm upgrade -i flagger-loadtester flagger/loadtester \
--set "appmesh.enabled=true" \
--set "appmesh.backends[0]=podinfo" \
--set "appmesh.backends[1]=podinfo-canary"
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
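A minimal sketch, assuming the overrides live in a local `values.yaml`:

```console
helm upgrade -i flagger-loadtester flagger/loadtester -f values.yaml
```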

View File

@@ -0,0 +1,27 @@
{{- if .Values.appmesh.enabled }}
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
podSelector:
matchLabels:
app: {{ include "loadtester.name" . }}
logging:
accessLog:
file:
path: /dev/stdout
{{- if .Values.appmesh.backends }}
backends:
{{- range .Values.appmesh.backends }}
- virtualService:
virtualServiceRef:
name: {{ . }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -27,8 +27,15 @@ spec:
{{- else if .Values.rbac.create }}
serviceAccountName: {{ include "loadtester.fullname" . }}
{{- end }}
{{- if .Values.podPriorityClassName }}
priorityClassName: {{ .Values.podPriorityClassName }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
{{- if .Values.securityContext.enabled }}
securityContext:
{{ toYaml .Values.securityContext.context | indent 12 }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:

View File

@@ -0,0 +1,30 @@
{{- if and (.Values.istio.enabled) (.Values.istio.gateway.enabled) }}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: {{ include "loadtester.fullname" . }}
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http-default
protocol: HTTP
hosts:
- {{ .Values.istio.host }}
{{- if .Values.istio.tls.enabled }}
- port:
number: 443
name: https-default
protocol: HTTPS
tls:
httpsRedirect: {{ .Values.istio.tls.httpsRedirect }}
mode: SIMPLE
serverCertificate: "sds"
privateKey: "sds"
credentialName: {{ include "loadtester.fullname" . }}
hosts:
- {{ .Values.istio.host }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,17 @@
{{- if .Values.istio.enabled }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ include "loadtester.fullname" . }}
spec:
gateways:
- {{ include "loadtester.fullname" . }}
hosts:
- {{ .Values.istio.host }}
http:
- route:
- destination:
host: {{ include "loadtester.fullname" . }}
port:
number: {{ .Values.service.port }}
{{- end }}

View File

@@ -2,13 +2,15 @@ replicaCount: 1
image:
repository: weaveworks/flagger-loadtester
tag: 0.9.0
tag: 0.18.0
pullPolicy: IfNotPresent
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
podPriorityClassName: ""
logLevel: info
cmd:
timeout: 1h
@@ -47,8 +49,33 @@ rbac:
# name of an existing service account to use - if not creating rbac resources
serviceAccountName: ""
# App Mesh virtual node settings
# App Mesh virtual node settings (to be used for AppMesh v1beta1)
meshName: ""
#backends:
# - app1.namespace
# - app2.namespace
# App Mesh virtual node settings (to be used for AppMesh v1beta2)
appmesh:
enabled: false
backends:
- podinfo
- podinfo-canary
# Istio virtual service and gateway settings. TLS secrets should exist in the namespace before enabling this (secret name format: loadtester.fullname)
istio:
enabled: false
host: flagger-loadtester.flagger
gateway:
enabled: false
tls:
enabled: false
httpsRedirect: false
# when enabled, it will add a security context for the loadtester pod
securityContext:
enabled: false
context:
readOnlyRootFilesystem: true
runAsUser: 100
runAsGroup: 101
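For illustration, a minimal sketch of enabling the Istio virtual service and gateway for the load tester via Helm (the host name is an example value; the keys come from the values above):

```console
helm upgrade -i flagger-loadtester flagger/loadtester \
  --set istio.enabled=true \
  --set istio.gateway.enabled=true \
  --set istio.host=flagger-loadtester.example.com
```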

View File

@@ -1,12 +1,14 @@
apiVersion: v1
version: 3.1.0
appVersion: 3.1.0
version: 5.0.0
appVersion: 5.0.0
name: podinfo
engine: gotpl
description: Flagger canary deployment demo chart
home: https://flagger.app
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
description: Flagger canary deployment demo application
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
- https://github.com/weaveworks/flagger
- https://github.com/stefanprodan/podinfo
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com

View File

@@ -1,5 +1,5 @@
{{- if .Values.canary.enabled }}
apiVersion: flagger.app/v1alpha3
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: {{ template "podinfo.fullname" . }}
@@ -13,7 +13,6 @@ spec:
apiVersion: apps/v1
kind: Deployment
name: {{ template "podinfo.fullname" . }}
progressDeadlineSeconds: 60
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
@@ -29,7 +28,7 @@ spec:
trafficPolicy:
tls:
mode: {{ .Values.canary.istioTLS }}
canaryAnalysis:
analysis:
interval: {{ .Values.canary.analysis.interval }}
threshold: {{ .Values.canary.analysis.threshold }}
maxWeight: {{ .Values.canary.analysis.maxWeight }}
@@ -48,8 +47,8 @@ spec:
url: {{ .Values.canary.helmtest.url }}
timeout: 3m
metadata:
type: "helm"
cmd: "test {{ .Release.Name }} --cleanup"
type: "helmv3"
cmd: "test {{ .Release.Name }} -n {{ .Release.Namespace }}"
{{- end }}
{{- if .Values.canary.loadtest.enabled }}
- name: load-test-get
@@ -57,10 +56,5 @@ spec:
timeout: 5s
metadata:
cmd: "hey -z 1m -q 5 -c 2 http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}"
- name: load-test-post
url: {{ .Values.canary.loadtest.url }}
timeout: 5s
metadata:
cmd: "hey -z 1m -q 5 -c 2 -m POST -d '{\"test\": true}' http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}/echo"
{{- end }}
{{- end }}

View File

@@ -41,6 +41,8 @@ spec:
- --backend-url={{ . }}
{{- end }}
env:
- name: PODINFO_UI_COLOR
value: "#34577c"
{{- if .Values.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.message }}

View File

@@ -10,7 +10,7 @@ metadata:
heritage: {{ .Release.Service }}
spec:
scaleTargetRef:
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
name: {{ template "podinfo.fullname" . }}
minReplicas: {{ .Values.hpa.minReplicas }}
@@ -28,10 +28,4 @@ spec:
name: memory
targetAverageValue: {{ .Values.hpa.memory }}
{{- end }}
{{- if .Values.hpa.requests }}
- type: Pod
pods:
metricName: http_requests
targetAverageValue: {{ .Values.hpa.requests }}
{{- end }}
{{- end }}

View File

@@ -1,7 +1,7 @@
# Default values for podinfo.
image:
repository: stefanprodan/podinfo
tag: 3.1.0
repository: ghcr.io/stefanprodan/podinfo
tag: 5.0.0
pullPolicy: IfNotPresent
podAnnotations: {}

View File

@@ -9,18 +9,9 @@ import (
"strings"
"time"
"github.com/Masterminds/semver"
clientset "github.com/weaveworks/flagger/pkg/client/clientset/versioned"
informers "github.com/weaveworks/flagger/pkg/client/informers/externalversions"
"github.com/weaveworks/flagger/pkg/controller"
"github.com/weaveworks/flagger/pkg/logger"
"github.com/weaveworks/flagger/pkg/metrics"
"github.com/weaveworks/flagger/pkg/notifier"
"github.com/weaveworks/flagger/pkg/router"
"github.com/weaveworks/flagger/pkg/server"
"github.com/weaveworks/flagger/pkg/signals"
"github.com/weaveworks/flagger/pkg/version"
semver "github.com/Masterminds/semver/v3"
"go.uber.org/zap"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/uuid"
"k8s.io/client-go/kubernetes"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
@@ -30,19 +21,35 @@ import (
"k8s.io/client-go/tools/leaderelection/resourcelock"
"k8s.io/client-go/transport"
_ "k8s.io/code-generator/cmd/client-gen/generators"
"github.com/weaveworks/flagger/pkg/canary"
clientset "github.com/weaveworks/flagger/pkg/client/clientset/versioned"
informers "github.com/weaveworks/flagger/pkg/client/informers/externalversions"
"github.com/weaveworks/flagger/pkg/controller"
"github.com/weaveworks/flagger/pkg/logger"
"github.com/weaveworks/flagger/pkg/metrics/observers"
"github.com/weaveworks/flagger/pkg/notifier"
"github.com/weaveworks/flagger/pkg/router"
"github.com/weaveworks/flagger/pkg/server"
"github.com/weaveworks/flagger/pkg/signals"
"github.com/weaveworks/flagger/pkg/version"
)
var (
masterURL string
kubeconfig string
kubeconfigQPS int
kubeconfigBurst int
metricsServer string
controlLoopInterval time.Duration
logLevel string
port string
msteamsURL string
includeLabelPrefix string
slackURL string
slackUser string
slackChannel string
eventWebhook string
threadiness int
zapReplaceGlobals bool
zapEncoding string
@@ -50,13 +57,18 @@ var (
meshProvider string
selectorLabels string
ingressAnnotationsPrefix string
ingressClass string
enableLeaderElection bool
leaderElectionNamespace string
enableConfigTracking bool
ver bool
kubeconfigServiceMesh string
)
func init() {
flag.StringVar(&kubeconfig, "kubeconfig", "", "Path to a kubeconfig. Only required if out-of-cluster.")
flag.IntVar(&kubeconfigQPS, "kubeconfig-qps", 100, "Set QPS for kubeconfig.")
flag.IntVar(&kubeconfigBurst, "kubeconfig-burst", 250, "Set Burst for kubeconfig.")
flag.StringVar(&masterURL, "master", "", "The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.")
flag.StringVar(&metricsServer, "metrics-server", "http://prometheus:9090", "Prometheus URL.")
flag.DurationVar(&controlLoopInterval, "control-loop-interval", 10*time.Second, "Kubernetes API sync interval.")
@@ -65,17 +77,22 @@ func init() {
flag.StringVar(&slackURL, "slack-url", "", "Slack hook URL.")
flag.StringVar(&slackUser, "slack-user", "flagger", "Slack user name.")
flag.StringVar(&slackChannel, "slack-channel", "", "Slack channel.")
flag.StringVar(&eventWebhook, "event-webhook", "", "Webhook for publishing flagger events")
flag.StringVar(&msteamsURL, "msteams-url", "", "MS Teams incoming webhook URL.")
flag.StringVar(&includeLabelPrefix, "include-label-prefix", "", "List of prefixes of labels that are copied when creating primary deployments or daemonsets. Use * to include all.")
flag.IntVar(&threadiness, "threadiness", 2, "Worker concurrency.")
flag.BoolVar(&zapReplaceGlobals, "zap-replace-globals", false, "Whether to change the logging level of the global zap logger.")
flag.StringVar(&zapEncoding, "zap-encoding", "json", "Zap logger encoding.")
flag.StringVar(&namespace, "namespace", "", "Namespace that flagger would watch canary object.")
flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, linkerd, appmesh, supergloo, nginx or smi.")
flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, linkerd, appmesh, contour, gloo, nginx, skipper or traefik.")
flag.StringVar(&selectorLabels, "selector-labels", "app,name,app.kubernetes.io/name", "List of pod labels that Flagger uses to create pod selectors.")
flag.StringVar(&ingressAnnotationsPrefix, "ingress-annotations-prefix", "nginx.ingress.kubernetes.io", "Annotations prefix for ingresses.")
flag.StringVar(&ingressAnnotationsPrefix, "ingress-annotations-prefix", "nginx.ingress.kubernetes.io", "Annotations prefix for NGINX ingresses.")
flag.StringVar(&ingressClass, "ingress-class", "", "Ingress class used for annotating HTTPProxy objects.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false, "Enable leader election.")
flag.StringVar(&leaderElectionNamespace, "leader-election-namespace", "kube-system", "Namespace used to create the leader election config map.")
flag.BoolVar(&enableConfigTracking, "enable-config-tracking", true, "Enable secrets and configmaps tracking.")
flag.BoolVar(&ver, "version", false, "Print version")
flag.StringVar(&kubeconfigServiceMesh, "kubeconfig-service-mesh", "", "Path to a kubeconfig for the service mesh control plane cluster.")
}
func main() {
@@ -98,68 +115,57 @@ func main() {
stopCh := signals.SetupSignalHandler()
logger.Infof("Starting flagger version %s revision %s mesh provider %s", version.VERSION, version.REVISION, meshProvider)
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
logger.Fatalf("Error building kubeconfig: %v", err)
}
cfg.QPS = float32(kubeconfigQPS)
cfg.Burst = kubeconfigBurst
kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
logger.Fatalf("Error building kubernetes clientset: %v", err)
}
meshClient, err := clientset.NewForConfig(cfg)
if err != nil {
logger.Fatalf("Error building mesh clientset: %v", err)
}
flaggerClient, err := clientset.NewForConfig(cfg)
if err != nil {
logger.Fatalf("Error building flagger clientset: %s", err.Error())
}
flaggerInformerFactory := informers.NewSharedInformerFactoryWithOptions(flaggerClient, time.Second*30, informers.WithNamespace(namespace))
canaryInformer := flaggerInformerFactory.Flagger().V1alpha3().Canaries()
logger.Infof("Starting flagger version %s revision %s mesh provider %s", version.VERSION, version.REVISION, meshProvider)
ver, err := kubeClient.Discovery().ServerVersion()
// use a remote cluster for routing if a service mesh kubeconfig is specified
if kubeconfigServiceMesh == "" {
kubeconfigServiceMesh = kubeconfig
}
cfgHost, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfigServiceMesh)
if err != nil {
logger.Fatalf("Error calling Kubernetes API: %v", err)
logger.Fatalf("Error building host kubeconfig: %v", err)
}
k8sVersionConstraint := "^1.11.0"
cfgHost.QPS = float32(kubeconfigQPS)
cfgHost.Burst = kubeconfigBurst
// We append -alpha.1 to the end of our version constraint so that prebuilds of later versions
// are considered valid for our purposes, as well as some managed solutions like EKS where they provide
// a version like `v1.12.6-eks-d69f1b`. It doesn't matter what the prelease value is here, just that it
// exists in our constraint.
semverConstraint, err := semver.NewConstraint(k8sVersionConstraint + "-alpha.1")
meshClient, err := clientset.NewForConfig(cfgHost)
if err != nil {
logger.Fatalf("Error parsing kubernetes version constraint: %v", err)
logger.Fatalf("Error building mesh clientset: %v", err)
}
k8sSemver, err := semver.NewVersion(ver.GitVersion)
if err != nil {
logger.Fatalf("Error parsing kubernetes version as a semantic version: %v", err)
}
if !semverConstraint.Check(k8sSemver) {
logger.Fatalf("Unsupported version of kubernetes detected. Expected %s, got %v", k8sVersionConstraint, ver)
}
verifyCRDs(flaggerClient, logger)
verifyKubernetesVersion(kubeClient, logger)
infos := startInformers(flaggerClient, logger, stopCh)
labels := strings.Split(selectorLabels, ",")
if len(labels) < 1 {
logger.Fatalf("At least one selector label is required")
}
logger.Infof("Connected to Kubernetes API %s", ver)
if namespace != "" {
logger.Infof("Watching namespace %s", namespace)
}
observerFactory, err := metrics.NewFactory(metricsServer, meshProvider, 5*time.Second)
observerFactory, err := observers.NewFactory(metricsServer)
if err != nil {
logger.Fatalf("Error building prometheus client: %s", err.Error())
}
@@ -177,34 +183,38 @@ func main() {
// start HTTP server
go server.ListenAndServe(port, 3*time.Second, logger, stopCh)
routerFactory := router.NewFactory(cfg, kubeClient, flaggerClient, ingressAnnotationsPrefix, logger, meshClient)
routerFactory := router.NewFactory(cfg, kubeClient, flaggerClient, ingressAnnotationsPrefix, ingressClass, logger, meshClient)
var configTracker canary.Tracker
if enableConfigTracking {
configTracker = &canary.ConfigTracker{
Logger: logger,
KubeClient: kubeClient,
FlaggerClient: flaggerClient,
}
} else {
configTracker = &canary.NopTracker{}
}
includeLabelPrefixArray := strings.Split(includeLabelPrefix, ",")
canaryFactory := canary.NewFactory(kubeClient, flaggerClient, configTracker, labels, includeLabelPrefixArray, logger)
c := controller.NewController(
kubeClient,
meshClient,
flaggerClient,
canaryInformer,
infos,
controlLoopInterval,
logger,
notifierClient,
canaryFactory,
routerFactory,
observerFactory,
meshProvider,
version.VERSION,
labels,
fromEnv("EVENT_WEBHOOK_URL", eventWebhook),
)
flaggerInformerFactory.Start(stopCh)
logger.Info("Waiting for informer caches to sync")
for _, synced := range []cache.InformerSynced{
canaryInformer.Informer().HasSynced,
} {
if ok := cache.WaitForCacheSync(stopCh, synced); !ok {
logger.Fatalf("Failed to wait for cache sync")
}
}
// leader election context
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -237,6 +247,37 @@ func main() {
}
}
func startInformers(flaggerClient clientset.Interface, logger *zap.SugaredLogger, stopCh <-chan struct{}) controller.Informers {
flaggerInformerFactory := informers.NewSharedInformerFactoryWithOptions(flaggerClient, time.Second*30, informers.WithNamespace(namespace))
logger.Info("Waiting for canary informer cache to sync")
canaryInformer := flaggerInformerFactory.Flagger().V1beta1().Canaries()
go canaryInformer.Informer().Run(stopCh)
if ok := cache.WaitForNamedCacheSync("flagger", stopCh, canaryInformer.Informer().HasSynced); !ok {
logger.Fatalf("failed to wait for cache to sync")
}
logger.Info("Waiting for metric template informer cache to sync")
metricInformer := flaggerInformerFactory.Flagger().V1beta1().MetricTemplates()
go metricInformer.Informer().Run(stopCh)
if ok := cache.WaitForNamedCacheSync("flagger", stopCh, metricInformer.Informer().HasSynced); !ok {
logger.Fatalf("failed to wait for cache to sync")
}
logger.Info("Waiting for alert provider informer cache to sync")
alertInformer := flaggerInformerFactory.Flagger().V1beta1().AlertProviders()
go alertInformer.Informer().Run(stopCh)
if ok := cache.WaitForNamedCacheSync("flagger", stopCh, alertInformer.Informer().HasSynced); !ok {
logger.Fatalf("failed to wait for cache to sync")
}
return controller.Informers{
CanaryInformer: canaryInformer,
MetricInformer: metricInformer,
AlertInformer: alertInformer,
}
}
func startLeaderElection(ctx context.Context, run func(), ns string, kubeClient kubernetes.Interface, logger *zap.SugaredLogger) {
configMapName := "flagger-leader-election"
id, err := os.Hostname()
@@ -286,21 +327,72 @@ func startLeaderElection(ctx context.Context, run func(), ns string, kubeClient
func initNotifier(logger *zap.SugaredLogger) (client notifier.Interface) {
provider := "slack"
notifierURL := slackURL
if msteamsURL != "" {
notifierURL := fromEnv("SLACK_URL", slackURL)
if msteamsURL != "" || os.Getenv("MSTEAMS_URL") != "" {
provider = "msteams"
notifierURL = msteamsURL
notifierURL = fromEnv("MSTEAMS_URL", msteamsURL)
}
notifierFactory := notifier.NewFactory(notifierURL, slackUser, slackChannel)
if notifierURL != "" {
var err error
client, err = notifierFactory.Notifier(provider)
if err != nil {
logger.Errorf("Notifier %v", err)
} else {
logger.Infof("Notifications enabled for %s", notifierURL[0:30])
}
var err error
client, err = notifierFactory.Notifier(provider)
if err != nil {
logger.Errorf("Notifier %v", err)
} else if len(notifierURL) > 30 {
logger.Infof("Notifications enabled for %s", notifierURL[0:30])
}
return
}
func fromEnv(envVar string, defaultVal string) string {
if v := os.Getenv(envVar); v != "" {
return v
}
return defaultVal
}
func verifyCRDs(flaggerClient clientset.Interface, logger *zap.SugaredLogger) {
_, err := flaggerClient.FlaggerV1beta1().Canaries(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
if err != nil {
logger.Fatalf("Canary CRD is not registered %v", err)
}
_, err = flaggerClient.FlaggerV1beta1().MetricTemplates(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
if err != nil {
logger.Fatalf("MetricTemplate CRD is not registered %v", err)
}
_, err = flaggerClient.FlaggerV1beta1().AlertProviders(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
if err != nil {
logger.Fatalf("AlertProvider CRD is not registered %v", err)
}
}
func verifyKubernetesVersion(kubeClient kubernetes.Interface, logger *zap.SugaredLogger) {
ver, err := kubeClient.Discovery().ServerVersion()
if err != nil {
logger.Fatalf("Error calling Kubernetes API: %v", err)
}
k8sVersionConstraint := "^1.11.0"
// We append -alpha.1 to the end of our version constraint so that prebuilds of later versions
// are considered valid for our purposes, as well as some managed solutions like EKS where they provide
// a version like `v1.12.6-eks-d69f1b`. It doesn't matter what the prelease value is here, just that it
// exists in our constraint.
semverConstraint, err := semver.NewConstraint(k8sVersionConstraint + "-alpha.1")
if err != nil {
logger.Fatalf("Error parsing kubernetes version constraint: %v", err)
}
k8sSemver, err := semver.NewVersion(ver.GitVersion)
if err != nil {
logger.Fatalf("Error parsing kubernetes version as a semantic version: %v", err)
}
if !semverConstraint.Check(k8sSemver) {
logger.Fatalf("Unsupported version of kubernetes detected. Expected %s, got %v", k8sVersionConstraint, ver)
}
logger.Infof("Connected to Kubernetes API %s", ver)
}

View File

@@ -2,15 +2,16 @@ package main
import (
"flag"
"log"
"time"
"github.com/weaveworks/flagger/pkg/loadtester"
"github.com/weaveworks/flagger/pkg/logger"
"github.com/weaveworks/flagger/pkg/signals"
"go.uber.org/zap"
"log"
"time"
)
var VERSION = "0.9.0"
var VERSION = "0.18.0"
var (
logLevel string
port string

Binary image files changed (not shown): one image replaced (158 KiB → 30 KiB) and three images added (40 KiB, 37 KiB, 47 KiB).

View File

@@ -4,19 +4,46 @@ description: Flagger is a progressive delivery Kubernetes operator
# Introduction
[Flagger](https://github.com/weaveworks/flagger) is a **Kubernetes** operator that automates the promotion of canary
deployments using **Istio**, **Linkerd**, **App Mesh**, **NGINX** or **Gloo** routing for traffic shifting and **Prometheus** metrics for canary analysis.
The canary analysis can be extended with webhooks for running system integration/acceptance tests, load tests, or any other custom validation.
[Flagger](https://github.com/weaveworks/flagger) is a **Kubernetes** operator that automates the promotion of
canary deployments using **Istio**, **Linkerd**, **App Mesh**, **NGINX**, **Skipper**, **Contour**, **Gloo** or **Traefik** routing for
traffic shifting and **Prometheus** metrics for canary analysis. The canary analysis can be extended with webhooks for
running system integration/acceptance tests, load tests, or any other custom validation.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance
indicators like HTTP requests success rate, requests average duration and pods health.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators
like HTTP requests success rate, requests average duration and pods health.
Based on analysis of the **KPIs** a canary is promoted or aborted, and the analysis result is published to **Slack** or **MS Teams**.
![Flagger overview diagram](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-overview.png)
Flagger can be configured with Kubernetes custom resources and is compatible with
any CI/CD solutions made for Kubernetes. Since Flagger is declarative and reacts to Kubernetes events,
it can be used in **GitOps** pipelines together with Weave Flux or JenkinsX.
Flagger can be configured with Kubernetes custom resources and is compatible with any CI/CD solutions made for Kubernetes.
Since Flagger is declarative and reacts to Kubernetes events,
it can be used in **GitOps** pipelines together with Flux CD or JenkinsX.
This project is sponsored by [Weaveworks](https://www.weave.works/)
## Getting started
To get started with Flagger, choose one of the supported routing providers
and [install](install/flagger-install-on-kubernetes.md) Flagger with Helm or Kustomize.
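For example, a minimal install sketch for the Istio provider (namespace and Prometheus address are illustrative; see the install guide for the full set of options):

```console
helm repo add flagger https://flagger.app

helm upgrade -i flagger flagger/flagger \
  --namespace=istio-system \
  --set meshProvider=istio \
  --set metricsServer=http://prometheus:9090
```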
After installing Flagger, you can follow one of the tutorials:
**Service mesh tutorials**
* [Istio](tutorials/istio-progressive-delivery.md)
* [Linkerd](tutorials/linkerd-progressive-delivery.md)
* [AWS App Mesh](tutorials/appmesh-progressive-delivery.md)
**Ingress controller tutorials**
* [Contour](tutorials/contour-progressive-delivery.md)
* [Gloo](tutorials/gloo-progressive-delivery.md)
* [NGINX Ingress](tutorials/nginx-progressive-delivery.md)
* [Skipper Ingress](tutorials/skipper-progressive-delivery.md)
* [Traefik](tutorials/traefik-progressive-delivery.md)
**Hands-on GitOps workshops**
* [Istio](https://github.com/stefanprodan/gitops-istio)
* [Linkerd](https://helm.workshop.flagger.dev)
* [AWS App Mesh](https://eks.handson.flagger.dev)

View File

@@ -1,7 +1,6 @@
# Table of contents
* [Introduction](README.md)
* [How it works](how-it-works.md)
* [FAQ](faq.md)
## Install
@@ -9,22 +8,36 @@
* [Flagger Install on Kubernetes](install/flagger-install-on-kubernetes.md)
* [Flagger Install on GKE Istio](install/flagger-install-on-google-cloud.md)
* [Flagger Install on EKS App Mesh](install/flagger-install-on-eks-appmesh.md)
* [Flagger Install with SuperGloo](install/flagger-install-with-supergloo.md)
## Usage
* [Istio Canary Deployments](usage/progressive-delivery.md)
* [Istio A/B Testing](usage/ab-testing.md)
* [Linkerd Canary Deployments](usage/linkerd-progressive-delivery.md)
* [App Mesh Canary Deployments](usage/appmesh-progressive-delivery.md)
* [NGINX Canary Deployments](usage/nginx-progressive-delivery.md)
* [Gloo Canary Deployments](usage/gloo-progressive-delivery.md)
* [Blue/Green Deployments](usage/blue-green.md)
* [Monitoring](usage/monitoring.md)
* [How it works](usage/how-it-works.md)
* [Deployment Strategies](usage/deployment-strategies.md)
* [Metrics Analysis](usage/metrics.md)
* [Webhooks](usage/webhooks.md)
* [Alerting](usage/alerting.md)
* [Monitoring](usage/monitoring.md)
## Tutorials
* [SMI Istio Canary Deployments](tutorials/flagger-smi-istio.md)
* [Istio Canary Deployments](tutorials/istio-progressive-delivery.md)
* [Istio A/B Testing](tutorials/istio-ab-testing.md)
* [Linkerd Canary Deployments](tutorials/linkerd-progressive-delivery.md)
* [App Mesh Canary Deployments](tutorials/appmesh-progressive-delivery.md)
* [Contour Canary Deployments](tutorials/contour-progressive-delivery.md)
* [Gloo Canary Deployments](tutorials/gloo-progressive-delivery.md)
* [NGINX Canary Deployments](tutorials/nginx-progressive-delivery.md)
* [Skipper Canary Deployments](tutorials/skipper-progressive-delivery.md)
* [Traefik Canary Deployments](tutorials/traefik-progressive-delivery.md)
* [Blue/Green Deployments](tutorials/kubernetes-blue-green.md)
* [Crossover Canary Deployments](tutorials/crossover-progressive-delivery.md)
* [Canary analysis with Prometheus Operator](tutorials/prometheus-operator.md)
* [Canaries with Helm charts and GitOps](tutorials/canary-helm-gitops.md)
* [Zero downtime deployments](tutorials/zero-downtime-deployments.md)
* [Rollout Weights](tutorials/rollout-weights.md)
## Dev
* [Development Guide](dev/dev-guide.md)
* [Release Guide](dev/release-guide.md)
* [Upgrade Guide](dev/upgrade-guide.md)

View File

@@ -0,0 +1,211 @@
# Development Guide
This document describes how to build, test and run Flagger from source.
### Setup dev environment
Flagger is written in Go and uses Go modules for dependency management.
On your dev machine install the following tools:
* go >= 1.14
* git >= 2.20
* bash >= 5.0
* make >= 3.81
* kubectl >= 1.16
* kustomize >= 3.5
* helm >= 3.0
* docker >= 19.03
You'll also need a Kubernetes cluster for testing Flagger.
You can use Minikube, Kind, Docker desktop or any remote cluster
(AKS/EKS/GKE/etc.) running Kubernetes version 1.14 or newer.
To start contributing to Flagger, fork the [repository](https://github.com/weaveworks/flagger) on GitHub.
Create a dir inside your `GOPATH`:
```bash
mkdir -p $GOPATH/src/github.com/weaveworks
```
Clone your fork:
```bash
cd $GOPATH/src/github.com/weaveworks
git clone https://github.com/YOUR_USERNAME/flagger
cd flagger
```
Set Flagger repository as upstream:
```bash
git remote add upstream https://github.com/weaveworks/flagger.git
```
Sync your fork regularly to keep it up-to-date with upstream:
```bash
git fetch upstream
git checkout master
git merge upstream/master
```
### Build
Download Go modules:
```bash
go mod download
```
Build Flagger binary and container image:
```bash
make build
```
Build load tester binary and container image:
```bash
make loadtester-build
```
### Code changes
Before submitting a PR, make sure your changes are covered by unit tests.
If you made changes to `go.mod` run:
```bash
go mod tidy
```
If you made changes to `pkg/apis` regenerate Kubernetes client sets with:
```bash
make codegen
```
Run code formatters:
```bash
make fmt
```
Run unit tests:
```bash
make test
```
### API changes
If you made changes to `pkg/apis` regenerate the Kubernetes client sets with:
```bash
make codegen
```
Update the validation spec in `artifacts/flagger/crd.yaml` and run:
```bash
make crd
```
Note that any change to the CRDs must be accompanied by an update to the Open API schema.
### Manual testing
Install a service mesh and/or an ingress controller on your cluster and deploy Flagger
using one of the install options [listed here](https://docs.flagger.app/install/flagger-install-on-kubernetes).
If you made changes to the CRDs, apply your local copy with:
```bash
kubectl apply -f artifacts/flagger/crd.yaml
```
Shutdown the Flagger instance installed on your cluster (replace the namespace with your mesh/ingress one):
```bash
kubectl -n istio-system scale deployment/flagger --replicas=0
```
Port forward to your Prometheus instance:
```bash
kubectl -n istio-system port-forward svc/prometheus 9090:9090
```
Run Flagger locally against your remote cluster by specifying a kubeconfig path:
```bash
go run cmd/flagger/ -kubeconfig=$HOME/.kube/config \
-log-level=info \
-mesh-provider=istio \
-metrics-server=http://localhost:9090
```
Another option to manually test your changes is to build and push the image to your container registry:
```bash
make build
docker tag weaveworks/flagger:latest <YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG>
docker push <YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG>
```
Deploy your image on the cluster and scale up Flagger:
```bash
kubectl -n istio-system set image deployment/flagger flagger=<YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG>
kubectl -n istio-system scale deployment/flagger --replicas=1
```
Now you can use one of the [tutorials](https://docs.flagger.app/) to manually test your changes.
### Integration testing
Flagger end-to-end tests can be run locally with [Kubernetes Kind](https://github.com/kubernetes-sigs/kind).
Create a Kind cluster:
```bash
kind create cluster
```
Install a service mesh and/or an ingress controller in Kind.
Linkerd example:
```bash
linkerd install | kubectl apply -f -
linkerd check
```
Build Flagger container image and load it on the cluster:
```bash
make build
docker tag weaveworks/flagger:latest test/flagger:latest
kind load docker-image test/flagger:latest
```
Install Flagger on the cluster and set the test image:
```bash
kubectl apply -k ./kustomize/linkerd
kubectl -n linkerd set image deployment/flagger flagger=test/flagger:latest
kubectl -n linkerd rollout status deployment/flagger
```
Run the Linkerd e2e tests:
```bash
./test/e2e-linkerd-tests.sh
```
For each service mesh and ingress controller there is a dedicated e2e test suite,
choose one that matches your changes from this [list](https://github.com/weaveworks/flagger/tree/master/test).
When you open a pull request on Flagger repo, the unit and integration tests will be run in CI.

View File

@@ -0,0 +1,34 @@
# Release Guide
This document describes how to release Flagger.
### Release
To release a new Flagger version (e.g. `2.0.0`) follow these steps:
* create a branch `git checkout -b prep-2.0.0`
* set the version in code and manifests `TAG=2.0.0 make version-set`
* commit changes and merge PR
* checkout master `git checkout master && git pull`
* tag master `make release`
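The same steps as a shell sketch (the version number is illustrative; the PR merge itself happens on GitHub):

```bash
git checkout -b prep-2.0.0
TAG=2.0.0 make version-set
git commit -a -m "Release 2.0.0"   # open a PR from this branch and merge it
git checkout master && git pull
make release                        # tags master and pushes the tag
```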
### CI
After the tag has been pushed to GitHub, the CI release pipeline does the following:
* creates a GitHub release
* pushes the Flagger binary and change log to GitHub release
* pushes the Flagger container image to Docker Hub
* pushes the Helm chart to github-pages branch
* GitHub pages publishes the new chart version on the Helm repository
### Docs
The documentation [website](https://docs.flagger.app) is built from the `docs` branch.
After a Flagger release, publish the docs with:
* `git checkout master && git pull`
* `git checkout docs`
* `git rebase master`
* `git push origin docs`

View File

@@ -0,0 +1,90 @@
# Upgrade Guide
This document describes how to upgrade Flagger.
### Upgrade canaries v1alpha3 to v1beta1
Canary CRD changes in `canaries.flagger.app/v1beta1`:
* the `spec.canaryAnalysis` field has been deprecated and replaced with `spec.analysis`
* the `spec.analysis.interval` and `spec.analysis.threshold` fields are required
* the `status.lastAppliedSpec` and `status.lastPromotedSpec` hashing algorithm changed to `hash/fnv`
* the `spec.analysis.alerts` array can reference `alertproviders.flagger.app/v1beta1` resources
* the `spec.analysis.metrics[].templateRef` can reference a `metrictemplate.flagger.app/v1beta1` resource
* the `metric.threshold` field has been deprecated and replaced with `metric.thresholdRange`
* the `metric.query` field has been deprecated and replaced with `metric.templateRef`
* the `spec.ingressRef.apiVersion` accepts `networking.k8s.io/v1beta1`
* the `spec.targetRef` can reference `DaemonSet` kind
* the `spec.service.meshName` field has been deprecated and no longer used for `provider: appmesh:v1beta2`
Upgrade procedure (see the manifest sketch after this list):
* install the `v1beta1` CRDs
* update Flagger deployment
* replace `apiVersion: flagger.app/v1alpha3` with `apiVersion: flagger.app/v1beta1` in all canary manifests
* replace `spec.canaryAnalysis` with `spec.analysis` in all canary manifests
* update canary manifests in cluster
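As a sketch, a migrated canary manifest might start like this (the target name `podinfo` is illustrative; only the fields affected by the upgrade are shown):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # spec.canaryAnalysis has been replaced with spec.analysis
  analysis:
    # interval and threshold are now required
    interval: 1m
    threshold: 5
```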
**Note** that after upgrading Flagger, all canaries will be triggered as the hash value used for tracking changes
is computed differently. You can set `spec.skipAnalysis: true` in all canary manifests before upgrading Flagger,
do the upgrade, wait for Flagger to finish the no-op promotions and finally set `skipAnalysis` to `false`.
Update builtin metrics:
* replace `threshold` with `thresholdRange.min` for request-success-rate
* replace `threshold` with `thresholdRange.max` for request-duration
```yaml
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 1m
```
### Istio telemetry v2
Istio 1.5 comes with a breaking change for Flagger users. In Istio telemetry v2 the metric
`istio_request_duration_seconds_bucket` has been removed and replaced with `istio_request_duration_milliseconds_bucket`
and this breaks the `request-duration` metric check.
If you are using **Istio 1.4**, you can create a metric template using the old duration metric like this:
```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: latency
namespace: istio-system
spec:
provider:
type: prometheus
address: http://prometheus.istio-system:9090
query: |
histogram_quantile(
0.99,
sum(
rate(
istio_request_duration_seconds_bucket{
reporter="destination",
destination_workload_namespace="{{ namespace }}",
destination_workload=~"{{ target }}"
}[{{ interval }}]
)
) by (le)
)
```
In the canary manifests, replace the `request-duration` metric with `latency`:
```yaml
metrics:
- name: latency
templateRef:
name: latency
namespace: istio-system
thresholdRange:
max: 0.500
interval: 1m
```

View File

@@ -4,140 +4,46 @@
**Which deployment strategies are supported by Flagger?**
Flagger can run automated application analysis, promotion and rollback for the following deployment strategies:
* Canary (progressive traffic shifting)
* Istio, Linkerd, App Mesh, NGINX, Gloo
* Canary (traffic mirroring)
* Istio
* A/B Testing (HTTP headers and cookies traffic routing)
* Istio, NGINX
* Blue/Green (traffic switch)
* Kubernetes CNI, Istio, Linkerd, App Mesh, NGINX, Gloo
For Canary deployments and A/B testing you'll need a Layer 7 traffic management solution like a service mesh or an ingress controller.
For Blue/Green deployments no service mesh or ingress controller is required.
Flagger implements the following deployment strategies:
* [Canary Release](usage/deployment-strategies.md#canary-release)
* [A/B Testing](usage/deployment-strategies.md#a-b-testing)
* [Blue/Green](usage/deployment-strategies.md#blue-green-deployments)
* [Blue/Green Mirroring](usage/deployment-strategies.md#blue-green-with-traffic-mirroring)
**When should I use A/B testing instead of progressive traffic shifting?**
For frontend applications that require session affinity you should use HTTP headers or cookies match conditions
to ensure a set of users will stay on the same version for the whole duration of the canary analysis.
A/B testing is supported by Istio and NGINX only.
Istio example:
```yaml
canaryAnalysis:
  # schedule interval (default 60s)
  interval: 1m
  # total number of iterations
  iterations: 10
  # max number of failed iterations before rollback
  threshold: 2
  # canary match condition
  match:
  - headers:
      x-canary:
        regex: ".*insider.*"
  - headers:
      cookie:
        regex: "^(.*?;)?(canary=always)(;.*)?$"
```
NGINX example:
```yaml
canaryAnalysis:
  interval: 1m
  threshold: 10
  iterations: 2
  match:
  - headers:
      x-canary:
        exact: "insider"
  - headers:
      cookie:
        exact: "canary"
```
Note that the NGINX ingress controller supports only exact matching for a single header and the cookie value is set to `always`.
The above configurations will route users with the x-canary header or canary cookie to the canary instance during analysis:
```bash
curl -H 'X-Canary: insider' http://app.example.com
curl -b 'canary=always' http://app.example.com
```
**Can I use Flagger to manage applications that live outside of a service mesh?**
For applications that are not deployed on a service mesh, Flagger can orchestrate Blue/Green style deployments
with Kubernetes L4 networking.
Blue/Green example:
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
spec:
  provider: kubernetes
  canaryAnalysis:
    interval: 30s
    threshold: 2
    iterations: 10
    metrics:
    - name: request-success-rate
      threshold: 99
      interval: 1m
    - name: request-duration
      threshold: 500
      interval: 30s
    webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```
The above configuration will run an analysis for five minutes.
Flagger starts the load test for the canary service (green version) and checks the Prometheus metrics every 30 seconds.
If the analysis result is positive, Flagger will promote the canary (green version) to primary (blue version).
**When can I use traffic mirroring?**
Traffic Mirroring is a pre-stage in a Canary (progressive traffic shifting) or
Blue/Green deployment strategy. Traffic mirroring will copy each incoming
request, sending one request to the primary and one to the canary service.
The response from the primary is sent back to the user. The response from the canary
is discarded. Metrics are collected on both requests so that the deployment will
only proceed if the canary metrics are healthy.
Mirroring must only be used for requests that are **idempotent** or capable of
being processed twice (once by the primary and once by the canary). Reads are
idempotent. Before using mirroring on requests that may be writes, you should
consider what will happen if a write is duplicated and handled by the primary
and canary.
Mirroring is supported by Istio only. In Istio, mirrored requests have `-shadow` appended to the `Host` (HTTP) or
`Authority` (HTTP/2) header; for example, requests to `podinfo.test` that are
mirrored will be reported in telemetry with the destination host `podinfo.test-shadow`.
To use mirroring, set `spec.canaryAnalysis.mirror` to `true`.
**How to retry a failed release?**
A canary analysis is triggered by changes in any of the following objects:
* Deployment/DaemonSet PodSpec (metadata, container image, command, ports, env, resources, etc)
* ConfigMaps mounted as volumes or mapped to environment variables
* Secrets mounted as volumes or mapped to environment variables
To retry a release you can add or change an annotation on the pod template:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        timestamp: "2020-03-10T14:24:48+0000"
```
### Kubernetes services
@@ -147,7 +53,7 @@ spec:
Assuming the app name is podinfo you can define a canary like:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
@@ -158,6 +64,8 @@ spec:
kind: Deployment
name: podinfo
service:
# service name (optional)
name: podinfo
# ClusterIP port number (required)
port: 9898
# container port name or number
@@ -166,19 +74,21 @@ spec:
portName: http
```
If the `service.name` is not specified, then `targetRef.name` is used for the apex domain and canary/primary services name prefix.
You should treat the service name as an immutable field; changing it could result in routing conflicts.
Based on the canary spec service, Flagger generates the following Kubernetes ClusterIP service:
* `<service.name>.<namespace>.svc.cluster.local`
  selector `app=<name>-primary`
* `<service.name>-primary.<namespace>.svc.cluster.local`
  selector `app=<name>-primary`
* `<service.name>-canary.<namespace>.svc.cluster.local`
  selector `app=<name>`
This ensures that traffic coming from a namespace outside the mesh to `podinfo.test:9898`
will be routed to the latest stable release of your app.
```yaml
apiVersion: v1
kind: Service
@@ -232,7 +142,7 @@ canary analysis and can be used for conformance testing or load testing.
If port discovery is enabled, Flagger scans the deployment spec and extracts the container ports,
excluding the port specified in the canary service and the Envoy sidecar ports.
These ports will be used when generating the ClusterIP services.
For a deployment that exposes two ports:
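As an illustration, such a deployment container spec could look like this (a sketch; the port names and numbers are assumptions):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  template:
    spec:
      containers:
      - name: podinfo
        ports:
        # the port referenced by the canary service
        - name: http
          containerPort: 9898
        # an extra port that port discovery will add to the ClusterIP services
        - name: http-metrics
          containerPort: 9090
```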
@@ -256,7 +166,7 @@ spec:
You can enable port discovery so that Prometheus will be able to reach port `9090` over mTLS:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
spec:
service:
@@ -331,6 +241,149 @@ spec:
topologyKey: kubernetes.io/hostname
```
### Metrics
**How does Flagger measure the request success rate and duration?**
Flagger measures the request success rate and duration using Prometheus queries.
**HTTP requests success rate percentage**
Spec:
```yaml
analysis:
  metrics:
  - name: request-success-rate
    # minimum req success rate (non 5xx responses)
    # percentage (0-100)
    thresholdRange:
      min: 99
    interval: 1m
```
Istio query:
```javascript
sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace=~"$namespace",
destination_workload=~"$workload",
response_code!~"5.*"
}[$interval]
)
)
/
sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace=~"$namespace",
destination_workload=~"$workload"
}[$interval]
)
)
```
Envoy query (App Mesh):
```javascript
sum(
rate(
envoy_cluster_upstream_rq{
kubernetes_namespace="$namespace",
kubernetes_pod_name=~"$workload",
envoy_response_code!~"5.*"
}[$interval]
)
)
/
sum(
rate(
envoy_cluster_upstream_rq{
kubernetes_namespace="$namespace",
kubernetes_pod_name=~"$workload"
}[$interval]
)
)
```
Envoy query (Contour or Gloo):
```javascript
sum(
rate(
envoy_cluster_upstream_rq{
envoy_cluster_name=~"$namespace-$workload",
envoy_response_code!~"5.*"
}[$interval]
)
)
/
sum(
rate(
envoy_cluster_upstream_rq{
envoy_cluster_name=~"$namespace-$workload",
}[$interval]
)
)
```
**HTTP requests milliseconds duration P99**
Spec:
```yaml
analysis:
  metrics:
  - name: request-duration
    # maximum req duration P99
    # milliseconds
    thresholdRange:
      max: 500
    interval: 1m
```
Istio query:
```javascript
histogram_quantile(0.99,
sum(
irate(
istio_request_duration_seconds_bucket{
reporter="destination",
destination_workload=~"$workload",
destination_workload_namespace=~"$namespace"
}[$interval]
)
) by (le)
)
```
Envoy query (App Mesh, Contour or Gloo):
```javascript
histogram_quantile(0.99,
sum(
irate(
envoy_cluster_upstream_rq_time_bucket{
kubernetes_pod_name=~"$workload",
kubernetes_namespace=~"$namespace"
}[$interval]
)
) by (le)
)
```
> **Note** that the metric interval should be lower than or equal to the control loop interval.
**Can I use custom metrics?**
The analysis can be extended with metrics provided by Prometheus, Datadog and AWS CloudWatch. For more details
on how custom metrics can be used please read the [metrics docs](usage/metrics.md).
### Istio routing
**How does Flagger interact with Istio?**
@@ -343,7 +396,7 @@ The following spec exposes the `frontend` workload inside the mesh on `frontend.
and outside the mesh on `frontend.example.com`. You'll have to specify an Istio ingress gateway for external hosts.
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: frontend
@@ -373,13 +426,16 @@ spec:
# HTTP rewrite (optional)
rewrite:
uri: /
# Istio retry policy (optional)
retries:
attempts: 3
perTryTimeout: 1s
retryOn: "gateway-error,connect-failure,refused-stream"
# Add headers (optional)
headers:
request:
add:
x-some-header: "value"
# cross-origin resource sharing policy (optional)
corsPolicy:
allowOrigin:
@@ -401,7 +457,7 @@ metadata:
name: frontend
namespace: test
ownerReferences:
- apiVersion: flagger.app/v1beta1
blockOwnerDeletion: true
controller: true
kind: Canary
@@ -415,11 +471,7 @@ spec:
- frontend.example.com
- frontend
http:
- corsPolicy:
allowHeaders:
- x-some-header
allowMethods:
@@ -427,6 +479,10 @@ spec:
allowOrigin:
- example.com
maxAge: 24h
headers:
request:
add:
x-some-header: "value"
match:
- uri:
prefix: /
@@ -439,6 +495,10 @@ spec:
- destination:
host: podinfo-canary
weight: 0
retries:
attempts: 3
perTryTimeout: 1s
retryOn: "gateway-error,connect-failure,refused-stream"
```
For each destination in the virtual service a rule is generated:
@@ -474,7 +534,7 @@ To expose a workload inside the mesh on `http://backend.test.svc.cluster.local:9
the service spec can contain only the container port and the traffic policy:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: backend
@@ -495,7 +555,7 @@ kind: Service
metadata:
name: backend-primary
ownerReferences:
- apiVersion: flagger.app/v1beta1
blockOwnerDeletion: true
controller: true
kind: Canary
@@ -515,6 +575,85 @@ spec:
Flagger works for user facing apps exposed outside the cluster via an ingress gateway
and for backend HTTP APIs that are accessible only from inside the mesh.
If delegation is enabled, Flagger will generate an Istio VirtualService without hosts and gateways,
making the service compatible with Istio delegation.
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: backend
  namespace: test
spec:
  service:
    delegation: true
    port: 9898
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  analysis:
    interval: 15s
    threshold: 15
    maxWeight: 30
    stepWeight: 10
```
Based on the above spec, Flagger will create the following virtual service:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: test
  ownerReferences:
  - apiVersion: flagger.app/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Canary
    name: backend
    uid: 58562662-5e10-4512-b269-2b789c1b30fe
spec:
  http:
  - route:
    - destination:
        host: podinfo-primary
      weight: 100
    - destination:
        host: podinfo-canary
      weight: 0
```
The following virtual service forwards the traffic for `/podinfo` to the delegate VirtualService defined above.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: test
spec:
  gateways:
  - public-gateway.istio-system.svc.cluster.local
  - mesh
  hosts:
  - frontend.example.com
  - frontend
  http:
  - match:
    - uri:
        prefix: /podinfo
    rewrite:
      uri: /
    delegate:
      name: backend
      namespace: test
```
Note that the Istio pilot environment variable `PILOT_ENABLE_VIRTUAL_SERVICE_DELEGATE` must also be set.
For more on Istio delegation, see the [Virtual Service](https://istio.io/latest/docs/reference/config/networking/virtual-service/#Delegate) and [pilot environment variables](https://istio.io/latest/docs/reference/commands/pilot-discovery/#envvars) documentation.
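For example, with the Istio operator API the flag can be set on the control plane like this (a sketch, assuming Istio is installed via the IstioOperator resource):
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
        # enable VirtualService delegate support in istiod
        - name: PILOT_ENABLE_VIRTUAL_SERVICE_DELEGATE
          value: "true"
```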
### Istio Ingress Gateway
**How can I expose multiple canaries on the same external domain?**
@@ -523,7 +662,7 @@ Assuming you have two apps, one that servers the main website and one that serve
For each app you can define a canary object as:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: website
@@ -540,7 +679,7 @@ spec:
rewrite:
uri: /
---
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: webapi
@@ -571,7 +710,7 @@ Note that host merging only works if the canaries are bounded to a ingress gatew
When deploying Istio with global mTLS enabled, you have to set the TLS mode to `ISTIO_MUTUAL`:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
spec:
service:
@@ -583,7 +722,7 @@ spec:
If you run Istio in permissive mode you can disable TLS:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
spec:
service:


@@ -1,858 +0,0 @@
# How it works
[Flagger](https://github.com/weaveworks/flagger) takes a Kubernetes deployment and optionally
a horizontal pod autoscaler \(HPA\) and creates a series of objects
\(Kubernetes deployments, ClusterIP services, virtual service, traffic split or ingress\) to drive the canary analysis and promotion.
### Canary Custom Resource
For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# service mesh provider (optional)
# can be: kubernetes, istio, linkerd, appmesh, nginx, gloo, supergloo
provider: linkerd
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# ClusterIP port number
port: 9898
# ClusterIP port name can be http or grpc (default http)
portName: http
# container port number or name (optional)
targetPort: 9898
# add all the other container ports
# to the ClusterIP services (default false)
portDiscovery: false
# promote the canary without analysing it (default false)
skipAnalysis: false
# define the canary analysis timing and KPIs
canaryAnalysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# testing (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
```
**Note** that the target deployment must have a single label selector in the format `app: <DEPLOYMENT-NAME>`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
spec:
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
```
Besides `app` Flagger supports `name` and `app.kubernetes.io/name` selectors. If you use a different
convention you can specify your label with the `-selector-labels` flag.
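For example, if your workloads use a custom label such as `team`, the selector list could be extended by patching the Flagger container args. This is a sketch; the `team` label and the exact flag value are assumptions:
```yaml
# a sketch of a strategic merge patch for the Flagger deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger
spec:
  template:
    spec:
      containers:
      - name: flagger
        args:
        - -mesh-provider=istio
        - -metrics-server=http://prometheus.istio-system:9090
        - -selector-labels=app,name,app.kubernetes.io/name,team
```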
The target deployment should expose a TCP port that will be used by Flagger to create the ClusterIP Services.
The container port from the target deployment should match the `service.port` or `service.targetPort`.
### Canary status
Get the current status of canary deployments cluster wide:
```bash
kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-06-30T14:05:07Z
prod frontend Succeeded 0 2019-06-30T16:15:07Z
prod backend Failed 0 2019-06-30T17:05:07Z
```
The status condition reflects the last known state of the canary analysis:
```bash
kubectl -n test get canary/podinfo -oyaml | awk '/status/,0'
```
A successful rollout status:
```yaml
status:
canaryWeight: 0
failedChecks: 0
iterations: 0
lastAppliedSpec: "14788816656920327485"
lastPromotedSpec: "14788816656920327485"
conditions:
- lastTransitionTime: "2019-07-10T08:23:18Z"
lastUpdateTime: "2019-07-10T08:23:18Z"
message: Canary analysis completed successfully, promotion finished.
reason: Succeeded
status: "True"
type: Promoted
```
The `Promoted` status condition can have one of the following reasons:
Initialized, Waiting, Progressing, Promoting, Finalising, Succeeded or Failed.
A failed canary will have the promoted status set to `false`,
the reason set to `failed` and the last applied spec will be different from the last promoted one.
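For instance, a failed rollout status might look like this (a sketch; the values, timestamps and message are illustrative):
```yaml
status:
  canaryWeight: 0
  failedChecks: 10
  conditions:
  - lastTransitionTime: "2019-07-10T09:10:00Z"
    lastUpdateTime: "2019-07-10T09:10:00Z"
    message: Canary analysis failed, rollback finished.
    reason: Failed
    status: "False"
    type: Promoted
```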
Wait for a successful rollout:
```bash
kubectl wait canary/podinfo --for=condition=promoted
```
CI example:
```bash
# update the container image
kubectl set image deployment/podinfo podinfod=stefanprodan/podinfo:3.0.1
# wait for Flagger to detect the change
ok=false
until ${ok}; do
kubectl get canary/podinfo | grep 'Progressing' && ok=true || ok=false
sleep 5
done
# wait for the canary analysis to finish
kubectl wait canary/podinfo --for=condition=promoted --timeout=5m
# check if the deployment was successful
kubectl get canary/podinfo | grep Succeeded
```
### Canary Stages
![Flagger Canary Stages](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-steps.png)
A canary deployment is triggered by changes in any of the following objects:
* Deployment PodSpec (container image, command, ports, env, resources, etc)
* ConfigMaps mounted as volumes or mapped to environment variables
* Secrets mounted as volumes or mapped to environment variables
Gated canary promotion stages:
* scan for canary deployments
* check primary and canary deployment status
* halt advancement if a rolling update is underway
* halt advancement if pods are unhealthy
* call confirm-rollout webhooks and check results
* halt advancement if any hook returns a non HTTP 2xx result
* call pre-rollout webhooks and check results
* halt advancement if any hook returns a non HTTP 2xx result
* increment the failed checks counter
* increase canary traffic weight percentage from 0% to 5% (step weight)
* call rollout webhooks and check results
* check canary HTTP request success rate and latency
* halt advancement if any metric is under the specified threshold
* increment the failed checks counter
* check if the number of failed checks reached the threshold
* route all traffic to primary
* scale to zero the canary deployment and mark it as failed
* call post-rollout webhooks
* post the analysis result to Slack
* wait for the canary deployment to be updated and start over
* increase canary traffic weight by 5% (step weight) till it reaches 50% (max weight)
* halt advancement if any webhook call fails
* halt advancement while canary request success rate is under the threshold
* halt advancement while canary request duration P99 is over the threshold
* halt advancement while any custom metric check fails
* halt advancement if the primary or canary deployment becomes unhealthy
* halt advancement while canary deployment is being scaled up/down by HPA
* call confirm-promotion webhooks and check results
* halt advancement if any hook returns a non HTTP 2xx result
* promote canary to primary
* copy ConfigMaps and Secrets from canary to primary
* copy canary deployment spec template over primary
* wait for primary rolling update to finish
* halt advancement if pods are unhealthy
* route all traffic to primary
* scale to zero the canary deployment
* mark rollout as finished
* call post-rollout webhooks
* post the analysis result to Slack or MS Teams
* wait for the canary deployment to be updated and start over
### Canary Analysis
The canary analysis runs periodically until it reaches the maximum traffic weight or the failed checks threshold.
Spec:
```yaml
canaryAnalysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 2
# deploy straight to production without
# the metrics and webhook checks
skipAnalysis: false
```
The above analysis, if it succeeds, will run for 25 minutes while validating the HTTP metrics and webhooks every minute.
You can determine the minimum time that it takes to validate and promote a canary deployment using this formula:
```
interval * (maxWeight / stepWeight)
```
And the time it takes for a canary to be rolled back when the metrics or webhook checks are failing:
```
interval * threshold
```
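For the spec above (interval 1m, stepWeight 2, maxWeight 50, threshold 10) this gives:
```
promotion: interval * (maxWeight / stepWeight) = 1m * (50 / 2) = 25 minutes
rollback:  interval * threshold = 1m * 10 = 10 minutes
```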
In emergency cases, you may want to skip the analysis phase and ship changes directly to production.
At any time you can set the `spec.skipAnalysis: true`.
When skip analysis is enabled, Flagger checks if the canary deployment is healthy and
promotes it without analysing it. If an analysis is underway, Flagger cancels it and runs the promotion.
### A/B Testing
Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions.
In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users.
This is particularly useful for frontend applications that require session affinity.
You can enable A/B testing by specifying the HTTP match conditions and the number of iterations:
```yaml
canaryAnalysis:
# schedule interval (default 60s)
interval: 1m
# total number of iterations
iterations: 10
# max number of failed iterations before rollback
threshold: 2
# canary match condition
match:
- headers:
user-agent:
regex: "^(?!.*Chrome).*Safari.*"
- headers:
cookie:
regex: "^(.*?;)?(user=test)(;.*)?$"
```
If Flagger finds a HTTP match condition, it will ignore the `maxWeight` and `stepWeight` settings.
The above configuration will run an analysis for ten minutes targeting the Safari users and those that have a test cookie.
You can determine the minimum time that it takes to validate and promote a canary deployment using this formula:
```
interval * iterations
```
And the time it takes for a canary to be rolled back when the metrics or webhook checks are failing:
```
interval * threshold
```
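With the A/B testing spec above (interval 1m, iterations 10, threshold 2) this gives:
```
promotion: interval * iterations = 1m * 10 = 10 minutes
rollback:  interval * threshold = 1m * 2 = 2 minutes
```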
Make sure that the analysis threshold is lower than the number of iterations.
### Blue/Green deployments
For applications that are not deployed on a service mesh, Flagger can orchestrate blue/green style deployments
with Kubernetes L4 networking. When using Istio you have the option to mirror traffic between blue and green.
You can use the blue/green deployment strategy by replacing `stepWeight/maxWeight` with `iterations` in the `canaryAnalysis` spec:
```yaml
canaryAnalysis:
# schedule interval (default 60s)
interval: 1m
# total number of iterations
iterations: 10
# max number of failed iterations before rollback
threshold: 2
# Traffic shadowing (compatible with Istio only)
mirror: true
```
With the above configuration Flagger will run conformance and load tests on the canary pods for ten minutes.
If the metrics analysis succeeds, live traffic will be switched from the old version to the new one when the
canary is promoted.
### HTTP Metrics
The canary analysis is using the following Prometheus queries:
**HTTP requests success rate percentage**
Spec:
```yaml
canaryAnalysis:
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
```
Istio query:
```javascript
sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace=~"$namespace",
destination_workload=~"$workload",
response_code!~"5.*"
}[$interval]
)
)
/
sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace=~"$namespace",
destination_workload=~"$workload"
}[$interval]
)
)
```
Envoy query (App Mesh or Gloo):
```javascript
sum(
rate(
envoy_cluster_upstream_rq{
kubernetes_namespace="$namespace",
kubernetes_pod_name=~"$workload",
envoy_response_code!~"5.*"
}[$interval]
)
)
/
sum(
rate(
envoy_cluster_upstream_rq{
kubernetes_namespace="$namespace",
kubernetes_pod_name=~"$workload"
}[$interval]
)
)
```
**HTTP requests milliseconds duration P99**
Spec:
```yaml
canaryAnalysis:
metrics:
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 1m
```
Istio query:
```javascript
histogram_quantile(0.99,
sum(
irate(
istio_request_duration_seconds_bucket{
reporter="destination",
destination_workload=~"$workload",
destination_workload_namespace=~"$namespace"
}[$interval]
)
) by (le)
)
```
Envoy query (App Mesh or Gloo):
```javascript
histogram_quantile(0.99,
sum(
irate(
envoy_cluster_upstream_rq_time_bucket{
kubernetes_pod_name=~"$workload",
kubernetes_namespace=~"$namespace"
}[$interval]
)
) by (le)
)
```
> **Note** that the metric interval should be lower or equal to the control loop interval.
### Custom Metrics
The canary analysis can be extended with custom Prometheus queries.
```yaml
canaryAnalysis:
threshold: 1
maxWeight: 50
stepWeight: 5
metrics:
- name: "404s percentage"
threshold: 5
query: |
100 - sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace="test",
destination_workload="podinfo",
response_code!="404"
}[1m]
)
)
/
sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace="test",
destination_workload="podinfo"
}[1m]
)
) * 100
```
The above configuration validates the canary by checking
if the HTTP 404 req/sec percentage is below 5 percent of the total traffic.
If the 404s rate reaches the 5% threshold, then the canary fails.
```yaml
canaryAnalysis:
threshold: 1
maxWeight: 50
stepWeight: 5
metrics:
- name: "rpc error rate"
threshold: 5
query: |
100 - sum(
rate(
grpc_server_handled_total{
grpc_service="my.TestService",
grpc_code!="OK"
}[1m]
)
)
/
sum(
rate(
grpc_server_started_total{
grpc_service="my.TestService"
}[1m]
)
) * 100
```
The above configuration validates the canary by checking if the percentage of
non-OK GRPC req/sec is below 5 percent of the total requests. If the non-OK
rate reaches the 5% threshold, then the canary fails.
When specifying a query, Flagger will run the promql query and convert the result to float64.
Then it compares the query result value with the metric threshold value.
### Webhooks
The canary analysis can be extended with webhooks. Flagger will call each webhook URL and
determine from the response status code (HTTP 2xx) if the canary is failing or not.
There are three types of hooks:
* Confirm-rollout hooks are executed before scaling up the canary deployment and can be used for manual approval.
The rollout is paused until the hook returns a successful HTTP status code.
* Pre-rollout hooks are executed before routing traffic to canary.
The canary advancement is paused if a pre-rollout hook fails and if the number of failures reaches the
threshold the canary will be rolled back.
* Rollout hooks are executed during the analysis on each iteration before the metric checks.
If a rollout hook call fails the canary advancement is paused and eventually rolled back.
* Confirm-promotion hooks are executed before the promotion step.
The canary promotion is paused until the hooks return HTTP 200.
While the promotion is paused, Flagger will continue to run the metrics checks and rollout hooks.
* Post-rollout hooks are executed after the canary has been promoted or rolled back.
If a post rollout hook fails the error is logged.
Spec:
```yaml
canaryAnalysis:
webhooks:
- name: "start gate"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/approve
- name: "smoke test"
type: pre-rollout
url: http://flagger-helmtester.kube-system/
timeout: 3m
metadata:
type: "helm"
cmd: "test podinfo --cleanup"
- name: "load test"
type: rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
cmd: "hey -z 1m -q 5 -c 2 http://podinfo-canary.test:9898/"
- name: "promotion gate"
type: confirm-promotion
url: http://flagger-loadtester.test/gate/approve
- name: "notify"
type: post-rollout
url: http://telegram.bot:8080/
timeout: 5s
metadata:
some: "message"
```
> **Note** that the sum of all rollout webhooks timeouts should be lower than the analysis interval.
Webhook payload (HTTP POST):
```json
{
"name": "podinfo",
"namespace": "test",
"phase": "Progressing",
"metadata": {
"test": "all",
"token": "16688eb5e9f289f1991c"
}
}
```
Response status codes:
* 200-202 - advance canary by increasing the traffic weight
* timeout or non-2xx - halt advancement and increment failed checks
On a non-2xx response Flagger will include the response body (if any) in the failed checks log and Kubernetes events.
### Load Testing
For workloads that are not receiving constant traffic Flagger can be configured with a webhook,
that when called, will start a load test for the target workload.
If the target workload doesn't receive any traffic during the canary analysis,
Flagger metric checks will fail with "no values found for metric request-success-rate".
Flagger comes with a load testing service based on [rakyll/hey](https://github.com/rakyll/hey)
that generates traffic during analysis when configured as a webhook.
![Flagger Load Testing Webhook](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-load-testing.png)
First you need to deploy the load test runner in a namespace with sidecar injection enabled:
```bash
export REPO=https://raw.githubusercontent.com/weaveworks/flagger/master
kubectl -n test apply -f ${REPO}/artifacts/loadtester/deployment.yaml
kubectl -n test apply -f ${REPO}/artifacts/loadtester/service.yaml
```
Or by using Helm:
```bash
helm repo add flagger https://flagger.app
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test \
--set cmd.timeout=1h
```
When deployed the load tester API will be available at `http://flagger-loadtester.test/`.
Now you can add webhooks to the canary analysis spec:
```yaml
webhooks:
- name: load-test-get
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
- name: load-test-post
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 -m POST -d '{test: 2}' http://podinfo-canary.test:9898/echo"
```
When the canary analysis starts, Flagger will call the webhooks and the load tester will run the `hey` commands
in the background, if they are not already running. This will ensure that during the
analysis, the `podinfo-canary.test` service will receive a steady stream of GET and POST requests.
If your workload is exposed outside the mesh you can point `hey` to the
public URL and use HTTP2.
```yaml
webhooks:
- name: load-test-get
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 -h2 https://podinfo.example.com/"
```
For gRPC services you can use [bojand/ghz](https://github.com/bojand/ghz) which is a similar tool to Hey but for gRPC:
```yaml
webhooks:
- name: grpc-load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "ghz -z 1m -q 10 -c 2 --insecure podinfo.test:9898"
```
`ghz` uses reflection to identify which gRPC method to call. If you do not wish to enable reflection for your gRPC service you can implement a standardized health check from the [grpc-proto](https://github.com/grpc/grpc-proto) library. To use this [health check schema](https://github.com/grpc/grpc-proto/blob/master/grpc/health/v1/health.proto) without reflection you can pass a parameter to `ghz` like this:
```yaml
webhooks:
- name: grpc-load-test-no-reflection
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "ghz --insecure --proto=/tmp/ghz/health.proto --call=grpc.health.v1.Health/Check podinfo.test:9898"
```
The load tester can run arbitrary commands as long as the binary is present in the container image.
For example, if you want to replace `hey` with another CLI, you can create your own Docker image:
```dockerfile
FROM weaveworks/flagger-loadtester:<VER>
RUN curl -Lo /usr/local/bin/my-cli https://github.com/user/repo/releases/download/ver/my-cli \
&& chmod +x /usr/local/bin/my-cli
```
### Load Testing Delegation
The load tester can also forward testing tasks to external tools; at the moment only [nGrinder](https://github.com/naver/ngrinder)
is supported.
To use this feature, add a load test task of type 'ngrinder' to the canary analysis spec:
```yaml
webhooks:
- name: load-test-post
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
# type of this load test task, cmd or ngrinder
type: ngrinder
# base url of your nGrinder controller server
server: http://ngrinder-server:port
# id of the test to clone from, the test must have been defined.
clone: 100
# user name and base64 encoded password to authenticate against the nGrinder server
username: admin
passwd: YWRtaW4=
# the interval between nGrinder test status polls, defaults to 1s
pollInterval: 5s
```
When the canary analysis starts, the load tester will initiate a [clone_and_start request](https://github.com/naver/ngrinder/wiki/REST-API-PerfTest)
to the nGrinder server and start a new performance test. The load tester will periodically poll the nGrinder server
for the status of the test, and prevent duplicate requests from being sent in subsequent analysis loops.
### Integration Testing
Flagger comes with a testing service that can run Helm tests or Bats tests when configured as a webhook.
Deploy the Helm test runner in the `kube-system` namespace using the `tiller` service account:
```bash
helm repo add flagger https://flagger.app
helm upgrade -i flagger-helmtester flagger/loadtester \
--namespace=kube-system \
--set serviceAccountName=tiller
```
When deployed the Helm tester API will be available at `http://flagger-helmtester.kube-system/`.
Now you can add pre-rollout webhooks to the canary analysis spec:
```yaml
canaryAnalysis:
webhooks:
- name: "smoke test"
type: pre-rollout
url: http://flagger-helmtester.kube-system/
timeout: 3m
metadata:
type: "helm"
cmd: "test {{ .Release.Name }} --cleanup"
```
When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary.
If the helm test fails, Flagger will retry until the analysis threshold is reached and the canary is rolled back.
If you are using Helm v3, you'll have to create a dedicated service account and add the release namespace to the test command:
```yaml
canaryAnalysis:
webhooks:
- name: "smoke test"
type: pre-rollout
url: http://flagger-helmtester.kube-system/
timeout: 3m
metadata:
type: "helmv3"
cmd: "test run {{ .Release.Name }} --cleanup -n {{ .Release.Namespace }}"
```
As an alternative to Helm you can use the [Bash Automated Testing System](https://github.com/bats-core/bats-core) to run your tests.
```yaml
canaryAnalysis:
webhooks:
- name: "acceptance tests"
type: pre-rollout
url: http://flagger-batstester.default/
timeout: 5m
metadata:
type: "bash"
cmd: "bats /tests/acceptance.bats"
```
Note that you should create a ConfigMap with your Bats tests and mount it inside the tester container.
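A minimal sketch of such a ConfigMap, assuming the tester deployment mounts it at `/tests` (the test body and names are illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: bats-tests
  namespace: default
data:
  acceptance.bats: |
    @test "canary root endpoint returns HTTP 200" {
      run curl -s -o /dev/null -w '%{http_code}' http://podinfo-canary.test:9898/
      [ "$output" = "200" ]
    }
```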
### Manual Gating
For manual approval of a canary deployment you can use the `confirm-rollout` and `confirm-promotion` webhooks.
The confirmation rollout hooks are executed before the pre-rollout hooks.
Flagger will halt the canary traffic shifting and analysis until the confirm webhook returns HTTP status 200.
Manual gating with Flagger's tester:
```yaml
canaryAnalysis:
webhooks:
- name: "gate"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/halt
```
The `/gate/halt` endpoint returns HTTP 403, thus blocking the rollout.
If you have notifications enabled, Flagger will post a message to Slack or MS Teams if a canary rollout is waiting for approval.
Change the URL to `/gate/approve` to start the canary analysis:
```yaml
canaryAnalysis:
webhooks:
- name: "gate"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/approve
```
Manual gating can be driven with Flagger's tester API. Set the confirmation URL to `/gate/check`:
```yaml
canaryAnalysis:
webhooks:
- name: "ask for confirmation"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/check
```
By default the gate is closed; you can start or resume the canary rollout with:
```bash
kubectl -n test exec -it flagger-loadtester-xxxx-xxxx sh
curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/gate/open
```
You can pause the rollout at any time with:
```bash
curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/gate/close
```
If a canary analysis is paused the status will change to waiting:
```bash
kubectl get canary/podinfo
NAME STATUS WEIGHT
podinfo Waiting 0
```
The `confirm-promotion` hook type can be used to manually approve the canary promotion.
While the promotion is paused, Flagger will continue to run the metrics checks and load tests.
```yaml
canaryAnalysis:
webhooks:
- name: "promotion gate"
type: confirm-promotion
url: http://flagger-loadtester.test/gate/halt
```
If you have notifications enabled, Flagger will post a message to Slack or MS Teams if a canary promotion is waiting for approval.


@@ -1,20 +1,20 @@
# Flagger Install on EKS App Mesh
This guide walks you through setting up Flagger and AWS App Mesh on EKS.
## App Mesh
The App Mesh integration with EKS is made out of the following components:
* Kubernetes custom resources
  * `mesh.appmesh.k8s.aws` defines a logical boundary for network traffic between the services
  * `virtualnode.appmesh.k8s.aws` defines a logical pointer to a Kubernetes workload
  * `virtualservice.appmesh.k8s.aws` defines the routing rules for a workload inside the mesh
* CRD controller - keeps the custom resources in sync with the App Mesh control plane
* Admission controller - injects the Envoy sidecar and assigns Kubernetes pods to App Mesh virtual nodes
* Telemetry service - Prometheus instance that collects and stores Envoy's metrics
## Create a Kubernetes cluster
In order to create an EKS cluster you can use [eksctl](https://eksctl.io).
Eksctl is an open source command-line utility made by Weaveworks in collaboration with Amazon.
@@ -26,7 +26,7 @@ brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
```
Create an EKS cluster with:
```bash
eksctl create cluster --name=appmesh \
@@ -36,8 +36,8 @@ eksctl create cluster --name=appmesh \
--appmesh-access
```
The above command will create a two-node cluster with the
App Mesh [IAM policy](https://docs.aws.amazon.com/app-mesh/latest/userguide/MESH_IAM_user_policies.html)
attached to the EKS node instance role.
Verify the install with:
@@ -46,107 +46,68 @@ Verify the install with:
kubectl get nodes
```
## Install Helm
Install the [Helm](https://docs.helm.sh/using_helm/#installing-helm) v3 command-line tool:
```text
brew install helm
```
Add the EKS repository to Helm:
```bash
helm repo add eks https://aws.github.io/eks-charts
```
## Enable horizontal pod auto-scaling
Install the Horizontal Pod Autoscaler \(HPA\) metrics provider:
```bash
helm upgrade -i metrics-server stable/metrics-server \
--namespace kube-system \
--set args[0]=--kubelet-preferred-address-types=InternalIP
```
After a minute, the metrics API should report CPU and memory usage for pods. You can verify the metrics API with:
```bash
kubectl -n kube-system top pods
```
## Install the App Mesh components
Install the App Mesh CRDs:
```bash
kubectl apply -k github.com/aws/eks-charts/stable/appmesh-controller//crds?ref=master
```
Create the `appmesh-system` namespace:
```bash
kubectl create ns appmesh-system
```
Install the App Mesh controller:
```bash
helm upgrade -i appmesh-controller eks/appmesh-controller \
--wait --namespace appmesh-system
```
Install the App Mesh admission controller:
```sh
helm upgrade -i appmesh-inject eks/appmesh-inject \
--wait --namespace appmesh-system \
--set mesh.create=true \
--set mesh.name=global
```
Verify that the global mesh is active:
```bash
kubectl describe mesh
Status:
Mesh Condition:
Status: True
Type: MeshActive
```
In order to collect the App Mesh metrics that Flagger needs to run the canary analysis,
you'll need to set up a Prometheus instance to scrape the Envoy sidecars.
Install the App Mesh Prometheus:
```bash
helm upgrade -i appmesh-prometheus eks/appmesh-prometheus \
--wait --namespace appmesh-system
```
## Install Flagger
Add Flagger Helm repository:
@@ -166,35 +127,25 @@ Deploy Flagger in the _**appmesh-system**_ namespace:
helm upgrade -i flagger flagger/flagger \
--namespace=appmesh-system \
--set crd.create=false \
--set meshProvider=appmesh:v1beta2 \
--set metricsServer=http://appmesh-prometheus:9090
```
You can enable Slack or MS Teams notifications with:
```bash
helm upgrade -i flagger flagger/flagger \
--reuse-values \
--namespace=appmesh-system \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general \
--set slack.user=flagger
```
## Install Grafana
Deploy App Mesh Grafana that comes with a dashboard for monitoring Flagger's canary releases:
```bash
helm upgrade -i appmesh-grafana eks/appmesh-grafana \
--namespace appmesh-system
```
You can access Grafana using port forwarding:
```bash
kubectl -n appmesh-system port-forward svc/appmesh-grafana 3000:3000
```
Now that you have Flagger running,
you can try the [App Mesh canary deployments tutorial](https://docs.flagger.app/usage/appmesh-progressive-delivery).


@@ -1,13 +1,12 @@
# Flagger Install on GKE Istio
This guide walks you through setting up Flagger and Istio on Google Kubernetes Engine.
![GKE Cluster Overview](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-gke-istio.png)
## Prerequisites
You will be creating a cluster on Google's Kubernetes Engine \(GKE\), if you don't have an account you can sign up [here](https://cloud.google.com/free/) for free credits.
Log in to Google Cloud, create a project and enable billing for it.
@@ -39,7 +38,7 @@ Install the kubectl command-line tool:
gcloud components install kubectl
```
## GKE cluster setup
Create a cluster with the Istio add-on:
@@ -61,9 +60,7 @@ gcloud beta container clusters create istio \
--istio-config=auth=MTLS_PERMISSIVE
```
The above command will create a default node pool consisting of two `n1-highcpu-4` \(vCPU: 4, RAM 3.60GB, DISK: 30GB\) preemptible VMs. Preemptible VMs are up to 80% cheaper than regular instances and are terminated and replaced after a maximum of 24 hours.
Set up credentials for `kubectl`:
@@ -85,9 +82,9 @@ Validate your setup with:
kubectl -n istio-system get svc
```
In a couple of seconds GCP should allocate an external IP to the `istio-ingressgateway` service.
## Cloud DNS setup
You will need an internet domain and access to the registrar to change the name servers to Google Cloud DNS.
@@ -147,7 +144,7 @@ Verify that the wildcard DNS is working \(replace `example.com` with your domain
watch host test.example.com
```
## Install Helm
Install the [Helm](https://docs.helm.sh/using_helm/#installing-helm) command-line tool:
@@ -162,7 +159,7 @@ kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
```
Deploy Tiller in the `kube-system` namespace:
@@ -171,15 +168,13 @@ Deploy Tiller in the `kube-system` namespace:
helm init --service-account tiller
```
You should consider using SSL between Helm and Tiller, for more information on securing your Helm installation see [docs.helm.sh](https://docs.helm.sh/using_helm/#securing-your-helm-installation).
## Install cert-manager
Jetstack's [cert-manager](https://github.com/jetstack/cert-manager) is a Kubernetes operator that automatically creates and manages TLS certs issued by Let's Encrypt.
You'll be using cert-manager to provision a wildcard certificate for the Istio ingress gateway.
Install cert-manager's CRDs:
@@ -208,7 +203,7 @@ helm upgrade -i cert-manager \
jetstack/cert-manager
```
## Istio Gateway TLS setup
![Istio Let's Encrypt](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/istio-cert-manager-gke.png)
@@ -246,8 +241,7 @@ kubectl create secret generic cert-manager-credentials \
--namespace=istio-system
```
Create a letsencrypt issuer for CloudDNS \(replace `email@example.com` with a valid email address and `my-gcp-project` with your project ID\):
```yaml
apiVersion: certmanager.k8s.io/v1alpha1
@@ -322,16 +316,11 @@ Recreate Istio ingress gateway pods:
kubectl -n istio-system get pods -l istio=ingressgateway
```
Note that Istio gateway doesn't reload the certificates from the TLS secret on cert-manager renewal. Since the GKE cluster is made out of preemptible VMs the gateway pods will be replaced once every 24h; if you're not using preemptible nodes then you need to manually delete the gateway pods every two months before the certificate expires.
## Install Prometheus
The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run the canary analysis you have to deploy the following Prometheus configuration that's similar to the one that comes with the official Istio Helm chart.
Find the GKE Istio version with:
@@ -346,7 +335,7 @@ kubectl -n istio-system apply -f \
https://storage.googleapis.com/gke-release/istio/release/1.0.6-gke.3/patches/install-prometheus.yaml
```
## Install Flagger and Grafana
Add Flagger Helm repository:
@@ -408,3 +397,4 @@ kubectl apply -f ./grafana-virtual-service.yaml
```
Navigate to `http://grafana.example.com` in your browser and you should be redirected to the HTTPS version.


@@ -1,15 +1,12 @@
# Flagger Install on Kubernetes
This guide walks you through setting up Flagger on a Kubernetes cluster with Helm v3 or Kustomize.
## Prerequisites
Flagger requires a Kubernetes cluster **v1.14** or newer.
## Install Flagger with Helm
Add Flagger Helm repository:
@@ -33,6 +30,26 @@ helm upgrade -i flagger flagger/flagger \
--set metricsServer=http://prometheus:9090
```
Note that Flagger depends on Istio telemetry and Prometheus; if you're installing Istio with istioctl
then you should be using the [default profile](https://istio.io/docs/setup/additional-setup/config-profiles/).
For Istio multi-cluster shared control plane you can install Flagger
on each remote cluster and set the Istio control plane host cluster kubeconfig:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://istio-cluster-prometheus:9090 \
--set istio.kubeconfig.secretName=istio-kubeconfig \
--set istio.kubeconfig.key=kubeconfig
```
Note that the Istio kubeconfig must be stored in a Kubernetes secret with a data key named `kubeconfig`.
For more details on how to configure Istio multi-cluster credentials
read the [Istio docs](https://istio.io/docs/setup/install/multicluster/shared-vpn/#credentials).
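A sketch of such a secret, assuming the host cluster kubeconfig is stored under the `kubeconfig` key (the secret name matches the `istio.kubeconfig.secretName` value above):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio-kubeconfig
  namespace: istio-system
type: Opaque
stringData:
  kubeconfig: |
    apiVersion: v1
    kind: Config
    # clusters, contexts and users of the Istio control plane host cluster go here
```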
Deploy Flagger for Linkerd:
```bash
@@ -43,8 +60,26 @@ helm upgrade -i flagger flagger/flagger \
--set metricsServer=http://linkerd-prometheus:9090
```
Deploy Flagger for App Mesh:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=appmesh-system \
--set crd.create=false \
--set meshProvider=appmesh \
--set metricsServer=http://appmesh-prometheus:9090
```
You can install Flagger in any namespace as long as it can talk to the Prometheus service on port 9090.
For ingress controllers, the install instructions are:
* [Contour](https://docs.flagger.app/tutorials/contour-progressive-delivery)
* [Gloo](https://docs.flagger.app/tutorials/gloo-progressive-delivery)
* [NGINX](https://docs.flagger.app/tutorials/nginx-progressive-delivery)
* [Skipper](https://docs.flagger.app/tutorials/skipper-progressive-delivery)
* [Traefik](https://docs.flagger.app/tutorials/traefik-progressive-delivery)
Enable **Slack** notifications:
```bash
@@ -65,32 +100,30 @@ helm upgrade -i flagger flagger/flagger \
--set msteams.url=https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK
```
You can use the helm template command and apply the generated yaml with kubectl:
```bash
# generate
helm fetch --untar --untardir . flagger/flagger &&
helm template flagger ./flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
> flagger.yaml
# apply
kubectl apply -f flagger.yaml
```
To uninstall the Flagger release with Helm run:
```text
helm delete flagger
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
> **Note** that on uninstall the Canary CRD will not be removed. Deleting the CRD will make Kubernetes
> remove all the objects owned by Flagger like Istio virtual services, Kubernetes deployments and ClusterIP services.
If you want to remove all the objects created by Flagger you have to delete the Canary CRD with kubectl:
@@ -98,7 +131,7 @@ If you want to remove all the objects created by Flagger you have delete the Can
kubectl delete crd canaries.flagger.app
```
## Install Grafana with Helm
Flagger comes with a Grafana dashboard made for monitoring the canary analysis.
@@ -117,13 +150,12 @@ Or use helm template command and apply the generated yaml with kubectl:
```bash
# generate
helm fetch --untar --untardir . flagger/grafana &&
helm template flagger-grafana ./grafana \
--namespace=istio-system \
> flagger-grafana.yaml
# apply
kubectl apply -f flagger-grafana.yaml
```
You can access Grafana using port forwarding:
@@ -132,39 +164,30 @@ You can access Grafana using port forwarding:
kubectl -n istio-system port-forward svc/flagger-grafana 3000:80
```
## Install Flagger with Kustomize
As an alternative to Helm, Flagger can be installed with Kustomize **3.5.0** or newer.
**Service mesh specific installers**
Install Flagger for Istio:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/istio
```
This deploys Flagger in the `istio-system` namespace and sets the metrics server URL to Istio's Prometheus instance.
Note that you'll need kubectl 1.14 to run the above command, or you can download the
[kustomize binary](https://github.com/kubernetes-sigs/kustomize/releases) and run:
```bash
kustomize build https://github.com/weaveworks/flagger/kustomize/istio | kubectl apply -f -
```
Install Flagger for AWS App Mesh:
```bash
kustomize build https://github.com/weaveworks/flagger/kustomize/appmesh | kubectl apply -f -
```
This deploys Flagger and sets the metrics server URL to App Mesh's Prometheus instance.
Install Flagger for Linkerd:
```bash
kustomize build https://github.com/weaveworks/flagger/kustomize/linkerd | kubectl apply -f -
```
This deploys Flagger in the `linkerd` namespace and sets the metrics server URL to Linkerd's Prometheus instance.
If you want to install a specific Flagger release, add the version number to the URL:
```bash
kustomize build https://github.com/weaveworks/flagger/kustomize/linkerd?ref=v1.0.0 | kubectl apply -f -
```
**Generic installer**
Install Flagger and Prometheus for Contour, Gloo, NGINX, Skipper, or Traefik ingress:
```bash
kustomize build https://github.com/weaveworks/flagger/kustomize/kubernetes | kubectl apply -f -
```
This deploys Flagger and Prometheus in the `flagger-system` namespace, sets the metrics server URL
to `http://flagger-prometheus.flagger-system:9090` and the mesh provider to `kubernetes`.
The Prometheus instance has a data retention of two hours and is configured to scrape all pods in your cluster
that have the `prometheus.io/scrape: "true"` annotation.
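For example, a pod template opts into scraping through its annotations; the port annotation below is an assumption based on the common Prometheus convention and should point at your application's metrics port:

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    # illustrative: the port your app exposes metrics on
    prometheus.io/port: "9898"
```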
To target a different provider you can specify it in the canary custom resource:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: app
  namespace: test
spec:
  # can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, gloo, traefik
  # use the kubernetes provider for Blue/Green style deployments
  provider: nginx
```
**Customized installer**
Create a kustomization file using Flagger as base and patch the container args:
```bash
cat > kustomization.yaml <<EOF
namespace: istio-system
bases:
  - github.com/weaveworks/flagger/kustomize/base/flagger
patches:
- target:
    kind: Deployment
    name: flagger
  patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: flagger
    spec:
      template:
        spec:
          containers:
          - name: flagger
            args:
              - -mesh-provider=istio
              - -metrics-server=http://prometheus.istio-system:9090
              - -slack-user=flagger
              - -slack-channel=alerts
              - -slack-url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
EOF
```
Install Flagger for Istio with Slack notifications:
```bash
kustomize build . | kubectl apply -f -
```
If you want to use MS Teams instead of Slack, replace `-slack-url` with `-msteams-url` and set the webhook address
to `https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK`.
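In other words, the patched container args would end up looking roughly like this (a sketch; the webhook URL is a placeholder):

```yaml
args:
  - -mesh-provider=istio
  - -metrics-server=http://prometheus.istio-system:9090
  - -msteams-url=https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK
```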


# Flagger Install with SuperGloo
This guide walks you through setting up Flagger on a Kubernetes cluster using [SuperGloo](https://github.com/solo-io/supergloo).
SuperGloo by [Solo.io](https://solo.io) is an opinionated abstraction layer that simplifies the installation, management, and operation of your service mesh. It supports running multiple ingresses with multiple meshes (Istio, App Mesh, Consul Connect and Linkerd 2) in the same cluster.
## Prerequisites
Flagger requires a Kubernetes cluster **v1.11** or newer with the following admission controllers enabled:
* MutatingAdmissionWebhook
* ValidatingAdmissionWebhook
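Most managed Kubernetes offerings enable these by default; on a self-managed control plane they correspond to the API server's admission plugins flag, roughly:

```bash
# sketch: the webhook admission plugins must appear in the enabled list
kube-apiserver --enable-admission-plugins=NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
```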
## Install Istio with SuperGloo
### Install SuperGloo command line interface helper
SuperGloo includes a command line helper (CLI) that makes operation of SuperGloo easier. The CLI is not required for SuperGloo to function correctly.
If you use the [Homebrew](https://brew.sh) package manager, run the following commands to install the SuperGloo CLI.
```bash
brew tap solo-io/tap
```

Or install the CLI with the install script and add it to your path:

```bash
curl -sL https://run.solo.io/supergloo/install | sh
export PATH=$HOME/.supergloo/bin:$PATH
```
### Install SuperGloo controller
Deploy the SuperGloo controller in the `supergloo-system` namespace:
```bash
helm repo add supergloo http://storage.googleapis.com/supergloo-helm

helm upgrade --install supergloo supergloo/supergloo --namespace supergloo-system
```
### Install Istio using SuperGloo
Create the `istio-system` namespace and install Istio with traffic management, telemetry and Prometheus enabled:
```bash
supergloo install istio --name istio \
--version=${ISTIO_VER}
```
This creates a Kubernetes custom resource like the following.
```yaml
apiVersion: supergloo.solo.io/v1
spec:
  istioVersion: 1.0.6
```
### Allow Flagger to manipulate SuperGloo
Create a cluster role binding so that Flagger can manipulate SuperGloo custom resources:
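A sketch of such a binding, assuming Flagger runs with the `flagger` service account in `istio-system`; the cluster role name below is an assumption and should match a role that grants access to the SuperGloo custom resources:

```bash
kubectl create clusterrolebinding flagger-supergloo \
--clusterrole=mesh-discovery \
--serviceaccount=istio-system:flagger
```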
Wait for the Istio control plane and Prometheus to become ready:

```bash
kubectl --namespace istio-system rollout status deployment/istio-sidecar-injector
kubectl --namespace istio-system rollout status deployment/prometheus
```
## Install Flagger
Add Flagger Helm repository:
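A sketch of the usual command, assuming the public Flagger chart repository URL:

```bash
helm repo add flagger https://flagger.app
```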
Deploy Flagger with Slack notifications enabled:

```bash
helm upgrade -i flagger flagger/flagger \
--set slack.user=flagger
```
## Install Grafana
Flagger comes with a Grafana dashboard made for monitoring the canary analysis.
You can access Grafana using port forwarding:

```bash
kubectl -n istio-system port-forward svc/flagger-grafana 3000:80
```
## Install Load Tester
Flagger comes with an optional load testing service that generates traffic during canary analysis when configured as a webhook.
Deploy the load test runner with Helm:
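A minimal sketch, assuming the `flagger/loadtester` chart from the same repository and a `test` namespace with sidecar injection enabled (see the note below); adjust both to your setup:

```bash
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
```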
Or apply the generated yaml with kubectl:

```bash
kubectl apply -f $HOME/flagger-loadtester.yaml
```
> **Note** that the load tester should be deployed in a namespace with Istio sidecar injection enabled.
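Once running, the load tester is referenced from a canary's analysis webhooks. A sketch, with an illustrative load tester address and a hypothetical `podinfo` target service:

```yaml
# fragment of a Canary spec; URLs and the hey command target are illustrative
analysis:
  webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```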
