Compare commits

...

148 Commits

Author SHA1 Message Date
Stefan Prodan
f6fa5e3891 Merge pull request #270 from weaveworks/prep-0.18.2
Release v0.18.2
2019-08-05 18:57:54 +03:00
stefanprodan
a305a0b705 Release v0.18.2 2019-08-05 18:43:57 +03:00
Stefan Prodan
dfe619e2ea Merge pull request #269 from weaveworks/helm-circleci
Publish Helm chart from CircleCI
2019-08-05 17:57:21 +03:00
stefanprodan
2b3d425b70 Publish Helm chart from CircleCI 2019-08-05 17:08:33 +03:00
Stefan Prodan
6e55fea413 Merge pull request #268 from weaveworks/istio-1.2.3
Update e2e backends
2019-08-03 15:44:53 +03:00
stefanprodan
b6a08b6615 Fix AppMesh mesh name in docs 2019-08-03 15:24:31 +03:00
stefanprodan
eaa6906516 Update e2e NGINX ingress to v1.12.1 2019-08-03 13:42:27 +03:00
stefanprodan
62a7a92f2a Update e2e Gloo to v0.18.8 2019-08-03 13:01:57 +03:00
stefanprodan
3aeb0945c5 Update e2e Istio to v1.2.3 2019-08-03 12:05:21 +03:00
Stefan Prodan
e8c85efeae Merge pull request #267 from fcantournet/fix_virtualservices_multipleports
Fix Port discovery with multiple port services
2019-08-03 12:04:04 +03:00
Félix Cantournet
6651f6452b Multiple port canary: fix FAQ and add e2e tests 2019-08-02 14:23:58 +02:00
Félix Cantournet
0ca48d77be Fix Port discovery with multiple port services
This fixes issue https://github.com/weaveworks/flagger/issues/263

We actually don't need to specify any ports in the VirtualService
and DestinationRule.
Istio creates clusters/listeners for each named port declared in
the Kubernetes services, and the router can be shared as it operates only on L7 criteria.

Also contains a tiny clean-up of imports
2019-08-02 10:07:00 +02:00
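To illustrate the fix above: with Istio's named-port discovery, a multi-port workload only needs its ports named on the Kubernetes Service, so the generated VirtualService/DestinationRule can stay port-free. The service name, namespace and port numbers below are hypothetical, a minimal sketch rather than anything taken from this changeset.

apiVersion: v1
kind: Service
metadata:
  name: podinfo            # hypothetical multi-port service
  namespace: test
spec:
  selector:
    app: podinfo
  ports:
    # Istio builds a cluster/listener per named port declared here,
    # so the routing objects need no port list and operate on L7 only
    - name: http
      port: 9898
      targetPort: 9898
    - name: grpc
      port: 9999
      targetPort: 9999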
Stefan Prodan
a9e0e018e3 Merge pull request #266 from ExpediaInc/master
Parameterize image pull secrets for private docker repos
2019-08-01 11:07:53 +03:00
Sky Moon
122d11f445 Merge pull request #1 from ExpediaInc/parameterizeImagePullSecrets
parameterize image pull secrets for private docker repos.
2019-08-01 00:50:15 -07:00
cmoon
b03555858c parameterize image pull secrets for private docker repos. 2019-08-01 00:47:07 -07:00
Stefan Prodan
dcc5a40441 Merge pull request #262 from weaveworks/prep-0.18.1
Release v0.18.1
2019-07-30 13:52:25 +03:00
stefanprodan
8c949f59de Package helm charts locally 2019-07-30 13:35:09 +03:00
stefanprodan
e8d91a0375 Release v0.18.1 2019-07-30 13:22:51 +03:00
Stefan Prodan
fae9aa664d Merge pull request #261 from weaveworks/blue-green-e2e
Fix Blue/Green metrics provider and add e2e tests
2019-07-30 13:16:20 +03:00
stefanprodan
c31e9e5a96 Use Linkerd metrics for ingress and kubernetes routers 2019-07-30 13:00:28 +03:00
stefanprodan
99fff98274 Kustomize: set Flagger log level to info 2019-07-30 12:43:02 +03:00
stefanprodan
11d84bf35d Enable kubernetes metric provider 2019-07-30 12:27:53 +03:00
stefanprodan
e56ba480c7 Add Blue/Green e2e tests 2019-07-30 12:02:15 +03:00
Stefan Prodan
b9f0517c5d Merge pull request #255 from weaveworks/prep-0.18.0
Release v0.18.0
2019-07-29 16:06:23 +03:00
stefanprodan
6e66f02585 Update changelog 2019-07-29 15:52:50 +03:00
stefanprodan
5922e96044 Merge branch 'prep-0.18.0' of https://github.com/weaveworks/flagger into prep-0.18.0 2019-07-29 15:06:43 +03:00
stefanprodan
f36e7e414a Add manual gating link to readme 2019-07-29 15:06:31 +03:00
stefanprodan
606754d4a5 Disable supergloo e2e 2019-07-29 15:06:31 +03:00
stefanprodan
a3847e64df Add Kustomize download link to docs 2019-07-29 15:06:31 +03:00
stefanprodan
7a3f9f2e73 Use Kustomize for Istio e2e testing 2019-07-29 15:06:31 +03:00
stefanprodan
2e4e8b0bf9 Make installer work with Kustomize v3 2019-07-29 15:06:31 +03:00
stefanprodan
951fe80115 Use crd.create=false in docs 2019-07-29 15:06:30 +03:00
stefanprodan
c0a8149acb Add kubectl min version to Kustomize docs 2019-07-29 15:06:30 +03:00
stefanprodan
80b75b227d Add CRD install step to chart 2019-07-29 15:06:30 +03:00
stefanprodan
dff7de09f2 Use kubectl for CRD install 2019-07-29 15:06:30 +03:00
stefanprodan
b3bbadfccf Add v0.18.0 to changelog 2019-07-29 15:06:30 +03:00
stefanprodan
fc676e3cb7 Release v0.18.0 2019-07-29 15:06:30 +03:00
stefanprodan
860c82dff9 Remove test artifacts 2019-07-29 15:06:30 +03:00
Stefan Prodan
4829f5af7f Merge pull request #257 from weaveworks/promotion
Implement promotion finalising state
2019-07-29 15:03:05 +03:00
stefanprodan
c463b6b231 Add finalising state tests 2019-07-29 14:02:16 +03:00
stefanprodan
b2ca0c4c16 Implement finalising state
Set the canary status to finalising after routing the traffic back to the primary. Run one final loop before scaling the canary to zero so that the canary has a chance to process all inflight requests.
2019-07-29 13:52:11 +03:00
stefanprodan
69875cb3dc Add finalising status phase to CRD 2019-07-29 13:43:30 +03:00
stefanprodan
9e33a116d4 Add manual gating link to readme 2019-07-29 11:33:28 +03:00
stefanprodan
dab3d53b65 Disable supergloo e2e 2019-07-28 11:28:00 +03:00
stefanprodan
e3f8bff6fc Add Kustomize download link to docs 2019-07-27 15:51:22 +03:00
stefanprodan
0648d81d34 Use Kustomize for Istio e2e testing 2019-07-27 14:49:57 +03:00
stefanprodan
ece5c4401e Make installer work with Kustomize v3 2019-07-27 14:45:49 +03:00
stefanprodan
bfc64c7cf1 Use crd.create=false in docs 2019-07-27 13:20:55 +03:00
stefanprodan
0a2c134ece Add kubectl min version to Kustomize docs 2019-07-27 13:07:47 +03:00
stefanprodan
8bea9253c3 Add CRD install step to chart 2019-07-27 13:06:27 +03:00
stefanprodan
e1dacc3983 Use kubectl for CRD install 2019-07-26 15:52:00 +03:00
stefanprodan
0c6a7355e7 Add v0.18.0 to changelog 2019-07-26 14:05:19 +03:00
stefanprodan
83046282c3 Release v0.18.0 2019-07-26 13:53:40 +03:00
stefanprodan
65c9817295 Remove test artifacts 2019-07-26 13:51:15 +03:00
Stefan Prodan
e4905d3d35 Merge pull request #254 from weaveworks/podinfo
Use Kustomize installer in Linkerd docs
2019-07-26 13:44:51 +03:00
stefanprodan
6bc0670a7a Use Kustomize installer in Linkerd docs 2019-07-26 13:30:28 +03:00
stefanprodan
95ff6adc19 Use podinfo 1.7 in GitOps demo 2019-07-26 13:20:06 +03:00
stefanprodan
7ee51c7def Add podinfo to Kustomize installer 2019-07-26 13:19:36 +03:00
Stefan Prodan
dfa065b745 Merge pull request #251 from weaveworks/gates
Implement confirm rollout gate, hook and API
2019-07-26 01:40:35 +03:00
stefanprodan
e3b03debde Use podinfo v1.7 2019-07-26 01:25:44 +03:00
Stefan Prodan
ef759305cb Merge pull request #253 from grampelberg/master
Update Linkerd to use correct canaries directory.
2019-07-26 00:24:52 +03:00
grampelberg
ad65497d4e Update Linkerd to use correct canaries directory. 2019-07-25 11:10:52 -07:00
stefanprodan
163f5292b0 Push a notification when a canary is waiting for approval 2019-07-25 19:13:22 +03:00
stefanprodan
e07a82d024 Add manual gating to docs 2019-07-25 13:32:58 +03:00
stefanprodan
046245a8b5 Use Gloo 0.17.6 in e2e tests 2019-07-24 19:54:33 +03:00
stefanprodan
aa6a180bcc Remove Gloo NodePort from e2e tests 2019-07-24 19:44:06 +03:00
stefanprodan
c4d28e14fc Upgrade Gloo e2e to v0.17.5 2019-07-24 19:35:02 +03:00
stefanprodan
bc4bdcdc1c Upgrade Gloo e2e to v0.17.6 2019-07-24 19:21:41 +03:00
stefanprodan
be22ff9951 Bump load tester version 2019-07-24 16:28:46 +03:00
stefanprodan
f204fe53f4 Implement canary gating API with in-memory storage
POST /gate/[check|open|close]
2019-07-24 16:14:22 +03:00
stefanprodan
28e7e89047 Pause or resume analysis on confirmation gate toggle 2019-07-24 16:09:13 +03:00
stefanprodan
75d49304f3 Add confirm-rollout hook to docs 2019-07-24 12:17:11 +03:00
stefanprodan
04cbacb6e0 Implement confirm rollout gate and hook
The confirm-rollout hooks are executed before the pre-rollout hooks. Flagger will halt the canary rollout until the confirm webhook returns HTTP status 200.
2019-07-24 12:09:39 +03:00
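A minimal sketch of how a confirm-rollout gate could be declared in a canary's analysis spec, assuming the webhook type string matches the hook name used above; the webhook name, gate URL and namespace are placeholders, and only the behaviour (halt until the endpoint returns HTTP 200) comes from the commit message.

canaryAnalysis:
  webhooks:
    # runs before the pre-rollout hooks; the rollout is halted
    # until this endpoint answers with HTTP 200
    - name: confirm
      type: confirm-rollout
      url: http://flagger-loadtester.test/gate/check
      timeout: 5s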
stefanprodan
c46c7b9e21 Add canary status conditions to docs 2019-07-24 12:04:05 +03:00
stefanprodan
919dafa567 Add gate halt and approve endpoints 2019-07-24 12:02:44 +03:00
stefanprodan
dfdcfed26e Add Waiting canary status phase
The Waiting phase means the canary rollout is paused (waiting for confirmation to proceed)
2019-07-24 12:00:04 +03:00
Stefan Prodan
a0a4d4cfc5 Merge pull request #248 from weaveworks/ghz
Add gRPC load testing tool
2019-07-23 12:44:04 +03:00
stefanprodan
970a589fd3 Add load tester to kustomize installer 2019-07-23 12:30:38 +03:00
stefanprodan
56d2c0952a Add gRPC load test example to docs 2019-07-22 15:16:13 +03:00
stefanprodan
4871be0345 Release loadtester v0.5.0 2019-07-22 14:57:14 +03:00
stefanprodan
e3e112e279 Add gRPC load testing tool
https://ghz.sh
2019-07-22 14:55:19 +03:00
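Since the load tester executes webhook-supplied commands (see the hey example in the Canary manifest removed later in this diff), a gRPC test would presumably be wired up the same way; the proto path, call name and target address below are illustrative assumptions.

webhooks:
  - name: grpc-load-test
    url: http://flagger-loadtester.test/
    timeout: 5s
    metadata:
      # hypothetical ghz invocation; point it at your own proto, method and host:port
      cmd: "ghz --insecure --proto=/tmp/ghz/health.proto --call=grpc.health.v1.Health/Check podinfo.test:9898"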
Stefan Prodan
d2cbd40d89 Merge pull request #240 from weaveworks/refactor
Refactor canary change detection and status
2019-07-22 14:33:02 +03:00
stefanprodan
3786a49f00 Update Linkerd e2e to v2.4.0 2019-07-16 11:20:42 +02:00
stefanprodan
ff4aa62061 Retry canary status update on conflict 2019-07-10 11:31:20 +03:00
stefanprodan
9b6cfdeef7 Update Canary CRD helm chart and Kustomize 2019-07-10 09:55:46 +03:00
stefanprodan
9d89e0c83f Log status update error 2019-07-10 09:55:20 +03:00
stefanprodan
559cbd0d36 Pin NGINX helm chart to v1.8.2 2019-07-10 09:49:39 +03:00
stefanprodan
caea00e47f Pin NGINX helm chart to version 1.8.2 2019-07-10 09:42:49 +03:00
stefanprodan
b26542f38d Do not trigger a canary deployment on manual rollback
Save the primary spec hash and check if it matches the canary spec. If the canary hash is identical to the primary one, skip promotion.
2019-07-10 09:08:33 +03:00
Stefan Prodan
bbab7ce855 Merge pull request #238 from weaveworks/prep-0.17.0
Release v0.17.0
2019-07-10 08:46:10 +03:00
stefanprodan
afa2d079f6 Add status conditions and descriptions to CRD 2019-07-09 17:11:13 +03:00
stefanprodan
108bf9ca65 Add initializing canary phase/status condition reason
Fix HPA reconciliation min replicas diff
2019-07-09 17:10:43 +03:00
stefanprodan
438f952128 Implement status conditions
Add Promoted status condition with the following reasons: Initialized, Progressing, Succeeded, Failed
Usage: `kubectl wait canary/app --for=condition=promoted`
Fix: #184
2019-07-09 15:22:56 +03:00
stefanprodan
3e84799644 Detect changes in pod template metadata
Use the pod template spec hash to track changes (breaking)
2019-07-09 08:52:31 +03:00
stefanprodan
d6e80bac7f Update webhook mTLS FAQ
Fix: #239
2019-07-08 17:21:59 +03:00
stefanprodan
9b3b24bddf Add v0.17.0 changelog 2019-07-08 15:02:09 +03:00
stefanprodan
5c831ae482 Add Linkerd to docs 2019-07-08 13:43:01 +03:00
stefanprodan
78233fafd3 Release v0.17.0 2019-07-08 13:35:36 +03:00
Stefan Prodan
73c3e07859 Merge pull request #236 from weaveworks/leader-election
Implement leader election
2019-07-08 11:32:06 +03:00
stefanprodan
10c61daee4 Exit when losing leadership 2019-07-07 13:09:05 +03:00
stefanprodan
b1bb9fa114 Enable leader election for e2e testing 2019-07-07 12:19:09 +03:00
stefanprodan
a7f4b6d2ae Add leader election and pod anti affinity to chart 2019-07-07 12:08:08 +03:00
stefanprodan
b937c4ea8d Implement leader election
Add enable-leader-election and leader-election-namespace flags
2019-07-07 11:41:27 +03:00
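Based on the leaderElection.enabled and leaderElection.replicaCount parameters added to the chart README and values.yaml later in this diff, running Flagger in HA mode would look roughly like the Helm values below; the replica count of 2 is just an example.

# values.yaml sketch for HA mode
leaderElection:
  enabled: true
  replicaCount: 2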
Stefan Prodan
e577311b64 Merge pull request #235 from weaveworks/msteams
Implement MS Teams notifications
2019-07-07 11:40:00 +03:00
stefanprodan
b847345308 Add 5 seconds timeout to notifier 2019-07-06 18:02:45 +03:00
stefanprodan
85e683446f Add MS Teams to docs 2019-07-06 17:22:39 +03:00
stefanprodan
4f49aa5760 Add MS Teams webhook field to chart 2019-07-06 17:14:50 +03:00
stefanprodan
8ca9cf24bb Implement MS Teams notifier 2019-07-06 17:14:21 +03:00
stefanprodan
61d0216c21 Add traffic routing to notifications 2019-07-06 17:13:09 +03:00
stefanprodan
ba4a2406ba Refactor notifier to allow more implementations 2019-07-06 15:47:12 +03:00
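The chart gains an msteams.url value (see the values.yaml and README changes further down), so enabling Teams notifications should amount to setting the incoming webhook; the URL below is a placeholder.

# values.yaml sketch
msteams:
  # placeholder incoming webhook URL
  url: https://outlook.office.com/webhook/YOUR-TEAMS-WEBHOOK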
Stefan Prodan
c2974416b4 Merge pull request #234 from weaveworks/psp
Add pod security policy to Helm chart
2019-07-06 11:52:58 +03:00
stefanprodan
48fac4e876 Disable privilege escalation 2019-07-06 10:38:17 +03:00
stefanprodan
f0add9a67c Use a role binding for the PSP rbac 2019-07-05 17:10:05 +03:00
stefanprodan
20f9df01c2 Add pod security policy to Helm chart
- disable privileged, hostIPC, hostNetwork and hostPID
- add psp flag to chart readme
2019-07-05 16:47:41 +03:00
Stefan Prodan
514e850072 Merge pull request #232 from weaveworks/kustomize
Add Kustomize installer
2019-07-05 16:23:44 +03:00
stefanprodan
61fe78a982 Mention Prometheus data retention in docs 2019-07-04 15:07:05 +03:00
stefanprodan
c4b066c845 Add Kustomize installer to docs 2019-07-04 13:45:56 +03:00
stefanprodan
d24a23f3bd Kustomize installer: add installer readme 2019-07-04 13:29:02 +03:00
stefanprodan
22045982e2 Kustomize installer: add Linkerd overlay 2019-07-04 13:27:23 +03:00
stefanprodan
f496f1e18f Kustomize installer: add Istio overlay 2019-07-04 13:27:03 +03:00
stefanprodan
2e802432c4 Kustomize installer: add Kubernetes overlay 2019-07-04 13:26:52 +03:00
stefanprodan
a2f747e16f Kustomize installer: add Prometheus base manifests 2019-07-04 13:26:22 +03:00
stefanprodan
982338e162 Kustomize installer: add Flagger base manifests 2019-07-04 13:26:08 +03:00
Stefan Prodan
03fe4775dd Merge pull request #231 from weaveworks/gloo-0.14.2
Update Gloo and Prometheus
2019-07-02 09:24:14 +03:00
stefanprodan
def7d9bde0 Update Prometheus to v2.10.0 and set retention to 2h 2019-07-02 09:10:56 +03:00
stefanprodan
a58a7cbeeb Update Gloo to 0.14.2 2019-07-01 16:07:38 +03:00
Stefan Prodan
82ca66c23b Merge pull request #230 from weaveworks/linkerd
Add Linkerd support and e2e testing
2019-07-01 10:28:38 +03:00
stefanprodan
92c971c0d7 Add ingress and A/B testing example to Linkerd docs 2019-07-01 10:02:50 +03:00
stefanprodan
30c4faf72b Add Linkerd canary deployments docs 2019-07-01 09:23:38 +03:00
stefanprodan
85ee7d17cf Set min analysis interval to 10s 2019-07-01 09:23:05 +03:00
stefanprodan
a6d278ae91 Add Linkerd traffic split diagram 2019-06-30 13:52:59 +03:00
stefanprodan
ad8d02f701 Use Linkerd metrics when NGINX is the mesh ingress
Set the metrics provider to Linkerd Prometheus when using NGINX as Linkerd Ingress. This mitigates the lack of canary metrics in the NGINX controller exporter.
2019-06-30 13:03:27 +03:00
stefanprodan
00fa5542f7 Add linkerd as mesh provider 2019-06-30 12:46:23 +03:00
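With linkerd accepted as a mesh provider (see the provider list change in the README diff below), a canary can presumably opt into it per resource; apart from the provider field, everything in this snippet is a hypothetical example.

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # per-canary mesh provider; linkerd is added in this changeset
  provider: linkerd
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo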
stefanprodan
9ed2719d19 Add canary rollback test to Linkerd e2e 2019-06-30 10:36:37 +03:00
stefanprodan
8a809baf35 Linkerd e2e testing: set canary max weight to 50% 2019-06-29 16:06:16 +03:00
stefanprodan
ff90c42fa7 Fix Linkerd CLI install 2019-06-29 15:53:36 +03:00
stefanprodan
d651e8fe48 Fix Linkerd metrics test 2019-06-29 15:23:40 +03:00
stefanprodan
bc613905e9 Add Linkerd edge-19.6.4 e2e testing 2019-06-29 15:20:35 +03:00
stefanprodan
e3321118e5 Fix linkerd success rate query 2019-06-29 15:03:36 +03:00
Stefan Prodan
31f526cbd6 Merge pull request #229 from weaveworks/istio-1.2.2
Update Istio e2e to v1.2.2
2019-06-29 10:53:31 +03:00
stefanprodan
493554178f Update Istio e2e to v1.2.2
Disable galley and MCP
2019-06-29 10:27:40 +03:00
Stefan Prodan
004b1cc7dd Merge pull request #228 from weaveworks/updates
Update Grafana and Kubernetes Kind
2019-06-27 10:31:20 +03:00
stefanprodan
767602592c Bump podinfo chart version 2019-06-27 10:19:34 +03:00
stefanprodan
34676acaf5 Add Istio TLS mode to podinfo chart 2019-06-27 10:14:06 +03:00
stefanprodan
491ab7affa Update Grafana to v6.2.5 2019-06-27 10:13:25 +03:00
stefanprodan
b522bbd903 Update Kubernetes Kind to v0.4.0 2019-06-27 09:58:56 +03:00
Stefan Prodan
dd3bc28806 Merge pull request #227 from dcherman/validate-k8s-version
Validate the minimum supported k8s version
2019-06-27 09:56:25 +03:00
Daniel Herman
764e7e275d Validate the minimum supported k8s version
Fixes #103
2019-06-27 00:18:08 -04:00
138 changed files with 3522 additions and 968 deletions

View File

@@ -78,6 +78,17 @@ jobs:
- run: test/e2e-istio.sh
- run: test/e2e-tests.sh
e2e-kubernetes-testing:
machine: true
steps:
- checkout
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh
- run: test/e2e-kubernetes.sh
- run: test/e2e-kubernetes-tests.sh
e2e-smi-istio-testing:
machine: true
steps:
@@ -107,7 +118,7 @@ jobs:
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh 0.2.1
- run: test/e2e-kind.sh
- run: test/e2e-gloo.sh
- run: test/e2e-gloo-tests.sh
@@ -122,6 +133,58 @@ jobs:
- run: test/e2e-nginx.sh
- run: test/e2e-nginx-tests.sh
e2e-linkerd-testing:
machine: true
steps:
- checkout
- attach_workspace:
at: /tmp/bin
- run: test/container-build.sh
- run: test/e2e-kind.sh
- run: test/e2e-linkerd.sh
- run: test/e2e-linkerd-tests.sh
push-helm-charts:
docker:
- image: circleci/golang:1.12
steps:
- checkout
- run:
name: Install kubectl
command: sudo curl -L https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && sudo chmod +x /usr/local/bin/kubectl
- run:
name: Install helm
command: sudo curl -L https://storage.googleapis.com/kubernetes-helm/helm-v2.14.2-linux-amd64.tar.gz | tar xz && sudo mv linux-amd64/helm /bin/helm && sudo rm -rf linux-amd64
- run:
name: Initialize helm
command: helm init --client-only --kubeconfig=$HOME/.kube/kubeconfig
- run:
name: Lint charts
command: |
helm lint ./charts/*
- run:
name: Package charts
command: |
mkdir $HOME/charts
helm package ./charts/* --destination $HOME/charts
- run:
name: Publish charts
command: |
if echo "${CIRCLE_TAG}" | grep -Eq "[0-9]+(\.[0-9]+)*(-[a-z]+)?$"; then
REPOSITORY="https://weaveworksbot:${GITHUB_TOKEN}@github.com/weaveworks/flagger.git"
git config user.email weaveworksbot@users.noreply.github.com
git config user.name weaveworksbot
git remote set-url origin ${REPOSITORY}
git checkout gh-pages
mv -f $HOME/charts/*.tgz .
helm repo index . --url https://flagger.app
git add .
git commit -m "Publish Helm charts v${CIRCLE_TAG}"
git push origin gh-pages
else
echo "Not a release! Skip charts publish"
fi
workflows:
version: 2
build-test-push:
@@ -134,26 +197,30 @@ workflows:
- e2e-istio-testing:
requires:
- build-binary
- e2e-smi-istio-testing:
requires:
- build-binary
- e2e-supergloo-testing:
- e2e-kubernetes-testing:
requires:
- build-binary
# - e2e-supergloo-testing:
# requires:
# - build-binary
- e2e-gloo-testing:
requires:
- build-binary
- e2e-nginx-testing:
requires:
- build-binary
- e2e-linkerd-testing:
requires:
- build-binary
- push-container:
requires:
- build-binary
- e2e-istio-testing
- e2e-smi-istio-testing
- e2e-supergloo-testing
- e2e-kubernetes-testing
#- e2e-supergloo-testing
- e2e-gloo-testing
- e2e-nginx-testing
- e2e-linkerd-testing
release:
jobs:
@@ -172,6 +239,14 @@ workflows:
tags:
ignore: /^chart.*/
- push-binary:
requires:
- push-container
filters:
branches:
ignore: /.*/
tags:
ignore: /^chart.*/
- push-helm-charts:
requires:
- push-container
filters:

View File

@@ -2,6 +2,60 @@
All notable changes to this project are documented in this file.
## 0.18.2 (2019-08-05)
Fixes multi-port support for Istio
#### Fixes
- Fix port discovery for multiple port services [#267](https://github.com/weaveworks/flagger/pull/267)
#### Improvements
- Update e2e testing to Istio v1.2.3, Gloo v0.18.8 and NGINX ingress chart v1.12.1 [#268](https://github.com/weaveworks/flagger/pull/268)
## 0.18.1 (2019-07-30)
Fixes Blue/Green style deployments for Kubernetes and Linkerd providers
#### Fixes
- Fix Blue/Green metrics provider and add e2e tests [#261](https://github.com/weaveworks/flagger/pull/261)
## 0.18.0 (2019-07-29)
Adds support for [manual gating](https://docs.flagger.app/how-it-works#manual-gating) and pausing/resuming an ongoing analysis
#### Features
- Implement confirm rollout gate, hook and API [#251](https://github.com/weaveworks/flagger/pull/251)
#### Improvements
- Refactor canary change detection and status [#240](https://github.com/weaveworks/flagger/pull/240)
- Implement finalising state [#257](https://github.com/weaveworks/flagger/pull/257)
- Add gRPC load testing tool [#248](https://github.com/weaveworks/flagger/pull/248)
#### Breaking changes
- Due to the status sub-resource changes in [#240](https://github.com/weaveworks/flagger/pull/240), when upgrading Flagger the canaries status phase will be reset to `Initialized`
- Upgrading Flagger with Helm will fail due to Helm poor support of CRDs, see [workaround](https://github.com/weaveworks/flagger/issues/223)
## 0.17.0 (2019-07-08)
Adds support for Linkerd (SMI Traffic Split API), MS Teams notifications and HA mode with leader election
#### Features
- Add Linkerd support [#230](https://github.com/weaveworks/flagger/pull/230)
- Implement MS Teams notifications [#235](https://github.com/weaveworks/flagger/pull/235)
- Implement leader election [#236](https://github.com/weaveworks/flagger/pull/236)
#### Improvements
- Add [Kustomize](https://docs.flagger.app/install/flagger-install-on-kubernetes#install-flagger-with-kustomize) installer [#232](https://github.com/weaveworks/flagger/pull/232)
- Add Pod Security Policy to Helm chart [#234](https://github.com/weaveworks/flagger/pull/234)
## 0.16.0 (2019-06-23)
Adds support for running [Blue/Green deployments](https://docs.flagger.app/usage/blue-green) without a service mesh or ingress controller

View File

@@ -13,10 +13,15 @@ RUN curl -sSL "https://get.helm.sh/helm-v2.12.3-linux-amd64.tar.gz" | tar xvz &&
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helm && \
rm -rf linux-amd64
RUN curl -sSL "https://github.com/bojand/ghz/releases/download/v0.39.0/ghz_0.39.0_Linux_x86_64.tar.gz" | tar xz -C /tmp && \
mv /tmp/ghz /usr/local/bin && chmod +x /usr/local/bin/ghz && rm -rf /tmp/ghz-web
RUN ls /tmp
COPY ./bin/loadtester .
RUN chown -R app:app ./
USER app
ENTRYPOINT ["./loadtester"]
ENTRYPOINT ["./loadtester"]

View File

@@ -8,7 +8,14 @@ TS=$(shell date +%Y-%m-%d_%H-%M-%S)
run:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=istio -namespace=test \
-metrics-server=https://prometheus.istio.weavedx.com
-metrics-server=https://prometheus.istio.weavedx.com \
-enable-leader-election=true
run2:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=istio -namespace=test \
-metrics-server=https://prometheus.istio.weavedx.com \
-enable-leader-election=true \
-port=9092
run-appmesh:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=appmesh \
@@ -30,6 +37,10 @@ run-nop:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=none -namespace=bg \
-metrics-server=https://prometheus.istio.weavedx.com
run-linkerd:
GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=smi:linkerd -namespace=demo \
-metrics-server=https://linkerd-prometheus.istio.weavedx.com
build:
GIT_COMMIT=$$(git rev-list -1 HEAD) && GO111MODULE=on CGO_ENABLED=0 GOOS=linux go build -ldflags "-s -w -X github.com/weaveworks/flagger/pkg/version.REVISION=$${GIT_COMMIT}" -a -installsuffix cgo -o ./bin/flagger ./cmd/flagger/*
docker build -t weaveworks/flagger:$(TAG) . -f Dockerfile
@@ -52,8 +63,9 @@ test: test-fmt test-codegen
helm-package:
cd charts/ && helm package ./*
mv charts/*.tgz docs/
helm repo index docs --url https://weaveworks.github.io/flagger --merge ./docs/index.yaml
mv charts/*.tgz bin/
curl -s https://raw.githubusercontent.com/weaveworks/flagger/gh-pages/index.yaml > ./bin/index.yaml
helm repo index bin --url https://flagger.app --merge ./bin/index.yaml
helm-up:
helm upgrade --install flagger ./charts/flagger --namespace=istio-system --set crd.create=false
@@ -67,7 +79,8 @@ version-set:
sed -i '' "s/tag: $$current/tag: $$next/g" charts/flagger/values.yaml && \
sed -i '' "s/appVersion: $$current/appVersion: $$next/g" charts/flagger/Chart.yaml && \
sed -i '' "s/version: $$current/version: $$next/g" charts/flagger/Chart.yaml && \
echo "Version $$next set in code, deployment and charts"
sed -i '' "s/newTag: $$current/newTag: $$next/g" kustomize/base/flagger/kustomization.yaml && \
echo "Version $$next set in code, deployment, chart and kustomize"
version-up:
@next="$(VERSION_MINOR).$$(($(PATCH) + 1))" && \

View File

@@ -7,13 +7,13 @@
[![release](https://img.shields.io/github/release/weaveworks/flagger/all.svg)](https://github.com/weaveworks/flagger/releases)
Flagger is a Kubernetes operator that automates the promotion of canary deployments
using Istio, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
using Istio, Linkerd, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
The canary analysis can be extended with webhooks for running acceptance tests,
load tests or any other custom validation.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance
indicators like HTTP requests success rate, requests average duration and pods health.
Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack.
Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams.
![flagger-overview](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-overview.png)
@@ -35,10 +35,12 @@ Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.ap
* [Custom metrics](https://docs.flagger.app/how-it-works#custom-metrics)
* [Webhooks](https://docs.flagger.app/how-it-works#webhooks)
* [Load testing](https://docs.flagger.app/how-it-works#load-testing)
* [Manual gating](https://docs.flagger.app/how-it-works#manual-gating)
* [FAQ](https://docs.flagger.app/faq)
* Usage
* [Istio canary deployments](https://docs.flagger.app/usage/progressive-delivery)
* [Istio A/B testing](https://docs.flagger.app/usage/ab-testing)
* [Linkerd canary deployments](https://docs.flagger.app/usage/linkerd-progressive-delivery)
* [App Mesh canary deployments](https://docs.flagger.app/usage/appmesh-progressive-delivery)
* [NGINX ingress controller canary deployments](https://docs.flagger.app/usage/nginx-progressive-delivery)
* [Gloo ingress controller canary deployments](https://docs.flagger.app/usage/gloo-progressive-delivery)
@@ -67,7 +69,7 @@ metadata:
namespace: test
spec:
# service mesh provider (optional)
# can be: kubernetes, istio, appmesh, smi, nginx, gloo, supergloo
# can be: kubernetes, istio, linkerd, appmesh, nginx, gloo, supergloo
# use the kubernetes provider for Blue/Green style deployments
provider: istio
# deployment reference
@@ -155,19 +157,19 @@ For more details on how the canary analysis and promotion works please [read the
## Features
| Feature | Istio | App Mesh | NGINX | Gloo |
| -------------------------------------------- | ------------------ | ------------------ |------------------ |------------------ |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies filters) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Custom promql checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Traffic policy, CORS, retries and timeouts | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
| Feature | Istio | Linkerd | App Mesh | NGINX | Gloo |
| -------------------------------------------- | ------------------ | ------------------ |------------------ |------------------ |------------------ |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies filters) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Custom promql checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Traffic policy, CORS, retries and timeouts | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
## Roadmap
* Integrate with other service mesh technologies like Linkerd v2
* Integrate with other ingress controllers like Contour, HAProxy, ALB
* Add support for comparing the canary metrics to the primary ones and do the validation based on the derivation between the two
## Contributing

View File

@@ -25,7 +25,7 @@ spec:
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898

View File

@@ -25,7 +25,7 @@ spec:
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898

View File

@@ -25,7 +25,7 @@ spec:
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898

View File

@@ -5,17 +5,17 @@ metadata:
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: regexp:^1.4.*
flux.weave.works/tag.chart-image: regexp:^1.7.*
spec:
releaseName: backend
chart:
repository: https://flagger.app/
name: podinfo
version: 2.0.0
version: 2.2.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.4.0
tag: 1.7.0
httpServer:
timeout: 30s
canary:

View File

@@ -5,17 +5,17 @@ metadata:
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: semver:~1.4
flux.weave.works/tag.chart-image: semver:~1.7
spec:
releaseName: frontend
chart:
repository: https://flagger.app/
name: podinfo
version: 2.0.0
version: 2.2.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.4.0
tag: 1.7.0
backend: http://backend-podinfo:9898/echo
canary:
enabled: true

View File

@@ -11,8 +11,8 @@ spec:
chart:
repository: https://flagger.app/
name: loadtester
version: 0.1.0
version: 0.6.0
values:
image:
repository: quay.io/stefanprodan/flagger-loadtester
tag: 0.1.0
repository: weaveworks/flagger-loadtester
tag: 0.6.1

View File

@@ -1,59 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rollback (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
# Istio virtual service host names (optional)
hosts:
- app.iowa.weavedx.com
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# Istio Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

View File

@@ -1,16 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: podinfo-config-env
namespace: test
data:
color: blue
---
apiVersion: v1
kind: ConfigMap
metadata:
name: podinfo-config-vol
namespace: test
data:
output: console
textmode: "true"

View File

@@ -1,89 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.3.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
valueFrom:
configMapKeyRef:
name: podinfo-config-env
key: color
- name: SECRET_USER
valueFrom:
secretKeyRef:
name: podinfo-secret-env
key: user
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi
volumeMounts:
- name: configs
mountPath: /etc/podinfo/configs
readOnly: true
- name: secrets
mountPath: /etc/podinfo/secrets
readOnly: true
volumes:
- name: configs
configMap:
name: podinfo-config-vol
- name: secrets
secret:
secretName: podinfo-secret-vol

View File

@@ -1,16 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: podinfo-secret-env
namespace: test
data:
password: cGFzc3dvcmQ=
user: YWRtaW4=
---
apiVersion: v1
kind: Secret
metadata:
name: podinfo-secret-vol
namespace: test
data:
key: cGFzc3dvcmQ=

View File

@@ -102,17 +102,23 @@ spec:
canaryAnalysis:
properties:
interval:
description: Canary schedule interval
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic percentage routed to canary
type: number
stepWeight:
description: Canary incremental traffic percentage step
type: number
metrics:
description: Prometheus query list for this canary
type: array
properties:
items:
@@ -120,15 +126,20 @@ spec:
required: ['name', 'threshold']
properties:
name:
description: Name of the Prometheus metric
type: string
interval:
description: Interval of the promql query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max scalar value accepted for this metric
type: number
query:
description: Prometheus query
type: string
webhooks:
description: Webhook list for this canary
type: array
properties:
items:
@@ -136,8 +147,10 @@ spec:
required: ['name', 'url', 'timeout']
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
@@ -145,28 +158,68 @@ spec:
- rollout
- post-rollout
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
status:
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Finalising
- Succeeded
- Failed
canaryWeight:
description: Traffic weight percentage routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
properties:
items:
type: object
required: ['type', 'status', 'reason']
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string

View File

@@ -22,7 +22,7 @@ spec:
serviceAccountName: flagger
containers:
- name: flagger
image: weaveworks/flagger:0.16.0
image: weaveworks/flagger:0.18.2
imagePullPolicy: IfNotPresent
ports:
- name: http

View File

@@ -25,7 +25,7 @@ spec:
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898

View File

@@ -17,7 +17,7 @@ spec:
spec:
containers:
- name: loadtester
image: weaveworks/flagger-loadtester:0.4.0
image: weaveworks/flagger-loadtester:0.6.1
imagePullPolicy: IfNotPresent
ports:
- name: http

View File

@@ -23,7 +23,7 @@ spec:
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898

View File

@@ -1,45 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.istio.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo
subset: primary
weight: 50
- destination:
host: podinfo
subset: canary
weight: 50
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo-destination
namespace: test
spec:
host: podinfo
trafficPolicy:
loadBalancer:
consistentHash:
httpCookie:
name: istiouser
ttl: 30s
subsets:
- name: primary
labels:
app: podinfo
role: primary
- name: canary
labels:
app: podinfo
role: canary

View File

@@ -1,43 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- app.istio.weavedx.com
- podinfo
http:
- match:
- headers:
user-agent:
regex: ^(?!.*Chrome)(?=.*\bSafari\b).*$
uri:
prefix: "/version/"
rewrite:
uri: /api/info
route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 0
- destination:
host: podinfo
port:
number: 9898
weight: 100
- match:
- uri:
prefix: "/version/"
rewrite:
uri: /api/info
route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100

View File

@@ -1,25 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.iowa.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100
mirror:
host: podinfo
port:
number: 9898

View File

@@ -1,26 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.iowa.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100
- destination:
host: podinfo
port:
number: 9898
weight: 0

View File

@@ -1,13 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo-canary
namespace: test
spec:
host: podinfo-canary
trafficPolicy:
loadBalancer:
consistentHash:
httpCookie:
name: user
ttl: 0s

View File

@@ -1,23 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 2
maxReplicas: 2
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99
- type: Resource
resource:
name: memory
targetAverageValue: 200Mi

View File

@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-canary
namespace: test
spec:
type: ClusterIP
selector:
app: podinfo
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http

View File

@@ -1,71 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-primary
namespace: test
labels:
app: podinfo-primary
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo-primary
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo-primary
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
failureThreshold: 3
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 10m
memory: 64Mi

View File

@@ -1,13 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo-primary
namespace: test
spec:
host: podinfo-primary
trafficPolicy:
loadBalancer:
consistentHash:
httpCookie:
name: user
ttl: 0s

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo-primary
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo-primary
minReplicas: 2
maxReplicas: 2
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-primary
namespace: test
labels:
app: podinfo-primary
spec:
type: ClusterIP
selector:
app: podinfo-primary
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http

View File

@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
namespace: test
spec:
type: ClusterIP
selector:
app: podinfo-primary
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http

View File

@@ -1,23 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- mesh
- public-gateway.istio-system.svc.cluster.local
hosts:
- podinfo
- app.istio.weavedx.com
http:
- route:
- destination:
host: podinfo-primary
weight: 50
- destination:
host: podinfo-canary
weight: 50
timeout: 5s

View File

@@ -1,10 +1,10 @@
apiVersion: v1
name: flagger
version: 0.16.0
appVersion: 0.16.0
version: 0.18.2
appVersion: 0.18.2
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, App Mesh or NGINX routing for traffic shifting and Prometheus metrics for canary analysis.
description: Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, Gloo or NGINX routing for traffic shifting and Prometheus metrics for canary analysis.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
sources:
@@ -17,4 +17,5 @@ keywords:
- canary
- istio
- appmesh
- linkerd
- gitops

View File

@@ -1,10 +1,10 @@
# Flagger
[Flagger](https://github.com/weaveworks/flagger) is a Kubernetes operator that automates the promotion of
canary deployments using Istio, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
canary deployments using Istio, Linkerd, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators
like HTTP requests success rate, requests average duration and pods health.
Based on the KPIs analysis a canary is promoted or aborted and the analysis result is published to Slack.
Based on the KPIs analysis a canary is promoted or aborted and the analysis result is published to Slack or MS Teams.
## Prerequisites
@@ -16,16 +16,35 @@ Based on the KPIs analysis a canary is promoted or aborted and the analysis resu
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
$ helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger`:
Install Flagger's custom resource definitions:
```console
$ helm install --name flagger --namespace istio-system flagger/flagger
$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```
To install the chart with the release name `flagger` for Istio:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://prometheus:9090
```
To install the chart with the release name `flagger` for Linkerd:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set crd.create=false \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090
```
The command deploys Flagger on the Kubernetes cluster in the istio-system namespace.
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
@@ -52,7 +71,11 @@ Parameter | Description | Default
`slack.url` | Slack incoming webhook | None
`slack.channel` | Slack channel | None
`slack.user` | Slack username | `flagger`
`msteams.url` | Microsoft Teams incoming webhook | None
`leaderElection.enabled` | leader election must be enabled when running more than one replica | `false`
`leaderElection.replicaCount` | number of replicas | `1`
`rbac.create` | if `true`, create and use RBAC resources | `true`
`rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false`
`crd.create` | if `true`, create Flagger's CRDs | `true`
`resources.requests/cpu` | pod CPU request | `10m`
`resources.requests/memory` | pod memory request | `32Mi`
@@ -67,6 +90,7 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace istio-system \
--set crd.create=false \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general
```

View File

@@ -103,17 +103,23 @@ spec:
canaryAnalysis:
properties:
interval:
description: Canary schedule interval
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic percentage routed to canary
type: number
stepWeight:
description: Canary incremental traffic percentage step
type: number
metrics:
description: Prometheus query list for this canary
type: array
properties:
items:
@@ -121,15 +127,20 @@ spec:
required: ['name', 'threshold']
properties:
name:
description: Name of the Prometheus metric
type: string
interval:
description: Interval of the promql query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max scalar value accepted for this metric
type: number
query:
description: Prometheus query
type: string
webhooks:
description: Webhook list for this canary
type: array
properties:
items:
@@ -137,8 +148,10 @@ spec:
required: ['name', 'url', 'timeout']
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
@@ -146,29 +159,69 @@ spec:
- rollout
- post-rollout
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
status:
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Finalising
- Succeeded
- Failed
canaryWeight:
description: Traffic weight percentage routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
properties:
items:
type: object
required: ['type', 'status', 'reason']
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string
{{- end }}

View File

@@ -8,7 +8,7 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
replicas: 1
replicas: {{ .Values.leaderElection.replicaCount }}
strategy:
type: Recreate
selector:
@@ -22,6 +22,20 @@ spec:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
{{- if .Values.image.pullSecret }}
imagePullSecrets:
- name: {{ .Values.image.pullSecret }}
{{- end }}
containers:
- name: flagger
securityContext:
@@ -51,6 +65,13 @@ spec:
- -slack-user={{ .Values.slack.user }}
- -slack-channel={{ .Values.slack.channel }}
{{- end }}
{{- if .Values.msteams.url }}
- -msteams-url={{ .Values.msteams.url }}
{{- end }}
{{- if .Values.leaderElection.enabled }}
- -enable-leader-election=true
- -leader-election-namespace={{ .Release.Namespace }}
{{- end }}
livenessProbe:
exec:
command:
@@ -75,10 +96,6 @@ spec:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}

View File

@@ -238,10 +238,10 @@ spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}-prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.7.1"
image: "docker.io/prom/prometheus:v2.10.0"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=6h'
- '--storage.tsdb.retention=2h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090

View File

@@ -0,0 +1,66 @@
{{- if .Values.rbac.pspEnabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: false
hostIPC: false
hostNetwork: false
hostPID: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
allowedCapabilities:
- '*'
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "flagger.fullname" . }}-psp
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "flagger.fullname" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "flagger.fullname" . }}-psp
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "flagger.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: {{ template "flagger.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -2,8 +2,9 @@
image:
repository: weaveworks/flagger
tag: 0.16.0
tag: 0.18.2
pullPolicy: IfNotPresent
pullSecret:
metricsServer: "http://prometheus:9090"
@@ -19,6 +20,14 @@ slack:
# incoming webhook https://api.slack.com/incoming-webhooks
url:
msteams:
# MS Teams incoming webhook URL
url:
leaderElection:
enabled: false
replicaCount: 1
serviceAccount:
# serviceAccount.create: Whether to create a service account or not
create: true
@@ -28,6 +37,8 @@ serviceAccount:
rbac:
# rbac.create: `true` if rbac resources should be created
create: true
# rbac.pspEnabled: `true` if PodSecurityPolicy resources should be created
pspEnabled: false
crd:
# crd.create: `true` if custom resource definitions should be created
@@ -48,8 +59,6 @@ nodeSelector: {}
tolerations: []
affinity: {}
prometheus:
# to be used with AppMesh or nginx ingress
install: false

View File

@@ -1,7 +1,7 @@
apiVersion: v1
name: grafana
version: 1.2.0
appVersion: 5.4.3
version: 1.3.0
appVersion: 6.2.5
description: Grafana dashboards for monitoring Flagger canary deployments
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
home: https://flagger.app

View File

@@ -6,7 +6,7 @@ replicaCount: 1
image:
repository: grafana/grafana
tag: 5.4.3
tag: 6.2.5
pullPolicy: IfNotPresent
service:

View File

@@ -1,10 +1,10 @@
apiVersion: v1
name: loadtester
version: 0.4.1
appVersion: 0.4.0
version: 0.6.0
appVersion: 0.6.1
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger's load testing services based on rakyll/hey that generates traffic during canary analysis when configured as a webhook.
description: Flagger's load testing services based on rakyll/hey and bojand/ghz that generates traffic during canary analysis when configured as a webhook.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
sources:

View File

@@ -2,7 +2,7 @@ replicaCount: 1
image:
repository: weaveworks/flagger-loadtester
tag: 0.4.0
tag: 0.6.1
pullPolicy: IfNotPresent
logLevel: info

View File

@@ -1,6 +1,6 @@
apiVersion: v1
version: 2.1.0
appVersion: 1.4.0
version: 2.3.0
appVersion: 1.7.0
name: podinfo
engine: gotpl
description: Flagger canary deployment demo chart

View File

@@ -26,6 +26,9 @@ spec:
hosts:
- {{ .Values.canary.istioIngress.host }}
{{- end }}
trafficPolicy:
tls:
mode: {{ .Values.canary.istioTLS }}
canaryAnalysis:
interval: {{ .Values.canary.analysis.interval }}
threshold: {{ .Values.canary.analysis.threshold }}

View File

@@ -1,7 +1,7 @@
# Default values for podinfo.
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.4.0
tag: 1.7.0
pullPolicy: IfNotPresent
service:
@@ -17,6 +17,8 @@ hpa:
canary:
enabled: true
# Istio traffic policy tls can be DISABLE or ISTIO_MUTUAL
istioTLS: DISABLE
istioIngress:
enabled: false
# Istio ingress gateway name

View File

@@ -1,6 +1,7 @@
package main
import (
"context"
"flag"
"fmt"
"log"
@@ -8,6 +9,7 @@ import (
"strings"
"time"
"github.com/Masterminds/semver"
clientset "github.com/weaveworks/flagger/pkg/client/clientset/versioned"
informers "github.com/weaveworks/flagger/pkg/client/informers/externalversions"
"github.com/weaveworks/flagger/pkg/controller"
@@ -19,30 +21,37 @@ import (
"github.com/weaveworks/flagger/pkg/signals"
"github.com/weaveworks/flagger/pkg/version"
"go.uber.org/zap"
"k8s.io/apimachinery/pkg/util/uuid"
"k8s.io/client-go/kubernetes"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/leaderelection"
"k8s.io/client-go/tools/leaderelection/resourcelock"
"k8s.io/client-go/transport"
_ "k8s.io/code-generator/cmd/client-gen/generators"
)
var (
masterURL string
kubeconfig string
metricsServer string
controlLoopInterval time.Duration
logLevel string
port string
slackURL string
slackUser string
slackChannel string
threadiness int
zapReplaceGlobals bool
zapEncoding string
namespace string
meshProvider string
selectorLabels string
ver bool
masterURL string
kubeconfig string
metricsServer string
controlLoopInterval time.Duration
logLevel string
port string
msteamsURL string
slackURL string
slackUser string
slackChannel string
threadiness int
zapReplaceGlobals bool
zapEncoding string
namespace string
meshProvider string
selectorLabels string
enableLeaderElection bool
leaderElectionNamespace string
ver bool
)
func init() {
@@ -55,12 +64,15 @@ func init() {
flag.StringVar(&slackURL, "slack-url", "", "Slack hook URL.")
flag.StringVar(&slackUser, "slack-user", "flagger", "Slack user name.")
flag.StringVar(&slackChannel, "slack-channel", "", "Slack channel.")
flag.StringVar(&msteamsURL, "msteams-url", "", "MS Teams incoming webhook URL.")
flag.IntVar(&threadiness, "threadiness", 2, "Worker concurrency.")
flag.BoolVar(&zapReplaceGlobals, "zap-replace-globals", false, "Whether to change the logging level of the global zap logger.")
flag.StringVar(&zapEncoding, "zap-encoding", "json", "Zap logger encoding.")
flag.StringVar(&namespace, "namespace", "", "Namespace that flagger would watch canary object.")
flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, appmesh, supergloo, nginx or smi.")
flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, linkerd, appmesh, supergloo, nginx or smi.")
flag.StringVar(&selectorLabels, "selector-labels", "app,name,app.kubernetes.io/name", "List of pod labels that Flagger uses to create pod selectors.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false, "Enable leader election.")
flag.StringVar(&leaderElectionNamespace, "leader-election-namespace", "kube-system", "Namespace used to create the leader election config map.")
flag.BoolVar(&ver, "version", false, "Print version")
}
@@ -115,6 +127,26 @@ func main() {
logger.Fatalf("Error calling Kubernetes API: %v", err)
}
k8sVersionConstraint := "^1.11.0"
// We append -alpha.1 to the end of our version constraint so that prebuilds of later versions
// are considered valid for our purposes, as well as some managed solutions like EKS where they provide
a version like `v1.12.6-eks-d69f1b`. It doesn't matter what the prerelease value is here, just that it
// exists in our constraint.
semverConstraint, err := semver.NewConstraint(k8sVersionConstraint + "-alpha.1")
if err != nil {
logger.Fatalf("Error parsing kubernetes version constraint: %v", err)
}
k8sSemver, err := semver.NewVersion(ver.GitVersion)
if err != nil {
logger.Fatalf("Error parsing kubernetes version as a semantic version: %v", err)
}
if !semverConstraint.Check(k8sSemver) {
logger.Fatalf("Unsupported version of kubernetes detected. Expected %s, got %v", k8sVersionConstraint, ver)
}
labels := strings.Split(selectorLabels, ",")
if len(labels) < 1 {
logger.Fatalf("At least one selector label is required")
@@ -137,15 +169,8 @@ func main() {
logger.Errorf("Metrics server %s unreachable %v", metricsServer, err)
}
var slack *notifier.Slack
if slackURL != "" {
slack, err = notifier.NewSlack(slackURL, slackUser, slackChannel)
if err != nil {
logger.Errorf("Notifier %v", err)
} else {
logger.Infof("Slack notifications enabled for channel %s", slack.Channel)
}
}
// setup Slack or MS Teams notifications
notifierClient := initNotifier(logger)
// start HTTP server
go server.ListenAndServe(port, 3*time.Second, logger, stopCh)
@@ -158,9 +183,8 @@ func main() {
flaggerClient,
canaryInformer,
controlLoopInterval,
metricsServer,
logger,
slack,
notifierClient,
routerFactory,
observerFactory,
meshProvider,
@@ -179,12 +203,102 @@ func main() {
}
}
// start controller
go func(ctrl *controller.Controller) {
if err := ctrl.Run(threadiness, stopCh); err != nil {
// leader election context
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// prevents new requests when leadership is lost
cfg.Wrap(transport.ContextCanceller(ctx, fmt.Errorf("the leader is shutting down")))
// cancel leader election context on shutdown signals
go func() {
<-stopCh
cancel()
}()
// wrap controller run
runController := func() {
if err := c.Run(threadiness, stopCh); err != nil {
logger.Fatalf("Error running controller: %v", err)
}
}(c)
}
<-stopCh
// run controller when this instance wins the leader election
if enableLeaderElection {
ns := leaderElectionNamespace
if namespace != "" {
ns = namespace
}
startLeaderElection(ctx, runController, ns, kubeClient, logger)
} else {
runController()
}
}
func startLeaderElection(ctx context.Context, run func(), ns string, kubeClient kubernetes.Interface, logger *zap.SugaredLogger) {
configMapName := "flagger-leader-election"
id, err := os.Hostname()
if err != nil {
logger.Fatalf("Error running controller: %v", err)
}
id = id + "_" + string(uuid.NewUUID())
lock, err := resourcelock.New(
resourcelock.ConfigMapsResourceLock,
ns,
configMapName,
kubeClient.CoreV1(),
kubeClient.CoordinationV1(),
resourcelock.ResourceLockConfig{
Identity: id,
},
)
if err != nil {
logger.Fatalf("Error running controller: %v", err)
}
logger.Infof("Starting leader election id: %s configmap: %s namespace: %s", id, configMapName, ns)
leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
Lock: lock,
ReleaseOnCancel: true,
LeaseDuration: 60 * time.Second,
RenewDeadline: 15 * time.Second,
RetryPeriod: 5 * time.Second,
Callbacks: leaderelection.LeaderCallbacks{
OnStartedLeading: func(ctx context.Context) {
logger.Info("Acting as elected leader")
run()
},
OnStoppedLeading: func() {
logger.Infof("Leadership lost")
os.Exit(1)
},
OnNewLeader: func(identity string) {
if identity != id {
logger.Infof("Another instance has been elected as leader: %v", identity)
}
},
},
})
}
func initNotifier(logger *zap.SugaredLogger) (client notifier.Interface) {
provider := "slack"
notifierURL := slackURL
if msteamsURL != "" {
provider = "msteams"
notifierURL = msteamsURL
}
notifierFactory := notifier.NewFactory(notifierURL, slackUser, slackChannel)
if notifierURL != "" {
var err error
client, err = notifierFactory.Notifier(provider)
if err != nil {
logger.Errorf("Notifier %v", err)
} else {
logger.Infof("Notifications enabled for %s", notifierURL[0:30])
}
}
return
}

View File

@@ -10,7 +10,7 @@ import (
"time"
)
var VERSION = "0.4.0"
var VERSION = "0.6.1"
var (
logLevel string
port string
@@ -47,5 +47,7 @@ func main() {
go taskRunner.Start(100*time.Millisecond, stopCh)
logger.Infof("Starting load tester v%s API on port %s", VERSION, port)
loadtester.ListenAndServe(port, time.Minute, logger, taskRunner, stopCh)
gateStorage := loadtester.NewGateStorage("in-memory")
loadtester.ListenAndServe(port, time.Minute, logger, taskRunner, gateStorage, stopCh)
}

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


View File

@@ -5,12 +5,12 @@ description: Flagger is a progressive delivery Kubernetes operator
# Introduction
[Flagger](https://github.com/weaveworks/flagger) is a **Kubernetes** operator that automates the promotion of canary
deployments using **Istio**, **App Mesh**, **NGINX** or **Gloo** routing for traffic shifting and **Prometheus** metrics for canary analysis.
deployments using **Istio**, **Linkerd**, **App Mesh**, **NGINX** or **Gloo** routing for traffic shifting and **Prometheus** metrics for canary analysis.
The canary analysis can be extended with webhooks for running system integration/acceptance tests, load tests, or any other custom validation.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance
indicators like HTTP requests success rate, requests average duration and pods health.
Based on analysis of the **KPIs** a canary is promoted or aborted, and the analysis result is published to **Slack**.
Based on analysis of the **KPIs** a canary is promoted or aborted, and the analysis result is published to **Slack** or **MS Teams**.
![Flagger overview diagram](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-overview.png)

View File

@@ -15,6 +15,7 @@
* [Istio Canary Deployments](usage/progressive-delivery.md)
* [Istio A/B Testing](usage/ab-testing.md)
* [Linkerd Canary Deployments](usage/linkerd-progressive-delivery.md)
* [App Mesh Canary Deployments](usage/appmesh-progressive-delivery.md)
* [NGINX Canary Deployments](usage/nginx-progressive-delivery.md)
* [Gloo Canary Deployments](usage/gloo-progressive-delivery.md)

View File

@@ -6,11 +6,13 @@
Flagger can run automated application analysis, promotion and rollback for the following deployment strategies:
* Canary (progressive traffic shifting)
* Istio, Linkerd, App Mesh, NGINX, Gloo
* A/B Testing (HTTP headers and cookies traffic routing)
* Istio, NGINX
* Blue/Green (traffic switch)
* Kubernetes CNI
For canary deployments you'll need a Layer 7 traffic management solution like a service mesh (Istio, App Mesh) or an ingress controller (NGINX, Gloo).
For A/B testing you'll need a Layer 7 traffic management solution that's capable of routing requests based on HTTP headers and cookies (Istio, NGINX).
For Canary deployments and A/B testing you'll need a Layer 7 traffic management solution like a service mesh or an ingress controller.
For Blue/Green deployments no service mesh or ingress controller is required.
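As a minimal sketch, a Blue/Green style canary uses the `kubernetes` provider and an iteration-based analysis (the names, interval, thresholds and metric below are illustrative, not prescriptive):
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # Blue/Green style deployments use the kubernetes provider
  provider: kubernetes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    # schedule interval
    interval: 30s
    # max number of failed checks before rollback
    threshold: 5
    # number of checks to run before switching traffic to the new version
    iterations: 10
    metrics:
      - name: request-success-rate
        threshold: 99
        interval: 1m
```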
**When should I use A/B testing instead of progressive traffic shifting?**
@@ -230,8 +232,7 @@ spec:
mode: ISTIO_MUTUAL
```
Both ports `8080` and `9090` will be added to the ClusterIP services but the virtual service
will point to the port specified in `spec.service.port`.
Both ports `8080` and `9090` will be added to the ClusterIP services.
### Label selectors
@@ -375,10 +376,10 @@ In order for Flagger to be able to call the load tester service from outside the
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: flagger-namespace
namespace: flagger
name: flagger-loadtester
namespace: test
spec:
host: "*.flagger.svc.cluster.local"
host: "flagger-loadtester.test.svc.cluster.local"
trafficPolicy:
tls:
mode: DISABLE
@@ -386,8 +387,8 @@ spec:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: loadtester-mtls-disabled
namespace: flagger
name: flagger-loadtester
namespace: test
spec:
targets:
- name: flagger-loadtester

View File

@@ -2,7 +2,7 @@
[Flagger](https://github.com/weaveworks/flagger) takes a Kubernetes deployment and optionally
a horizontal pod autoscaler \(HPA\) and creates a series of objects
\(Kubernetes deployments, ClusterIP services and Istio or App Mesh virtual services\) to drive the canary analysis and promotion.
\(Kubernetes deployments, ClusterIP services, virtual service, traffic split or ingress\) to drive the canary analysis and promotion.
![Flagger Canary Process](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-hpa.png)
@@ -18,7 +18,7 @@ metadata:
namespace: test
spec:
# service mesh provider (optional)
# can be: kubernetes, istio, appmesh, smi, nginx, gloo, supergloo
# can be: kubernetes, istio, linkerd, appmesh, nginx, gloo, supergloo
# use the kubernetes provider for Blue/Green style deployments
provider: istio
# deployment reference
@@ -105,6 +105,54 @@ convention you can specify your label with the `-selector-labels` flag.
The target deployment should expose a TCP port that will be used by Flagger to create the ClusterIP Service and
the Istio Virtual Service. The container port from the target deployment should match the `service.port` value.
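A quick sketch of that matching (the deployment name, image and port number are illustrative):
```yaml
# target deployment: the container exposes the TCP port Flagger will route to
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfod
          image: quay.io/stefanprodan/podinfo:1.7.0
          ports:
            - name: http
              containerPort: 9898
```
In the Canary spec, `service.port` would then be set to `9898`, as in the examples further down this page.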
### Canary status
Get the current status of canary deployments cluster wide:
```bash
kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-06-30T14:05:07Z
prod frontend Succeeded 0 2019-06-30T16:15:07Z
prod backend Failed 0 2019-06-30T17:05:07Z
```
The status condition reflects the last known state of the canary analysis:
```bash
kubectl -n test get canary/podinfo -oyaml | awk '/status/,0'
```
A successful rollout status:
```yaml
status:
canaryWeight: 0
failedChecks: 0
iterations: 0
lastAppliedSpec: "14788816656920327485"
lastPromotedSpec: "14788816656920327485"
conditions:
- lastTransitionTime: "2019-07-10T08:23:18Z"
lastUpdateTime: "2019-07-10T08:23:18Z"
message: Canary analysis completed successfully, promotion finished.
reason: Succeeded
status: "True"
type: Promoted
```
The `Promoted` status condition can have one of the following reasons:
Initialized, Waiting, Progressing, Finalising, Succeeded or Failed.
A failed canary will have the promoted status set to `false`, the reason set to `failed`,
and a last applied spec that differs from the last promoted one.
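A failed rollout status would look roughly like this, mirroring the successful example above (a sketch; the spec hashes and message text are illustrative):
```yaml
# illustrative values; field names match the successful example above
status:
  canaryWeight: 0
  failedChecks: 10
  iterations: 0
  lastAppliedSpec: "7c84d9b1e2a43f01"
  lastPromotedSpec: "14788816656920327485"
  conditions:
  - lastTransitionTime: "2019-07-10T08:23:18Z"
    lastUpdateTime: "2019-07-10T08:23:18Z"
    message: Canary analysis failed, rollback finished.
    reason: Failed
    status: "False"
    type: Promoted
```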
Wait for a successful rollout:
```bash
kubectl wait canary/podinfo --for=condition=promoted
```
### Istio routing
Flagger creates an Istio Virtual Service and Destination Rules based on the Canary service spec.
@@ -608,6 +656,8 @@ The canary analysis can be extended with webhooks. Flagger will call each webhoo
determine from the response status code (HTTP 2xx) if the canary is failing or not.
There are three types of hooks:
* Confirm-rollout hooks are executed before scaling up the canary deployment and can be used for manual approval.
The rollout is paused until the hook returns a successful HTTP status code.
* Pre-rollout hooks are executed before routing traffic to the canary.
The canary advancement is paused if a pre-rollout hook fails and if the number of failures reaches the
threshold the canary will be rolled back.
@@ -621,6 +671,9 @@ Spec:
```yaml
canaryAnalysis:
webhooks:
- name: "start gate"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/approve
- name: "smoke test"
type: pre-rollout
url: http://flagger-helmtester.kube-system/
@@ -677,7 +730,7 @@ that generates traffic during analysis when configured as a webhook.
![Flagger Load Testing Webhook](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-load-testing.png)
First you need to deploy the load test runner in a namespace with Istio sidecar injection enabled:
First you need to deploy the load test runner in a namespace with sidecar injection enabled:
```bash
export REPO=https://raw.githubusercontent.com/weaveworks/flagger/master
@@ -720,7 +773,7 @@ When the canary analysis starts, Flagger will call the webhooks and the load tes
in the background, if they are not already running. This will ensure that during the
analysis, the `podinfo-canary.test` service will receive a steady stream of GET and POST requests.
If your workload is exposed outside the mesh with the Istio Gateway and TLS you can point `hey` to the
If your workload is exposed outside the mesh you can point `hey` to the
public URL and use HTTP2.
```yaml
@@ -733,6 +786,18 @@ webhooks:
cmd: "hey -z 1m -q 10 -c 2 -h2 https://podinfo.example.com/"
```
For gRPC services you can use [bojand/ghz](https://github.com/bojand/ghz), which is a similar tool to Hey but for gRPC:
```yaml
webhooks:
- name: grpc-load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "ghz -z 1m -q 10 -c 2 --insecure podinfo.test:9898"
```
The load tester can run arbitrary commands as long as the binary is present in the container image.
For example, if you want to replace `hey` with another CLI, you can create your own Docker image:
@@ -820,3 +885,66 @@ As an alternative to Helm you can use the [Bash Automated Testing System](https:
```
Note that you should create a ConfigMap with your Bats tests and mount it inside the tester container.
### Manual Gating
For manual approval of a canary deployment you can use the `confirm-rollout` webhook.
The confirmation hooks are executed before the pre-rollout hooks.
Flagger will halt the canary traffic shifting and analysis until the confirm webhook returns HTTP status 200.
Manual gating with Flagger's tester:
```yaml
canaryAnalysis:
webhooks:
- name: "gate"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/halt
```
The `/gate/halt` endpoint returns HTTP 403, thus blocking the rollout.
If you have notifications enabled, Flagger will post a message to Slack or MS Teams if a canary rollout is waiting for approval.
Change the URL to `/gate/approve` to start the canary analysis:
```yaml
canaryAnalysis:
webhooks:
- name: "gate"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/approve
```
Manual gating can be driven with Flagger's tester API. Set the confirmation URL to `/gate/check`:
```yaml
canaryAnalysis:
webhooks:
- name: "ask for confirmation"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/check
```
By default the gate is closed; you can start or resume the canary rollout with:
```bash
kubectl -n test exec -it flagger-loadtester-xxxx-xxxx sh
curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/gate/open
```
You can pause the rollout at any time with:
```bash
curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/gate/close
```
If a canary analysis is paused the status will change to waiting:
```bash
kubectl get canary/podinfo
NAME STATUS WEIGHT
podinfo Waiting 0
```

View File

@@ -133,11 +133,18 @@ Add Flagger Helm repository:
helm repo add flagger https://flagger.app
```
Install Flagger's Canary CRD:
```bash
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```
Deploy Flagger and Prometheus in the _**appmesh-system**_ namespace:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=appmesh-system \
--set crd.create=false \
--set meshProvider=appmesh \
--set prometheus.install=true
```
@@ -150,6 +157,7 @@ You can enable **Slack** notifications with:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=appmesh-system \
--set crd.create=false \
--set meshProvider=appmesh \
--set metricsServer=http://prometheus.appmesh:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \

View File

@@ -354,11 +354,18 @@ Add Flagger Helm repository:
helm repo add flagger https://flagger.app
```
Install Flagger's Canary CRD:
```bash
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```
Deploy Flagger in the `istio-system` namespace with Slack notifications enabled:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general \

View File

@@ -1,6 +1,6 @@
# Flagger install on Kubernetes
This guide walks you through setting up Flagger on a Kubernetes cluster.
This guide walks you through setting up Flagger on a Kubernetes cluster with Helm or Kustomize.
### Prerequisites
@@ -9,18 +9,7 @@ Flagger requires a Kubernetes cluster **v1.11** or newer with the following admi
* MutatingAdmissionWebhook
* ValidatingAdmissionWebhook
Flagger depends on [Istio](https://istio.io/docs/setup/kubernetes/quick-start/) **v1.0.3** or newer
with traffic management, telemetry and Prometheus enabled.
A minimal Istio installation should contain the following services:
* istio-pilot
* istio-ingressgateway
* istio-sidecar-injector
* istio-telemetry
* prometheus
### Install Flagger
### Install Flagger with Helm
Add Flagger Helm repository:
@@ -28,26 +17,54 @@ Add Flagger Helm repository:
helm repo add flagger https://flagger.app
```
Deploy Flagger in the _**istio-system**_ namespace:
Install Flagger's Canary CRD:
```bash
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```
Deploy Flagger for Istio:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://prometheus:9090
```
You can install Flagger in any namespace as long as it can talk to the Istio Prometheus service on port 9090.
Deploy Flagger for Linkerd:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set crd.create=false \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090
```
You can install Flagger in any namespace as long as it can talk to the Prometheus service on port 9090.
Enable **Slack** notifications:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general \
--set slack.user=flagger
```
Enable **Microsoft Teams** notifications:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set msteams.url=https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK
```
If you don't have Tiller you can use the helm template command and apply the generated yaml with kubectl:
```bash
@@ -81,7 +98,7 @@ If you want to remove all the objects created by Flagger you have delete the Can
kubectl delete crd canaries.flagger.app
```
### Install Grafana
### Install Grafana with Helm
Flagger comes with a Grafana dashboard made for monitoring the canary analysis.
@@ -115,31 +132,109 @@ You can access Grafana using port forwarding:
kubectl -n istio-system port-forward svc/flagger-grafana 3000:80
```
### Install Load Tester
### Install Flagger with Kustomize
Flagger comes with an optional load testing service that generates traffic
during canary analysis when configured as a webhook.
As an alternative to Helm, Flagger can be installed with Kustomize.
Deploy the load test runner with Helm:
**Service mesh specific installers**
Install Flagger for Istio:
```bash
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test \
--set cmd.timeout=1h
kubectl apply -k github.com/weaveworks/flagger//kustomize/istio
```
Deploy with kubectl:
This deploys Flagger in the `istio-system` namespace and sets the metrics server URL to Istio's Prometheus instance.
Note that you'll need kubectl 1.14 to run the above command, or you can download the
[kustomize binary](https://github.com/kubernetes-sigs/kustomize/releases) and run:
```bash
helm fetch --untar --untardir . flagger/loadtester &&
helm template loadtester \
--name flagger-loadtester \
--namespace=test
> $HOME/flagger-loadtester.yaml
# apply
kubectl apply -f $HOME/flagger-loadtester.yaml
kustomize build github.com/weaveworks/flagger//kustomize/istio | kubectl apply -f -
```
> **Note** that the load tester should be deployed in a namespace with Istio sidecar injection enabled.
Install Flagger for Linkerd:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/linkerd
```
This deploys Flagger in the `linkerd` namespace and sets the metrics server URL to Linkerd's Prometheus instance.
If you want to install a specific Flagger release, add the version number to the URL:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/linkerd?ref=0.18.0
```
**Generic installer**
Install Flagger and Prometheus:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/kubernetes
```
This deploys Flagger and Prometheus in the `flagger-system` namespace,
sets the metrics server URL to `http://flagger-prometheus.flagger-system:9090` and the mesh provider to `kubernetes`.
The Prometheus instance has a two-hour data retention and is configured to scrape all pods in your cluster that
have the `prometheus.io/scrape: "true"` annotation.
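For example, a workload you want this Prometheus instance to scrape would carry the annotation on its pod template, roughly like this (a minimal sketch; the deployment name and the `prometheus.io/port` annotation are assumptions):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        # required for the bundled Prometheus to pick up the pod
        prometheus.io/scrape: "true"
        # assumed: the port where the app exposes its metrics endpoint
        prometheus.io/port: "9898"
    spec:
      containers:
        - name: podinfod
          image: quay.io/stefanprodan/podinfo:1.7.0
          ports:
            - containerPort: 9898
```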
To target a different provider you can specify it in the canary custom resource:
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: app
namespace: test
spec:
# can be: kubernetes, istio, linkerd, appmesh, nginx, gloo
# use the kubernetes provider for Blue/Green style deployments
provider: nginx
```
**Customized installer**
Create a kustomization file using flagger as base:
```bash
cat > kustomization.yaml <<EOF
namespace: istio-system
bases:
- github.com/weaveworks/flagger/kustomize/base/flagger
patchesStrategicMerge:
- patch.yaml
EOF
```
Create a patch and enable Slack notifications by setting the slack channel and hook URL:
```bash
cat > patch.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
spec:
template:
spec:
containers:
- name: flagger
args:
- -mesh-provider=istio
- -metrics-server=http://prometheus.istio-system:9090
- -slack-user=flagger
- -slack-channel=alerts
- -slack-url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
EOF
```
Install Flagger with Slack:
```bash
kubectl apply -k .
```
If you want to use MS Teams instead of Slack, replace `-slack-url` with `-msteams-url` and set the webhook address to `https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK`.

View File

@@ -115,11 +115,18 @@ Add Flagger Helm repository:
helm repo add flagger https://flagger.app
```
Install Flagger's Canary CRD:
```bash
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```
Deploy Flagger in the _**istio-system**_ namespace and set the service mesh provider to SuperGloo:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set metricsServer=http://prometheus.istio-system:9090 \
--set meshProvider=supergloo:istio.supergloo-system
```

View File

@@ -138,7 +138,7 @@ helm upgrade -i frontend flagger/podinfo/ \
--reuse-values \
--set canary.loadtest.enabled=true \
--set canary.helmtest.enabled=true \
--set image.tag=1.4.1
--set image.tag=1.7.1
```
Flagger detects that the deployment revision changed and starts the canary analysis:
@@ -283,17 +283,17 @@ metadata:
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: semver:~1.4
flux.weave.works/tag.chart-image: semver:~1.7
spec:
releaseName: frontend
chart:
repository: https://stefanprodan.github.io/flagger/
name: podinfo
version: 2.0.0
version: 2.3.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.4.0
tag: 1.7.0
backend: http://backend-podinfo:9898/echo
canary:
enabled: true
@@ -311,7 +311,7 @@ In the `chart` section I've defined the release source by specifying the Helm re
In the `values` section I've overwritten the defaults set in values.yaml.
With the `flux.weave.works` annotations I instruct Flux to automate this release.
When an image tag in the sem ver range of `1.4.0 - 1.4.99` is pushed to Quay,
When an image tag in the sem ver range of `1.7.0 - 1.7.99` is pushed to Quay,
Flux will upgrade the Helm release and from there Flagger will pick up the change and start a canary deployment.
Install [Weave Flux](https://github.com/weaveworks/flux) and its Helm Operator by specifying your Git repo URL:
@@ -321,6 +321,7 @@ helm repo add weaveworks https://weaveworks.github.io/flux
helm install --name flux \
--set helmOperator.create=true \
--set helmOperator.createCRD=true \
--set git.url=git@github.com:<USERNAME>/<REPOSITORY> \
--namespace flux \
weaveworks/flux
@@ -343,9 +344,9 @@ launch the `frontend` and `backend` apps.
A CI/CD pipeline for the `frontend` release could look like this:
* cut a release from the master branch of the podinfo code repo with the git tag `1.4.1`
* CI builds the image and pushes the `podinfo:1.4.1` image to the container registry
* Flux scans the registry and updates the Helm release `image.tag` to `1.4.1`
* cut a release from the master branch of the podinfo code repo with the git tag `1.7.1`
* CI builds the image and pushes the `podinfo:1.7.1` image to the container registry
* Flux scans the registry and updates the Helm release `image.tag` to `1.7.1`
* Flux commits and pushes the change to the cluster repo
* Flux applies the updated Helm release on the cluster
* Flux Helm Operator picks up the change and calls Tiller to upgrade the release
@@ -355,7 +356,7 @@ A CI/CD pipeline for the `frontend` release could look like this:
* Based on the analysis result the canary deployment is promoted to production or rolled back
* Flagger sends a Slack notification with the canary result
If the canary fails, fix the bug, do another patch release eg `1.4.2` and the whole process will run again.
If the canary fails, fix the bug, do another patch release, e.g. `1.7.2`, and the whole process will run again.
A canary deployment can fail due to any of the following reasons:

View File

@@ -236,7 +236,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
podinfod=quay.io/stefanprodan/podinfo:1.7.1
```
Flagger detects that the deployment revision changed and starts a new rollout:

View File

@@ -131,7 +131,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/abtest \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
podinfod=quay.io/stefanprodan/podinfo:1.7.1
```
Flagger detects that the deployment revision changed and starts a new rollout:

View File

@@ -6,7 +6,6 @@ Flagger can be configured to send Slack notifications:
```bash
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general \
--set slack.user=flagger
@@ -22,6 +21,23 @@ maximum number of failed checks:
![Slack Notifications](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/screens/slack-canary-failed.png)
### Microsoft Teams
Flagger can be configured to send notifications to Microsoft Teams:
```bash
helm upgrade -i flagger flagger/flagger \
--set msteams.url=https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK
```
Flagger will post a message card to MS Teams when a new revision has been detected and when the canary analysis fails or succeeds:
![MS Teams Notifications](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/screens/flagger-ms-teams-notifications.png)
And you'll get a notification on rollback:
![MS Teams Notifications](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/screens/flagger-ms-teams-failed.png)
### Prometheus Alert Manager
Besides Slack, you can use Alertmanager to trigger alerts when a canary deployment fails:
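A sketch of such a rule, to be added under one of your Prometheus rule groups, assuming Flagger exposes a `flagger_canary_status` gauge on its metrics endpoint (the metric name, threshold and labels are assumptions, not taken from this changeset):
```yaml
  - alert: canary_rollback
    # assumed: flagger_canary_status reports a value greater than 1 for failed canaries
    expr: flagger_canary_status > 1
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Canary failed"
      description: "Workload {{ $labels.name }} namespace {{ $labels.namespace }}"
```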

View File

@@ -37,8 +37,10 @@ Deploy the load testing service to generate traffic during the canary analysis:
```bash
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test \
--set meshName=global.appmesh-system \
--set "backends[0]=podinfo.test"
--set meshName=global \
--set "backends[0]=podinfo.test" \
--set "backends[1]=podinfo-canary.test" \
--set "backends[2]=podinfo-primary.test"
```
Create a canary custom resource:
@@ -67,7 +69,7 @@ spec:
# container port
port: 9898
# App Mesh reference
meshName: global.appmesh-system
meshName: global
# App Mesh egress (optional)
backends:
- backend.test
@@ -176,7 +178,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
podinfod=quay.io/stefanprodan/podinfo:1.7.1
```
Flagger detects that the deployment revision changed and starts a new rollout:

View File

@@ -59,8 +59,8 @@ Create a deployment and a horizontal pod autoscaler:
```bash
export REPO=https://raw.githubusercontent.com/weaveworks/flagger/master
kubectl apply -f ${REPO}/artifacts/canary/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canary/hpa.yaml
kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml
```
Deploy the load testing service to generate traffic during the analysis:
@@ -172,7 +172,7 @@ Trigger a deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
podinfod=quay.io/stefanprodan/podinfo:1.7.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
@@ -184,6 +184,7 @@ Events:
New revision detected podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Pre-rollout check acceptance-test passed
Advance podinfo.test canary iteration 1/10
Advance podinfo.test canary iteration 2/10
Advance podinfo.test canary iteration 3/10
@@ -296,7 +297,7 @@ Trigger a deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.3
podinfod=quay.io/stefanprodan/podinfo:1.7.3
```
Generate 404s:

View File

@@ -197,7 +197,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
podinfod=quay.io/stefanprodan/podinfo:1.7.1
```
Flagger detects that the deployment revision changed and starts a new rollout:

View File

@@ -0,0 +1,471 @@
# Linkerd Canary Deployments
This guide shows you how to use Linkerd and Flagger to automate canary deployments.
![Flagger Linkerd Traffic Split](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-linkerd-traffic-split.png)
### Prerequisites
Flagger requires a Kubernetes cluster **v1.11** or newer and Linkerd **2.4** or newer.
Install Flagger in the linkerd namespace:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/linkerd
```
Note that you'll need kubectl 1.14 or newer to run the above command.
To enable Slack or MS Teams notifications,
see Flagger's [install docs](https://docs.flagger.app/install/flagger-install-on-kubernetes) for Kustomize or Helm options.
### Bootstrap
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services and SMI traffic split).
These objects expose the application inside the mesh and drive the canary analysis and promotion.
Create a test namespace and enable Linkerd proxy injection:
```bash
kubectl create ns test
kubectl annotate namespace test linkerd.io/inject=enabled
```
Install the load testing service to generate traffic during the canary analysis:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/tester
```
Create a deployment and a horizontal pod autoscaler:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/podinfo
```
Create a canary custom resource for the podinfo deployment:
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rollback (default 600s)
progressDeadlineSeconds: 60
service:
# container port
port: 9898
canaryAnalysis:
# schedule interval (default 60s)
interval: 30s
# max number of failed metric checks before rollback
threshold: 5
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# Linkerd Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# testing (optional)
webhooks:
- name: acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 30s
metadata:
type: bash
cmd: "curl -sd 'test' http://podinfo-canary:9898/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
metadata:
cmd: "hey -z 2m -q 10 -c 2 http://podinfo:9898/"
```
Save the above resource as podinfo-canary.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-canary.yaml
```
When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary.
The canary analysis will run for five minutes while validating the HTTP metrics and rollout hooks every half a minute.
After a couple of seconds Flagger will create the canary objects:
```bash
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
ingresses.extensions/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
trafficsplits.split.smi-spec.io/podinfo
```
After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test` will be routed
to the primary pods. During the canary analysis, the `podinfo-canary.test` address can be used to target the canary pods directly.
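The generated SMI TrafficSplit is what shifts traffic between the primary and the canary. A rough sketch of how it could look mid-analysis (the API version and the weight values are assumptions, not taken from this changeset):
```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  # the apex service that clients address
  service: podinfo
  backends:
    # weights are illustrative; Flagger adjusts them at each analysis interval
    - service: podinfo-primary
      weight: 95
    - service: podinfo-canary
      weight: 5
```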
### Automated canary promotion
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators
like HTTP requests success rate, requests average duration and pod health.
Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack.
![Flagger Canary Stages](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-steps.png)
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.7.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Failed Checks: 0
Phase: Succeeded
Events:
New revision detected! Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Pre-rollout check acceptance-test passed
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Waiting for podinfo.test rollout to finish: 1 of 2 updated replicas are available
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
```
**Note** that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis.
A canary deployment is triggered by changes in any of the following objects:
* Deployment PodSpec (container image, command, ports, env, resources, etc)
* ConfigMaps mounted as volumes or mapped to environment variables
* Secrets mounted as volumes or mapped to environment variables
You can monitor all canaries with:
```bash
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-06-30T14:05:07Z
prod frontend Succeeded 0 2019-06-30T16:15:07Z
prod backend Failed 0 2019-06-30T17:05:07Z
```
### Automated rollback
During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses and rolls back the faulted version.
Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.7.2
```
Exec into the load tester pod with:
```bash
kubectl -n test exec -it flagger-loadtester-xx-xx sh
```
Generate HTTP 500 errors:
```bash
watch -n 1 curl http://podinfo-canary.test:9898/status/500
```
Generate latency:
```bash
watch -n 1 curl http://podinfo-canary.test:9898/delay/1
```
When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary,
the canary is scaled to zero and the rollout is marked as failed.
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Failed Checks: 10
Phase: Failed
Events:
Starting canary analysis for podinfo.test
Pre-rollout check acceptance-test passed
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement request duration 1.20s > 0.5s
Halt podinfo.test advancement request duration 1.45s > 0.5s
Rolling back podinfo.test failed checks threshold reached 5
Canary failed! Scaling down podinfo.test
```
### Custom metrics
The canary analysis can be extended with Prometheus queries.
Let's define a check for not found errors. Edit the canary analysis and add the following metric:
```yaml
canaryAnalysis:
metrics:
- name: "404s percentage"
threshold: 3
query: |
100 - sum(
rate(
response_total{
namespace="test",
deployment="podinfo",
status_code!="404",
direction="inbound"
}[1m]
)
)
/
sum(
rate(
response_total{
namespace="test",
deployment="podinfo",
direction="inbound"
}[1m]
)
)
* 100
```
The above configuration validates the canary version by checking if the HTTP 404 req/sec percentage is below
three percent of the total traffic. If the 404s rate reaches the 3% threshold, then the analysis is aborted and the
canary is marked as failed.
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.7.3
```
Generate 404s:
```bash
watch -n 1 curl http://podinfo-canary:9898/status/404
```
Watch Flagger logs:
```
kubectl -n linkerd logs deployment/flagger -f | jq .msg
Starting canary deployment for podinfo.test
Pre-rollout check acceptance-test passed
Advance podinfo.test canary weight 5
Halt podinfo.test advancement 404s percentage 6.20 > 3
Halt podinfo.test advancement 404s percentage 6.45 > 3
Halt podinfo.test advancement 404s percentage 7.22 > 3
Halt podinfo.test advancement 404s percentage 6.50 > 3
Halt podinfo.test advancement 404s percentage 6.34 > 3
Rolling back podinfo.test failed checks threshold reached 5
Canary failed! Scaling down podinfo.test
```
If you have Slack configured, Flagger will send a notification with the reason why the canary failed.
### Linkerd Ingress
There are two ingress controllers that are compatible with both Flagger and Linkerd: NGINX and Gloo.
Install NGINX:
```bash
helm upgrade -i nginx-ingress stable/nginx-ingress \
--namespace ingress-nginx
```
Create an ingress definition for podinfo that rewrites the incoming header to the internal service name (required by Linkerd):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:9898;
proxy_hide_header l5d-remote-ip;
proxy_hide_header l5d-server-id;
spec:
rules:
- host: app.example.com
http:
paths:
- backend:
serviceName: podinfo
servicePort: 9898
```
When using an ingress controller, the Linkerd traffic split does not apply to incoming traffic since NGINX is running outside of
the mesh. In order to run a canary analysis for a frontend app, Flagger creates a shadow ingress and sets the NGINX-specific annotations.
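A rough sketch of what that generated canary ingress could look like, assuming the standard NGINX canary annotations (the object name and the weight value are assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  # assumed name for the shadow ingress generated by Flagger
  name: podinfo-canary
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # NGINX canary annotations; Flagger adjusts the weight during the analysis
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "5"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - backend:
              serviceName: podinfo-canary
              servicePort: 9898
```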
### A/B Testing
Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions.
In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users.
This is particularly useful for frontend applications that require session affinity.
![Flagger Linkerd Ingress](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-nginx-linkerd.png)
Edit the podinfo canary analysis: set the provider to `nginx`, add the ingress reference, remove the max/step weight, and add the match conditions and iterations:
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# ingress reference
provider: nginx
ingressRef:
apiVersion: extensions/v1beta1
kind: Ingress
name: podinfo
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
canaryAnalysis:
interval: 1m
threshold: 10
iterations: 10
match:
# curl -H 'X-Canary: always' http://app.example.com
- headers:
x-canary:
exact: "always"
# curl -b 'canary=always' http://app.example.com
- headers:
cookie:
exact: "canary"
# Linkerd Prometheus checks
metrics:
- name: request-success-rate
threshold: 99
interval: 1m
- name: request-duration
threshold: 500
interval: 30s
webhooks:
- name: acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 30s
metadata:
type: bash
cmd: "curl -sd 'test' http://podinfo-canary:9898/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
metadata:
cmd: "hey -z 2m -q 10 -c 2 -H 'Cookie: canary=always' http://app.example.com"
```
The above configuration will run an analysis for ten minutes targeting users that have a `canary` cookie set to `always` or
those that call the service using the `X-Canary: always` header.
**Note** that the load test now targets the external address and uses the canary cookie.
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.7.4
```
Flagger detects that the deployment revision changed and starts the A/B testing:
```text
kubectl -n test describe canary/podinfo
Events:
Starting canary deployment for podinfo.test
Pre-rollout check acceptance-test passed
Advance podinfo.test canary iteration 1/10
Advance podinfo.test canary iteration 2/10
Advance podinfo.test canary iteration 3/10
Advance podinfo.test canary iteration 4/10
Advance podinfo.test canary iteration 5/10
Advance podinfo.test canary iteration 6/10
Advance podinfo.test canary iteration 7/10
Advance podinfo.test canary iteration 8/10
Advance podinfo.test canary iteration 9/10
Advance podinfo.test canary iteration 10/10
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
```

View File

@@ -189,7 +189,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
podinfod=quay.io/stefanprodan/podinfo:1.7.1
```
Flagger detects that the deployment revision changed and starts a new rollout:

View File

@@ -119,7 +119,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
podinfod=quay.io/stefanprodan/podinfo:1.7.1
```
Flagger detects that the deployment revision changed and starts a new rollout:

Binary file not shown.


Binary file not shown.


3
go.mod
View File

@@ -4,6 +4,7 @@ go 1.12
require (
cloud.google.com/go v0.37.4 // indirect
github.com/Masterminds/semver v1.4.2
github.com/beorn7/perks v1.0.0 // indirect
github.com/bxcodec/faker v2.0.1+incompatible // indirect
github.com/envoyproxy/go-control-plane v0.8.0 // indirect
@@ -29,6 +30,8 @@ require (
github.com/mattn/go-isatty v0.0.7 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
github.com/mitchellh/hashstructure v1.0.0
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 // indirect
github.com/prometheus/common v0.3.0 // indirect

9
go.sum
View File

@@ -8,6 +8,7 @@ github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/MakeNowJust/heredoc v0.0.0-20171113091838-e9091a26100e/go.mod h1:64YHyfSL2R96J44Nlwm39UHepQbyR5q10x7iYa1ks2E=
github.com/Masterminds/goutils v1.1.0/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
github.com/Masterminds/semver v1.4.2 h1:WBLTQ37jOCzSLtXNdoo8bNM8876KhNqOKvrlGITgsTc=
github.com/Masterminds/semver v1.4.2/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/sprig v2.18.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
@@ -30,6 +31,7 @@ github.com/appscode/jsonpatch v0.0.0-20190108182946-7c0e3b262f30/go.mod h1:4AJxU
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da h1:8GUt8eRujhVEGZFFEjBj46YV4rDjvGrNxb0KMWYkL2I=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310 h1:BUAU3CGlLvorLI26FmByPp2eC2qla6E1Tw+scpcg/to=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/avast/retry-go v2.2.0+incompatible h1:m+w7mVLWa/oKqX2xYqiEKQQkeGH8DDEXB/XnjS54Wyw=
@@ -37,6 +39,7 @@ github.com/avast/retry-go v2.2.0+incompatible/go.mod h1:XtSnn+n/sHqQIpZ10K1qAevB
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0 h1:HWo1m869IqiPhD389kmkxeTalrjNbbJTC8LXupb+sl0=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
@@ -79,6 +82,7 @@ github.com/evanphx/json-patch v4.1.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi
github.com/evanphx/json-patch v4.2.0+incompatible h1:fUDGZCv/7iAN7u0puUVhvKCcsR6vRfwrJatElLBEf0I=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fgrosse/zaptest v1.1.0 h1:sK9hP0/xBoNX5qfFo3KWFluDXfc809APomI1QXuYELA=
github.com/fgrosse/zaptest v1.1.0/go.mod h1:vMnRSul6kW7kIUXZgnZZcDwyTn8k49ODfAULL8nmL5w=
@@ -153,6 +157,7 @@ github.com/google/gofuzz v1.0.0 h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/uuid v1.0.0 h1:b4Gk+7WdP/d3HZH8EJsZpvV7EtDOgaZLtnaNGIu1adA=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gnostic v0.0.0-20170426233943-68f4ded48ba9/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
@@ -190,6 +195,7 @@ github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-sockaddr v1.0.2 h1:ztczhD1jLxIRjVejw8gFomI1BQZOe2WoVOu0SyteCQc=
github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A=
github.com/hashicorp/go-syslog v1.0.0 h1:KaodqZuhUoZereWVIYmpUgZysurB1kBLX2j0MwMrUAE=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1 h1:fv1ep09latC32wFoVwnqcnKJGnMSdBanPczbHAYm1BE=
@@ -200,6 +206,7 @@ github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3 h1:EmmoJme1matNzb+hMpDuR/0sbJSUisxyqBGG676r31M=
@@ -263,6 +270,7 @@ github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/miekg/dns v1.0.14 h1:9jZdLNd/P4+SfEJ0TNyxYpsK8N4GtfylBLqtbYN1sbA=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0 h1:iGBIsUe3+HZ/AD/Vd7DErOt5sU9fa8Uj7A2s1aggv1Y=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
@@ -313,6 +321,7 @@ github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1 h1:ccV59UEOTzVDnDUEFdT95ZzHVZ+5+158q8+SJb2QV5w=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=

145
kustomize/README.md Normal file
View File

@@ -0,0 +1,145 @@
# Flagger Kustomize installer
As an alternative to Helm, Flagger can be installed with [Kustomize](https://kustomize.io/).
## Service mesh specific installers
Install Flagger for Istio:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/istio
```
This deploys Flagger in the `istio-system` namespace and sets the metrics server URL to Istio's Prometheus instance.
Note that you'll need kubectl 1.14 to run the above command, or you can download the
[kustomize binary](https://github.com/kubernetes-sigs/kustomize/releases) and run:
```bash
kustomize build github.com/weaveworks/flagger//kustomize/istio | kubectl apply -f -
```
Install Flagger for Linkerd:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/linkerd
```
This deploys Flagger in the `linkerd` namespace and sets the metrics server URL to Linkerd's Prometheus instance.
If you want to install a specific Flagger release, add the version number to the URL:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/linkerd?ref=0.18.0
```
## Generic installer
Install Flagger and Prometheus:
```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/kubernetes
```
This deploys Flagger and Prometheus in the `flagger-system` namespace,
sets the metrics server URL to `http://flagger-prometheus.flagger-system:9090` and the mesh provider to `kubernetes`.
To target a different provider, you can specify it in the canary custom resource:
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: app
namespace: test
spec:
# can be: kubernetes, istio, linkerd, appmesh, nginx, gloo
# use the kubernetes provider for Blue/Green style deployments
provider: nginx
```
You'll need Prometheus when using Flagger with AWS App Mesh, Gloo or the NGINX ingress controller.
The Prometheus instance has a data retention of two hours and is configured to scrape all pods in your cluster that
have the `prometheus.io/scrape: "true"` annotation.
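For example, a workload opts into scraping by adding the annotation to its pod template. A minimal sketch; the `prometheus.io/port` and `prometheus.io/path` overrides are optional and the values below are illustrative:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: test
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # picked up by the bundled kubernetes-pods scrape job
        prometheus.io/scrape: "true"
        prometheus.io/port: "9898"
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - name: app
        image: quay.io/stefanprodan/podinfo:1.7.0
        ports:
        - containerPort: 9898
```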
## Configure Slack notifications
Create a kustomization file that uses the Flagger base:
```bash
cat > kustomization.yaml <<EOF
namespace: istio-system
bases:
- github.com/weaveworks/flagger/kustomize/base/flagger
patchesStrategicMerge:
- patch.yaml
EOF
```
Create a patch that enables Slack notifications by setting the Slack channel and webhook URL:
```bash
cat > patch.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
spec:
template:
spec:
containers:
- name: flagger
args:
- -mesh-provider=istio
- -metrics-server=http://prometheus.istio-system:9090
- -slack-user=flagger
- -slack-channel=alerts
- -slack-url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
EOF
```
Install Flagger for Istio with Slack notifications:
```bash
kubectl apply -k .
```
## Configure MS Teams notifications
Create a kustomization file that uses the Flagger base:
```bash
cat > kustomization.yaml <<EOF
namespace: linkerd
bases:
- github.com/weaveworks/flagger/kustomize/base/flagger
patchesStrategicMerge:
- patch.yaml
EOF
```
Create a patch that sets the MS Teams webhook URL:
```bash
cat > patch.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
spec:
template:
spec:
containers:
- name: flagger
args:
- -mesh-provider=linkerd
- -metrics-server=http://linkerd-prometheus:9090
- -msteams-url=https://outlook.office.com/webhook/YOUR/TEAMS/WEBHOOK
EOF
```
Install Flagger for Linkerd with MS Teams notifications:
```bash
kubectl apply -k .
```
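Whichever overlay you apply, you can check that Flagger is running and that the patched arguments took effect. A minimal sketch, assuming the `linkerd` namespace used above:
```bash
kubectl -n linkerd rollout status deployment/flagger
kubectl -n linkerd logs deployment/flagger -f
```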


@@ -0,0 +1,6 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: flagger
namespace: flagger-system


@@ -0,0 +1,225 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1alpha3
versions:
- name: v1alpha3
served: true
storage: true
- name: v1alpha2
served: true
storage: false
- name: v1alpha1
served: true
storage: false
names:
plural: canaries
singular: canary
kind: Canary
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
JSONPath: .status.phase
- name: Weight
type: string
JSONPath: .status.canaryWeight
- name: LastTransitionTime
type: string
JSONPath: .status.lastTransitionTime
validation:
openAPIV3Schema:
properties:
spec:
required:
- targetRef
- service
- canaryAnalysis
properties:
provider:
type: string
progressDeadlineSeconds:
type: number
targetRef:
type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
autoscalerRef:
anyOf:
- type: string
- type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
ingressRef:
anyOf:
- type: string
- type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
service:
type: object
required: ['port']
properties:
port:
type: number
portName:
type: string
portDiscovery:
type: boolean
meshName:
type: string
timeout:
type: string
skipAnalysis:
type: boolean
canaryAnalysis:
properties:
interval:
description: Canary schedule interval
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic percentage routed to canary
type: number
stepWeight:
description: Canary incremental traffic percentage step
type: number
metrics:
description: Prometheus query list for this canary
type: array
properties:
items:
type: object
required: ['name', 'threshold']
properties:
name:
description: Name of the Prometheus metric
type: string
interval:
description: Interval of the promql query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max scalar value accepted for this metric
type: number
query:
description: Prometheus query
type: string
webhooks:
description: Webhook list for this canary
type: array
properties:
items:
type: object
required: ['name', 'url', 'timeout']
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
- pre-rollout
- rollout
- post-rollout
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
status:
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Finalising
- Succeeded
- Failed
canaryWeight:
description: Traffic weight percentage routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
properties:
items:
type: object
required: ['type', 'status', 'reason']
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string


@@ -0,0 +1,56 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: flagger
template:
metadata:
labels:
app: flagger
annotations:
prometheus.io/scrape: "true"
spec:
serviceAccountName: flagger
containers:
- name: flagger
image: weaveworks/flagger:0.16.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=2
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=2
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001


@@ -0,0 +1,11 @@
namespace: flagger-system
commonLabels:
app: flagger
resources:
- account.yaml
- rbac.yaml
- crd.yaml
- deployment.yaml
images:
- name: weaveworks/flagger
newTag: 0.18.2


@@ -0,0 +1,90 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: flagger
rules:
- apiGroups:
- ""
resources:
- events
- configmaps
- secrets
- services
verbs: ["*"]
- apiGroups:
- apps
resources:
- deployments
verbs: ["*"]
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs: ["*"]
- apiGroups:
- "extensions"
resources:
- ingresses
- ingresses/status
verbs: ["*"]
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
verbs: ["*"]
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/status
- destinationrules
- destinationrules/status
verbs: ["*"]
- apiGroups:
- appmesh.k8s.aws
resources:
- meshes
- meshes/status
- virtualnodes
- virtualnodes/status
- virtualservices
- virtualservices/status
verbs: ["*"]
- apiGroups:
- split.smi-spec.io
resources:
- trafficsplits
verbs: ["*"]
- apiGroups:
- gloo.solo.io
resources:
- settings
- upstreams
- upstreamgroups
- proxies
- virtualservices
verbs: ["*"]
- apiGroups:
- gateway.solo.io
resources:
- virtualservices
- gateways
verbs: ["*"]
- nonResourceURLs:
- /version
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: flagger
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flagger
subjects:
- kind: ServiceAccount
name: flagger
namespace: flagger-system


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: flagger-prometheus
namespace: flagger-system


@@ -0,0 +1,52 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger-prometheus
namespace: flagger-system
spec:
replicas: 1
selector:
matchLabels:
app: flagger-prometheus
template:
metadata:
labels:
app: flagger-prometheus
annotations:
appmesh.k8s.aws/sidecarInjectorWebhook: disabled
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: flagger-prometheus
containers:
- name: prometheus
image: prom/prometheus:v2.10.0
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=2h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
memory: 128Mi
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
- name: data-volume
mountPath: /prometheus/data
volumes:
- name: config-volume
configMap:
name: flagger-prometheus
- name: data-volume
emptyDir: {}


@@ -0,0 +1,15 @@
namespace: flagger-system
commonLabels:
app: flagger-prometheus
resources:
- account.yaml
- rbac.yaml
- service.yaml
- deployment.yaml
configMapGenerator:
- name: flagger-prometheus
files:
- prometheus.yml
images:
- name: prom/prometheus
newTag: v2.10.0


@@ -0,0 +1,144 @@
global:
scrape_interval: 5s
scrape_configs:
# Scrape config for AppMesh Envoy sidecar
- job_name: 'appmesh-envoy'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_name]
action: keep
regex: '^envoy$'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: ${1}:9901
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
# Exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
# Scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# Scrape config for nodes
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# scrape config for cAdvisor
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# scrape config for pods
- job_name: kubernetes-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- source_labels: [ __address__ ]
regex: '.*9901.*'
action: drop
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name


@@ -0,0 +1,32 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: flagger-prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: flagger-prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flagger-prometheus
subjects:
- kind: ServiceAccount
name: flagger-prometheus
namespace: flagger-system


@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: flagger-prometheus
namespace: flagger-system
spec:
selector:
app: flagger-prometheus
ports:
- name: http
protocol: TCP
port: 9090


@@ -0,0 +1,5 @@
namespace: istio-system
bases:
- ../base/flagger/
patchesStrategicMerge:
- patch.yaml


@@ -0,0 +1,16 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
spec:
template:
spec:
containers:
- name: flagger
args:
- -log-level=info
- -mesh-provider=istio
- -metrics-server=http://prometheus:9090
- -slack-user=flagger
- -slack-channel=
- -slack-url=


@@ -0,0 +1,8 @@
namespace: flagger-system
resources:
- namespace.yaml
bases:
- ../base/flagger/
- ../base/prometheus/
patchesStrategicMerge:
- patch.yaml


@@ -0,0 +1,9 @@
apiVersion: v1
kind: Namespace
metadata:
name: flagger-system
annotations:
linkerd.io/inject: disabled
labels:
istio-injection: disabled
appmesh.k8s.aws/sidecarInjectorWebhook: disabled


@@ -0,0 +1,16 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
spec:
template:
spec:
containers:
- name: flagger
args:
- -log-level=info
- -mesh-provider=kubernetes
- -metrics-server=http://flagger-prometheus:9090
- -slack-user=flagger
- -slack-channel=
- -slack-url=


@@ -0,0 +1,5 @@
namespace: linkerd
bases:
- ../base/flagger/
patchesStrategicMerge:
- patch.yaml


@@ -0,0 +1,16 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
spec:
template:
spec:
containers:
- name: flagger
args:
- -log-level=info
- -mesh-provider=linkerd
- -metrics-server=http://linkerd-prometheus:9090
- -slack-user=flagger
- -slack-channel=
- -slack-url=


@@ -2,11 +2,12 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
replicas: 1
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
@@ -23,7 +24,7 @@ spec:
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
image: quay.io/stefanprodan/podinfo:1.7.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
@@ -37,7 +38,7 @@ spec:
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: green
value: blue
livenessProbe:
exec:
command:
@@ -45,10 +46,8 @@ spec:
- check
- http
- localhost:9898/healthz
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
@@ -56,14 +55,12 @@ spec:
- check
- http
- localhost:9898/readyz
failureThreshold: 3
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 2
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 1000m
memory: 256Mi
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 16Mi
memory: 64Mi


@@ -2,13 +2,12 @@ apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 1
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource


@@ -0,0 +1,5 @@
namespace: test
resources:
- hpa.yaml
- deployment.yaml


@@ -0,0 +1,59 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger-loadtester
labels:
app: flagger-loadtester
spec:
selector:
matchLabels:
app: flagger-loadtester
template:
metadata:
labels:
app: flagger-loadtester
annotations:
prometheus.io/scrape: "true"
spec:
containers:
- name: loadtester
image: weaveworks/flagger-loadtester:0.6.1
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level=info
- -timeout=1h
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001

Some files were not shown because too many files have changed in this diff.