Compare commits

..

425 Commits

Author SHA1 Message Date
Stefan Prodan
4463d41744 Update traffic routing
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 18:35:53 +03:00
Stefan Prodan
66663c2aaf Update traffic routing
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 18:34:50 +03:00
Stefan Prodan
d34a504516 Update intro
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 18:23:54 +03:00
Stefan Prodan
9fc5f9ca31 Update copyright
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 18:02:25 +03:00
Stefan Prodan
3cf9fe7cb7 Bump actions
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 17:58:02 +03:00
Stefan Prodan
6d57b5c490 Remove deprecated tutorials
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 17:54:41 +03:00
Stefan Prodan
7254a5a7f6 Merge pull request #1250 from staceypotter/patch-1
fix typo on readme
2022-08-23 17:18:33 +03:00
Stacey Potter
671016834d fix typo on readme
Signed-off-by: Stacey Potter <50154848+staceypotter@users.noreply.github.com>
2022-08-03 09:29:07 -04:00
Stefan Prodan
41dc056c87 Add Kuma
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-02-14 15:31:31 +02:00
Daniel Holbach
6757553685 Merge pull request #1028 from politician/patch-1
Fix typo (closes #1027)
2021-10-05 09:39:05 +02:00
Romain Barissat
1abceb8334 Fix typo
Closes #1027

Signed-off-by: Romain Barissat <romain-noreply@barissat.com>
2021-10-05 14:29:30 +07:00
Stefan Prodan
0939a1bcc9 Add metric providers section
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2021-09-20 11:49:01 +03:00
Stefan Prodan
d91012d79c Add OSM to homepage
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2021-08-23 15:30:00 +03:00
Stefan Prodan
6e9fe2c584 Flagger v1.6
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2021-02-26 17:04:27 +02:00
Stefan Prodan
ccad05c92f Add Traefik tutorial
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-12-07 12:13:08 +02:00
stefanprodan
8b9929008b Add Skipper ingress 2020-08-20 09:14:36 +03:00
stefanprodan
a881446252 Fix docs links 2020-05-08 13:20:53 +03:00
stefanprodan
183c73f2c4 Update website to v1.0.0-rc.1 2020-03-05 17:16:27 +02:00
stefanprodan
3e3eaebbf2 Add Contour to ingress providers 2020-01-06 16:06:55 +02:00
stefanprodan
bcc5bbf6fa Fix CI 2019-11-13 15:27:14 +02:00
stefanprodan
42c3052247 Update deployment strategies and tutorials list 2019-11-13 13:38:27 +02:00
Stefan Prodan
3593e20ad5 Add Helm v3 workshop link 2019-09-22 00:14:53 +03:00
stefanprodan
1f0819465b Link to fluxcd/flux repo 2019-08-20 19:23:01 +03:00
stefanprodan
7e48d7537f Add Helm Operator tutorial 2019-08-19 18:59:37 +03:00
stefanprodan
45c69ce3f7 Fix typo 2019-08-19 18:42:40 +03:00
stefanprodan
8699a07de9 Move meta to vue module 2019-08-19 17:39:48 +03:00
stefanprodan
fb92d31744 Remove node_modules 2019-08-19 17:30:55 +03:00
stefanprodan
bb7c4b1909 Remove hero image 2019-08-19 17:28:50 +03:00
stefanprodan
84d74d1eb5 Add Flagger website 2019-08-19 17:03:43 +03:00
Stefan Prodan
ddab72cd59 Merge pull request #276 from weaveworks/podinfo
Update podinfo to v2.0
2019-08-14 10:46:06 +03:00
stefanprodan
87d0b33327 Add provider field to nginx and gloo docs 2019-08-14 10:14:00 +03:00
stefanprodan
225a9015bb Update podinfo to v2.0 2019-08-14 09:28:36 +03:00
Stefan Prodan
c0b60b1497 Merge pull request #272 from weaveworks/appmesh
Set HTTP listeners for AppMesh virtual routers
2019-08-13 09:48:49 +03:00
Stefan Prodan
0463c19825 Merge pull request #275 from hiddeco/build/codegen
Support non `$GOPATH/src` location for codegen
2019-08-13 09:48:27 +03:00
Hidde Beydals
8e70aa90c1 Support non $GOPATH/src location for codegen
This commit fixes two things:

- it ensures the code generation works no matter where the project
  directory is located
- as a side effect, it fixes the `hack/verify-codegen.sh` script run during CI

The previous script (or, more specifically, the `code-generator` library)
assumed during execution that the project was placed inside
`$GOPATH/src` and made its modifications there.

The idea of Go modules, however, is that a project and/or package can
be placed anywhere, and this is what the CI did, resulting in a
comparison of two identical `cp -r` copied directories and giving a
false green light on every CI run.

To work around this limitation in `code-generator`: create a
temporary directory, use it as the output base, and copy
everything back once generated.
2019-08-12 22:41:10 +02:00
stefanprodan
0a418eb88a Add notifier tests 2019-08-12 09:47:11 +03:00
stefanprodan
040dbb8d03 Add http listener to virtual router reconciliation 2019-08-10 11:04:15 +03:00
stefanprodan
64f2288bdd Add listeners to AppMesh virtual router 2019-08-10 10:58:20 +03:00
Stefan Prodan
8008562a33 Merge pull request #271 from weaveworks/crd
Add missing fields to CRD validation spec
2019-08-07 11:09:07 +03:00
stefanprodan
a39652724d Add confirm and pre rollout hooks to e2e tests 2019-08-07 10:55:15 +03:00
stefanprodan
691c3c4f36 Add missing fields to CRD validation spec 2019-08-07 10:22:07 +03:00
Stefan Prodan
f6fa5e3891 Merge pull request #270 from weaveworks/prep-0.18.2
Release v0.18.2
2019-08-05 18:57:54 +03:00
stefanprodan
a305a0b705 Release v0.18.2 2019-08-05 18:43:57 +03:00
Stefan Prodan
dfe619e2ea Merge pull request #269 from weaveworks/helm-circleci
Publish Helm chart from CircleCI
2019-08-05 17:57:21 +03:00
stefanprodan
2b3d425b70 Publish Helm chart from CircleCI 2019-08-05 17:08:33 +03:00
Stefan Prodan
6e55fea413 Merge pull request #268 from weaveworks/istio-1.2.3
Update e2e backends
2019-08-03 15:44:53 +03:00
stefanprodan
b6a08b6615 Fix AppMesh mesh name in docs 2019-08-03 15:24:31 +03:00
stefanprodan
eaa6906516 Update e2e NGINX ingress to v1.12.1 2019-08-03 13:42:27 +03:00
stefanprodan
62a7a92f2a Update e2e Gloo to v0.18.8 2019-08-03 13:01:57 +03:00
stefanprodan
3aeb0945c5 Update e2e Istio to v1.2.3 2019-08-03 12:05:21 +03:00
Stefan Prodan
e8c85efeae Merge pull request #267 from fcantournet/fix_virtualservices_multipleports
Fix Port discovery with multiple port services
2019-08-03 12:04:04 +03:00
Félix Cantournet
6651f6452b Multiple port canary: fix FAQ and add e2e tests 2019-08-02 14:23:58 +02:00
Félix Cantournet
0ca48d77be Fix Port discovery with multiple port services
This fixes issue https://github.com/weaveworks/flagger/issues/263

We actually don't need to specify any ports in the VirtualService
and DestinationRules.
Istio will create clusters/listeners for each named port declared in
the Kubernetes services, and the router can be shared since it operates only on L7 criteria.

Also contains a tiny clean-up of imports
2019-08-02 10:07:00 +02:00
Stefan Prodan
a9e0e018e3 Merge pull request #266 from ExpediaInc/master
Parameterize image pull secrets for private docker repos
2019-08-01 11:07:53 +03:00
Sky Moon
122d11f445 Merge pull request #1 from ExpediaInc/parameterizeImagePullSecrets
parameterize image pull secrets for private docker repos.
2019-08-01 00:50:15 -07:00
cmoon
b03555858c parameterize image pull secrets for private docker repos. 2019-08-01 00:47:07 -07:00
Stefan Prodan
dcc5a40441 Merge pull request #262 from weaveworks/prep-0.18.1
Release v0.18.1
2019-07-30 13:52:25 +03:00
stefanprodan
8c949f59de Package helm charts locally 2019-07-30 13:35:09 +03:00
stefanprodan
e8d91a0375 Release v0.18.1 2019-07-30 13:22:51 +03:00
Stefan Prodan
fae9aa664d Merge pull request #261 from weaveworks/blue-green-e2e
Fix Blue/Green metrics provider and add e2e tests
2019-07-30 13:16:20 +03:00
stefanprodan
c31e9e5a96 Use Linkerd metrics for ingress and kubernetes routers 2019-07-30 13:00:28 +03:00
stefanprodan
99fff98274 Kustomize: set Flagger log level to info 2019-07-30 12:43:02 +03:00
stefanprodan
11d84bf35d Enable kubernetes metric provider 2019-07-30 12:27:53 +03:00
stefanprodan
e56ba480c7 Add Blue/Green e2e tests 2019-07-30 12:02:15 +03:00
Stefan Prodan
b9f0517c5d Merge pull request #255 from weaveworks/prep-0.18.0
Release v0.18.0
2019-07-29 16:06:23 +03:00
stefanprodan
6e66f02585 Update changelog 2019-07-29 15:52:50 +03:00
stefanprodan
5922e96044 Merge branch 'prep-0.18.0' of https://github.com/weaveworks/flagger into prep-0.18.0 2019-07-29 15:06:43 +03:00
stefanprodan
f36e7e414a Add manual gating link to readme 2019-07-29 15:06:31 +03:00
stefanprodan
606754d4a5 Disable supergloo e2e 2019-07-29 15:06:31 +03:00
stefanprodan
a3847e64df Add Kustomize download link to docs 2019-07-29 15:06:31 +03:00
stefanprodan
7a3f9f2e73 Use Kustomize for Istio e2e testing 2019-07-29 15:06:31 +03:00
stefanprodan
2e4e8b0bf9 Make installer work with Kustomize v3 2019-07-29 15:06:31 +03:00
stefanprodan
951fe80115 Use crd.create=false in docs 2019-07-29 15:06:30 +03:00
stefanprodan
c0a8149acb Add kubectl min version to Kustomize docs 2019-07-29 15:06:30 +03:00
stefanprodan
80b75b227d Add CRD install step to chart 2019-07-29 15:06:30 +03:00
stefanprodan
dff7de09f2 Use kubectl for CRD install 2019-07-29 15:06:30 +03:00
stefanprodan
b3bbadfccf Add v0.18.0 to changelog 2019-07-29 15:06:30 +03:00
stefanprodan
fc676e3cb7 Release v0.18.0 2019-07-29 15:06:30 +03:00
stefanprodan
860c82dff9 Remove test artifacts 2019-07-29 15:06:30 +03:00
Stefan Prodan
4829f5af7f Merge pull request #257 from weaveworks/promotion
Implement promotion finalising state
2019-07-29 15:03:05 +03:00
stefanprodan
c463b6b231 Add finalising state tests 2019-07-29 14:02:16 +03:00
stefanprodan
b2ca0c4c16 Implement finalising state
Set the canary status to finalising after routing the traffic back to the primary. Run one final loop before scaling the canary to zero so that the canary has a chance to process all inflight requests.
2019-07-29 13:52:11 +03:00
stefanprodan
69875cb3dc Add finalising status phase to CRD 2019-07-29 13:43:30 +03:00
stefanprodan
9e33a116d4 Add manual gating link to readme 2019-07-29 11:33:28 +03:00
stefanprodan
dab3d53b65 Disable supergloo e2e 2019-07-28 11:28:00 +03:00
stefanprodan
e3f8bff6fc Add Kustomize download link to docs 2019-07-27 15:51:22 +03:00
stefanprodan
0648d81d34 Use Kustomize for Istio e2e testing 2019-07-27 14:49:57 +03:00
stefanprodan
ece5c4401e Make installer work with Kustomize v3 2019-07-27 14:45:49 +03:00
stefanprodan
bfc64c7cf1 Use crd.create=false in docs 2019-07-27 13:20:55 +03:00
stefanprodan
0a2c134ece Add kubectl min version to Kustomize docs 2019-07-27 13:07:47 +03:00
stefanprodan
8bea9253c3 Add CRD install step to chart 2019-07-27 13:06:27 +03:00
stefanprodan
e1dacc3983 Use kubectl for CRD install 2019-07-26 15:52:00 +03:00
stefanprodan
0c6a7355e7 Add v0.18.0 to changelog 2019-07-26 14:05:19 +03:00
stefanprodan
83046282c3 Release v0.18.0 2019-07-26 13:53:40 +03:00
stefanprodan
65c9817295 Remove test artifacts 2019-07-26 13:51:15 +03:00
Stefan Prodan
e4905d3d35 Merge pull request #254 from weaveworks/podinfo
Use Kustomize installer in Linkerd docs
2019-07-26 13:44:51 +03:00
stefanprodan
6bc0670a7a Use Kustomize installer in Linkerd docs 2019-07-26 13:30:28 +03:00
stefanprodan
95ff6adc19 Use podinfo 1.7 in GitOps demo 2019-07-26 13:20:06 +03:00
stefanprodan
7ee51c7def Add podinfo to Kustomize installer 2019-07-26 13:19:36 +03:00
Stefan Prodan
dfa065b745 Merge pull request #251 from weaveworks/gates
Implement confirm rollout gate, hook and API
2019-07-26 01:40:35 +03:00
stefanprodan
e3b03debde Use podinfo v1.7 2019-07-26 01:25:44 +03:00
Stefan Prodan
ef759305cb Merge pull request #253 from grampelberg/master
Update Linkerd to use correct canaries directory.
2019-07-26 00:24:52 +03:00
grampelberg
ad65497d4e Update Linkerd to use correct canaries directory. 2019-07-25 11:10:52 -07:00
stefanprodan
163f5292b0 Push a notification when a canary is waiting for approval 2019-07-25 19:13:22 +03:00
stefanprodan
e07a82d024 Add manual gating to docs 2019-07-25 13:32:58 +03:00
stefanprodan
046245a8b5 Use Gloo 0.17.6 in e2e tests 2019-07-24 19:54:33 +03:00
stefanprodan
aa6a180bcc Remove Gloo NodePort from e2e tests 2019-07-24 19:44:06 +03:00
stefanprodan
c4d28e14fc Upgrade Gloo e2e to v0.17.5 2019-07-24 19:35:02 +03:00
stefanprodan
bc4bdcdc1c Upgrade Gloo e2e to v0.17.6 2019-07-24 19:21:41 +03:00
stefanprodan
be22ff9951 Bump load tester version 2019-07-24 16:28:46 +03:00
stefanprodan
f204fe53f4 Implement canary gating API with in-memory storage
POST /gate/[check|open|close]
2019-07-24 16:14:22 +03:00
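The gating API described in this commit maps naturally onto a small in-memory store behind three HTTP endpoints. The sketch below is illustrative only and is not Flagger's actual implementation: the endpoint paths come from the commit message, while the payload shape, response strings, and helper names are assumptions.

```go
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

// gateStorage keeps the open/closed state of canary gates in memory,
// keyed by "name.namespace". State is lost on restart by design.
type gateStorage struct {
	mu    sync.Mutex
	gates map[string]bool
}

func (gs *gateStorage) set(key string, open bool) {
	gs.mu.Lock()
	defer gs.mu.Unlock()
	gs.gates[key] = open
}

func (gs *gateStorage) isOpen(key string) bool {
	gs.mu.Lock()
	defer gs.mu.Unlock()
	return gs.gates[key]
}

// canary is the assumed request payload identifying the canary resource.
type canary struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace"`
}

func main() {
	store := &gateStorage{gates: map[string]bool{}}

	handle := func(action string) http.HandlerFunc {
		return func(w http.ResponseWriter, r *http.Request) {
			var c canary
			if err := json.NewDecoder(r.Body).Decode(&c); err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			key := c.Name + "." + c.Namespace
			switch action {
			case "open":
				store.set(key, true)
			case "close":
				store.set(key, false)
			}
			// "check" falls through and simply reports the current state
			if store.isOpen(key) {
				w.Write([]byte("Approved"))
			} else {
				w.Write([]byte("Forbidden"))
			}
		}
	}

	http.HandleFunc("/gate/check", handle("check"))
	http.HandleFunc("/gate/open", handle("open"))
	http.HandleFunc("/gate/close", handle("close"))

	if err := http.ListenAndServe(":8080", nil); err != nil {
		panic(err)
	}
}
```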
stefanprodan
28e7e89047 Pause or resume analysis on confirmation gate toggle 2019-07-24 16:09:13 +03:00
stefanprodan
75d49304f3 Add confirm-rollout hook to docs 2019-07-24 12:17:11 +03:00
stefanprodan
04cbacb6e0 Implement confirm rollout gate and hook
The confirm-rollout hooks are executed before the pre-rollout hooks. Flagger will halt the canary rollout until the confirm webhook returns HTTP status 200.
2019-07-24 12:09:39 +03:00
stefanprodan
c46c7b9e21 Add canary status conditions to docs 2019-07-24 12:04:05 +03:00
stefanprodan
919dafa567 Add gate halt and approve endpoints 2019-07-24 12:02:44 +03:00
stefanprodan
dfdcfed26e Add Waiting canary status phase
The Waiting phase means the canary rollout is paused (waiting for confirmation to proceed)
2019-07-24 12:00:04 +03:00
Stefan Prodan
a0a4d4cfc5 Merge pull request #248 from weaveworks/ghz
Add gRPC load testing tool
2019-07-23 12:44:04 +03:00
stefanprodan
970a589fd3 Add load tester to kustomize installer 2019-07-23 12:30:38 +03:00
stefanprodan
56d2c0952a Add gRPC load test example to docs 2019-07-22 15:16:13 +03:00
stefanprodan
4871be0345 Release loadtester v0.5.0 2019-07-22 14:57:14 +03:00
stefanprodan
e3e112e279 Add gRPC load testing tool
https://ghz.sh
2019-07-22 14:55:19 +03:00
Stefan Prodan
d2cbd40d89 Merge pull request #240 from weaveworks/refactor
Refactor canary change detection and status
2019-07-22 14:33:02 +03:00
stefanprodan
3786a49f00 Update Linkerd e2e to v2.4.0 2019-07-16 11:20:42 +02:00
stefanprodan
ff4aa62061 Retry canary status update on conflict 2019-07-10 11:31:20 +03:00
stefanprodan
9b6cfdeef7 Update Canary CRD helm chart and Kustomize 2019-07-10 09:55:46 +03:00
stefanprodan
9d89e0c83f Log status update error 2019-07-10 09:55:20 +03:00
stefanprodan
559cbd0d36 Pin NGINX helm chart to v1.8.2 2019-07-10 09:49:39 +03:00
stefanprodan
caea00e47f Pin NGINX helm chart to version 1.8.2 2019-07-10 09:42:49 +03:00
stefanprodan
b26542f38d Do not trigger a canary deployment on manual rollback
Save the primary spec hash and check whether it matches the canary spec. If the canary hash is identical to the primary one, skip the promotion.
2019-07-10 09:08:33 +03:00
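A rough sketch of the idea above: hash the canary's pod template spec, remember the hash of what was last promoted to the primary, and skip promotion when the two match. Function and field names below are illustrative, not Flagger's actual code.

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// specHash returns a stable hash of the deployment's pod template spec.
func specHash(d *appsv1.Deployment) (string, error) {
	b, err := json.Marshal(d.Spec.Template.Spec)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", sha256.Sum256(b)), nil
}

// shouldPromote returns false when the canary spec is identical to what the
// primary is already running, e.g. after a manual rollback.
func shouldPromote(canary *appsv1.Deployment, lastPromotedHash string) (bool, error) {
	h, err := specHash(canary)
	if err != nil {
		return false, err
	}
	return h != lastPromotedHash, nil
}

func main() {
	// in a controller the deployment would come from the Kubernetes API and
	// lastPromotedHash from the Canary status; this just exercises the helpers
	canary := &appsv1.Deployment{}
	h, _ := specHash(canary)
	fmt.Println(shouldPromote(canary, h)) // false: nothing changed since last promotion
}
```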
Stefan Prodan
bbab7ce855 Merge pull request #238 from weaveworks/prep-0.17.0
Release v0.17.0
2019-07-10 08:46:10 +03:00
stefanprodan
afa2d079f6 Add status conditions and descriptions to CRD 2019-07-09 17:11:13 +03:00
stefanprodan
108bf9ca65 Add initializing canary phase/status condition reason
Fix HPA reconciliation min replicas diff
2019-07-09 17:10:43 +03:00
stefanprodan
438f952128 Implement status conditions
Add Promoted status condition with the following reasons: Initialized, Progressing, Succeeded, Failed
Usage: `kubectl wait canary/app --for=condition=promoted`
Fix: #184
2019-07-09 15:22:56 +03:00
stefanprodan
3e84799644 Detect changes in pod template metadata
Use the pod template spec hash to track changes (breaking)
2019-07-09 08:52:31 +03:00
stefanprodan
d6e80bac7f Update webhook mTLS FAQ
Fix: #239
2019-07-08 17:21:59 +03:00
stefanprodan
9b3b24bddf Add v0.17.0 changelog 2019-07-08 15:02:09 +03:00
stefanprodan
5c831ae482 Add Linkerd to docs 2019-07-08 13:43:01 +03:00
stefanprodan
78233fafd3 Release v0.17.0 2019-07-08 13:35:36 +03:00
Stefan Prodan
73c3e07859 Merge pull request #236 from weaveworks/leader-election
Implement leader election
2019-07-08 11:32:06 +03:00
stefanprodan
10c61daee4 Exit when losing leadership 2019-07-07 13:09:05 +03:00
stefanprodan
b1bb9fa114 Enable leader election for e2e testing 2019-07-07 12:19:09 +03:00
stefanprodan
a7f4b6d2ae Add leader election and pod anti affinity to chart 2019-07-07 12:08:08 +03:00
stefanprodan
b937c4ea8d Implement leader election
Add enable-leader-election and leader-election-namespace flags
2019-07-07 11:41:27 +03:00
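For reference, leader election in a Kubernetes controller is typically wired up with client-go's leaderelection helpers, along the lines of the sketch below. This is a generic client-go example using a Lease lock, not necessarily how Flagger implements it; the lock name, namespace, and timings are placeholders.

```go
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "flagger-leader", Namespace: "flagger-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the canary control loop only after acquiring the lease
			},
			OnStoppedLeading: func() {
				// exit when losing leadership so the replacement pod takes over
				os.Exit(1)
			},
		},
	})
}
```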
Stefan Prodan
e577311b64 Merge pull request #235 from weaveworks/msteams
Implement MS Teams notifications
2019-07-07 11:40:00 +03:00
stefanprodan
b847345308 Add 5 seconds timeout to notifier 2019-07-06 18:02:45 +03:00
stefanprodan
85e683446f Add MS Teams to docs 2019-07-06 17:22:39 +03:00
stefanprodan
4f49aa5760 Add MS Teams webhook field to chart 2019-07-06 17:14:50 +03:00
stefanprodan
8ca9cf24bb Implement MS Teams notifier 2019-07-06 17:14:21 +03:00
stefanprodan
61d0216c21 Add traffic routing to notifications 2019-07-06 17:13:09 +03:00
stefanprodan
ba4a2406ba Refactor notifier to allow more implementations 2019-07-06 15:47:12 +03:00
Stefan Prodan
c2974416b4 Merge pull request #234 from weaveworks/psp
Add pod security policy to Helm chart
2019-07-06 11:52:58 +03:00
stefanprodan
48fac4e876 Disable privilege escalation 2019-07-06 10:38:17 +03:00
stefanprodan
f0add9a67c Use a role binding for the PSP rbac 2019-07-05 17:10:05 +03:00
stefanprodan
20f9df01c2 Add pod security policy to Helm chart
- disable privileged, hostIPC, hostNetwork and hostPID
- add psp flag to chart readme
2019-07-05 16:47:41 +03:00
Stefan Prodan
514e850072 Merge pull request #232 from weaveworks/kustomize
Add Kustomize installer
2019-07-05 16:23:44 +03:00
stefanprodan
61fe78a982 Mention Prometheus data retention in docs 2019-07-04 15:07:05 +03:00
stefanprodan
c4b066c845 Add Kustomize installer to docs 2019-07-04 13:45:56 +03:00
stefanprodan
d24a23f3bd Kustomize installer: add installer readme 2019-07-04 13:29:02 +03:00
stefanprodan
22045982e2 Kustomize installer: add Linkerd overlay 2019-07-04 13:27:23 +03:00
stefanprodan
f496f1e18f Kustomize installer: add Istio overlay 2019-07-04 13:27:03 +03:00
stefanprodan
2e802432c4 Kustomize installer: add Kubernetes overlay 2019-07-04 13:26:52 +03:00
stefanprodan
a2f747e16f Kustomize installer: add Prometheus base manifests 2019-07-04 13:26:22 +03:00
stefanprodan
982338e162 Kustomize installer: add Flagger base manifests 2019-07-04 13:26:08 +03:00
Stefan Prodan
03fe4775dd Merge pull request #231 from weaveworks/gloo-0.14.2
Update Gloo and Prometheus
2019-07-02 09:24:14 +03:00
stefanprodan
def7d9bde0 Update Prometheus to v2.10.0 and set retention to 2h 2019-07-02 09:10:56 +03:00
stefanprodan
a58a7cbeeb Update Gloo to 0.14.2 2019-07-01 16:07:38 +03:00
Stefan Prodan
82ca66c23b Merge pull request #230 from weaveworks/linkerd
Add Linkerd support and e2e testing
2019-07-01 10:28:38 +03:00
stefanprodan
92c971c0d7 Add ingress and A/B testing example to Linkerd docs 2019-07-01 10:02:50 +03:00
stefanprodan
30c4faf72b Add Linkerd canary deployments docs 2019-07-01 09:23:38 +03:00
stefanprodan
85ee7d17cf Set min analysis interval to 10s 2019-07-01 09:23:05 +03:00
stefanprodan
a6d278ae91 Add Linkerd traffic split diagram 2019-06-30 13:52:59 +03:00
stefanprodan
ad8d02f701 Use Linkerd metrics when NGINX is the mesh ingress
Set the metrics provider to Linkerd Prometheus when using NGINX as Linkerd Ingress. This mitigates the lack of canary metrics in the NGINX controller exporter.
2019-06-30 13:03:27 +03:00
stefanprodan
00fa5542f7 Add linkerd as mesh provider 2019-06-30 12:46:23 +03:00
stefanprodan
9ed2719d19 Add canary rollback test to Linkerd e2e 2019-06-30 10:36:37 +03:00
stefanprodan
8a809baf35 Linkerd e2e testing: set canary max weight to 50% 2019-06-29 16:06:16 +03:00
stefanprodan
ff90c42fa7 Fix Linkerd CLI install 2019-06-29 15:53:36 +03:00
stefanprodan
d651e8fe48 Fix Linkerd metrics test 2019-06-29 15:23:40 +03:00
stefanprodan
bc613905e9 Add Linkerd edge-19.6.4 e2e testing 2019-06-29 15:20:35 +03:00
stefanprodan
e3321118e5 Fix linkerd success rate query 2019-06-29 15:03:36 +03:00
Stefan Prodan
31f526cbd6 Merge pull request #229 from weaveworks/istio-1.2.2
Update Istio e2e to v1.2.2
2019-06-29 10:53:31 +03:00
stefanprodan
493554178f Update Istio e2e to v1.2.2
Disable galley and MCP
2019-06-29 10:27:40 +03:00
Stefan Prodan
004b1cc7dd Merge pull request #228 from weaveworks/updates
Update Grafana and Kubernetes Kind
2019-06-27 10:31:20 +03:00
stefanprodan
767602592c Bump podinfo chart version 2019-06-27 10:19:34 +03:00
stefanprodan
34676acaf5 Add Istio TLS mode to podinfo chart 2019-06-27 10:14:06 +03:00
stefanprodan
491ab7affa Update Grafana to v6.2.5 2019-06-27 10:13:25 +03:00
stefanprodan
b522bbd903 Update Kubernetes Kind to v0.4.0 2019-06-27 09:58:56 +03:00
Stefan Prodan
dd3bc28806 Merge pull request #227 from dcherman/validate-k8s-version
Validate the minimum supported k8s version
2019-06-27 09:56:25 +03:00
Daniel Herman
764e7e275d Validate the minimum supported k8s version
Fixes #103
2019-06-27 00:18:08 -04:00
stefanprodan
931c051153 Fix tag composition in release script 2019-06-24 17:46:12 +03:00
Stefan Prodan
3da86fe118 Merge pull request #224 from weaveworks/hpa
Update primary HPA only when canary changed
2019-06-24 17:14:26 +03:00
stefanprodan
93f37a3022 Update primary HPA only when canary changed 2019-06-24 16:21:22 +03:00
stefanprodan
77b3d861e6 Add release workflow to CI 2019-06-24 16:08:56 +03:00
Stefan Prodan
ce0e16ffe8 Merge pull request #222 from weaveworks/release-v0.16.0
Release v0.16.0
2019-06-24 13:53:59 +03:00
stefanprodan
fb9709ae78 Add Blue/Green to FAQ 2019-06-23 15:48:13 +03:00
stefanprodan
191c3868ab Update changelog for v0.16.0 2019-06-23 13:52:01 +03:00
stefanprodan
d076f0859e Release v0.16.0 2019-06-23 13:48:39 +03:00
stefanprodan
df24ba86d0 Add Blue/Green tutorial to docs 2019-06-23 13:40:12 +03:00
stefanprodan
3996bcfa67 Add canary provider field to docs 2019-06-23 13:35:36 +03:00
Stefan Prodan
9e8a4ad384 Merge pull request #221 from weaveworks/gloo-v0.14.0
Update Gloo e2e testing to v0.14.0
2019-06-23 11:35:31 +03:00
stefanprodan
26ee668612 Use Kind 0.2.1 for Gloo e2e 2019-06-23 11:22:21 +03:00
stefanprodan
e3c102e7f8 Use test ns for Gloo virtual service in e2e 2019-06-22 17:49:13 +03:00
stefanprodan
ba60b127ea Use Kind 0.3.0 for Gloo e2e 2019-06-22 16:50:50 +03:00
stefanprodan
74c69dc07e Update Gloo e2e to v0.14.0 2019-06-22 16:48:38 +03:00
Stefan Prodan
0687d89178 Merge pull request #220 from weaveworks/update-e2e
Update e2e tests to Kubernetes 1.14
2019-06-22 16:12:20 +03:00
stefanprodan
7a454c005f Use Kind 0.2.1 for supergloo e2e 2019-06-22 15:55:48 +03:00
stefanprodan
2ce4f3a93e Revert supergloo upgrade (Istio 1.1 not supported in v0.3.23) 2019-06-22 15:41:17 +03:00
stefanprodan
7baaaebdd4 Use Kind 0.2.1 for Gloo e2e 2019-06-22 15:28:45 +03:00
stefanprodan
608c7f7a31 Use Istio 1.1.3 for supergloo e2e testing 2019-06-22 15:15:38 +03:00
stefanprodan
1a0daa8678 Use http probes with Kind 0.3.0 2019-06-22 14:58:01 +03:00
stefanprodan
ed0d25af97 Revert to Kind 0.2.1 2019-06-22 14:33:06 +03:00
stefanprodan
720d04aba1 Update Supergloo to v0.3.23 2019-06-22 14:16:30 +03:00
stefanprodan
901648393a Update Kubernetes Kind to v0.3.0 2019-06-22 14:02:33 +03:00
Stefan Prodan
b5acd817fc Merge pull request #219 from weaveworks/istio-1.2.0
Fix Istio 1.2.0 e2e testing
2019-06-22 13:58:48 +03:00
stefanprodan
2586fc6ef0 Update Kubernetes Kind to v0.3.0 2019-06-22 13:58:18 +03:00
stefanprodan
62e0eb6395 Update changelog 2019-06-22 13:44:47 +03:00
stefanprodan
768b0490e2 Show CircleCI build status 2019-06-22 13:43:39 +03:00
stefanprodan
852454fa2c Fix Istio v1.2.0 e2e testing by enabling galley 2019-06-22 13:42:57 +03:00
Stefan Prodan
970b67d6f6 Merge pull request #212 from marcoferrer/bump-e2e-istio-version
Upgrade e2e tests to Istio v1.2.0
2019-06-22 13:41:11 +03:00
Stefan Prodan
ea0eddff82 Merge pull request #218 from weaveworks/ci
Refactor CI and e2e testing
2019-06-22 13:11:29 +03:00
stefanprodan
0d4d2ac37b CircleCI: Build and push load tester 2019-06-22 09:51:37 +03:00
stefanprodan
d0591916a4 Update k8s packages 2019-06-21 23:42:59 +03:00
stefanprodan
6a8aef8675 CircleCI: workaround for code gen 2019-06-21 21:47:09 +03:00
stefanprodan
a894a7a0ce CircleCI: update code gen package 2019-06-21 20:36:18 +03:00
stefanprodan
0bbe724b8c CircleCI: chmod k8s code gen 2019-06-21 16:43:16 +03:00
stefanprodan
bea22c0259 CircleCI: run go mod download 2019-06-21 16:40:25 +03:00
stefanprodan
6363580120 Fix k8s code gen 2019-06-21 16:34:19 +03:00
stefanprodan
cbdc7ef2d3 Build and run k8s code gen with go modules 2019-06-21 16:31:51 +03:00
stefanprodan
0959406609 Remove vendor dir 2019-06-21 16:31:19 +03:00
stefanprodan
cf41f9a478 CircleCI - fix deprecated goreleaser config 2019-06-21 16:15:51 +03:00
stefanprodan
6fe6a41e3e CircleCI - cleanup branch filters 2019-06-21 15:58:23 +03:00
stefanprodan
91cd2648d9 CircleCI - run goreleaser for git tags 2019-06-21 15:39:56 +03:00
stefanprodan
240591a6b8 CircleCI - run goreleaser 2019-06-21 15:29:38 +03:00
stefanprodan
2973822113 CircleCI - test goreleaser job 2019-06-21 15:17:39 +03:00
stefanprodan
a6b2b1246c CircleCI - add goreleaser job 2019-06-21 15:07:00 +03:00
stefanprodan
c74456411d CircleCI - install Tiller after Kind create 2019-06-21 14:54:17 +03:00
stefanprodan
31b3fcf906 CircleCI - refactor e2e tests 2019-06-21 14:43:00 +03:00
stefanprodan
767be5b6a8 CircleCI - reset go mod cache 2019-06-21 14:28:29 +03:00
stefanprodan
48834cd8d1 CircleCI - refactor Istio e2e testing 2019-06-21 14:26:12 +03:00
stefanprodan
f4bb0ea9c2 CircleCI - add codecov 2019-06-21 13:58:36 +03:00
stefanprodan
cf25a9a8a5 CircleCI - fix config 2019-06-21 13:47:06 +03:00
stefanprodan
4f0ad7a067 CircleCI - run Istio e2e 2019-06-21 13:45:16 +03:00
stefanprodan
c0fe461a9f CircleCI - push to Docker Hub 2019-06-21 13:33:26 +03:00
stefanprodan
1911143514 CircleCI - run go test 2019-06-21 13:29:32 +03:00
stefanprodan
9b67b360d0 CircleCI - build and push container 2019-06-21 13:26:12 +03:00
stefanprodan
991e01efd2 CircleCI - fix container build 2019-06-21 13:10:27 +03:00
stefanprodan
83b8ae46c9 CircleCI - copy bin to workspace dir 2019-06-21 13:07:59 +03:00
stefanprodan
c3b7aee063 CircleCI - make workspace dir 2019-06-21 13:05:16 +03:00
stefanprodan
66d662c085 CircleCI - fix working_directory 2019-06-21 13:01:47 +03:00
stefanprodan
4d5876fb76 CircleCI - fix job name 2019-06-21 13:00:26 +03:00
stefanprodan
7ca2558a81 CircleCI - fix config 2019-06-21 12:59:26 +03:00
stefanprodan
8957994c1a CircleCI - set job deps 2019-06-21 12:58:17 +03:00
stefanprodan
0147aea69b Build binary and container in CircleCI
Cache go modules
2019-06-21 12:55:27 +03:00
stefanprodan
b5f73d66ec Add version command 2019-06-21 12:54:43 +03:00
Stefan Prodan
6800181594 Merge pull request #217 from weaveworks/provider
Add the service mesh provider to the canary spec
2019-06-21 11:13:17 +03:00
Stefan Prodan
6f5f80a085 Merge pull request #216 from weaveworks/hpa-promotion
Reconcile the primary HPA on canary promotion
2019-06-21 11:13:02 +03:00
stefanprodan
fd23a2f98f Add kubernetes provider type
Synonym for provider `none`, to be used for blue/green deployments
2019-06-20 15:18:48 +03:00
stefanprodan
63cb8a5ba5 Lookup the canary provider field during reconciliation
Override the global provider if one is specified in the canary spec
2019-06-20 14:52:43 +03:00
stefanprodan
4a9e3182c6 Add the mesh provider field to canary CRD 2019-06-20 14:50:21 +03:00
stefanprodan
5cbc3df7b5 Use internal load testers address in canary example 2019-06-20 13:32:06 +03:00
stefanprodan
dcadc2303f Add HPA promotion tests 2019-06-20 13:31:34 +03:00
stefanprodan
cf5f364ed2 Update the primary HPA on canary promotion 2019-06-20 13:30:55 +03:00
Stefan Prodan
e45ace5d9b Merge pull request #211 from weaveworks/noprouter
Allow blue/green deployments without a service mesh provider
2019-06-20 13:02:02 +03:00
Marco Ferrer
6e7421b0d8 Upgrade e2e tests to Istio v1.2.0 2019-06-19 13:26:22 -04:00
stefanprodan
647d02890f Add HTTP metrics when no mesh provider is specified
Implement request-success-rate and request-duration checks using http_request_duration_seconds histogram
2019-06-19 13:15:17 +03:00
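As a rough illustration of what such checks can look like, the queries below build a success rate and a latency percentile from the `http_request_duration_seconds` histogram named in the commit. The label names (`namespace`, `app`, `status`) and the exact expressions are assumptions for illustration, not Flagger's actual queries.

```go
package main

import "fmt"

func main() {
	namespace, app := "test", "podinfo"

	// request success rate: percentage of non-5xx requests over the last minute
	successRate := fmt.Sprintf(
		`sum(rate(http_request_duration_seconds_count{namespace=%q,app=%q,status!~"5.."}[1m])) / sum(rate(http_request_duration_seconds_count{namespace=%q,app=%q}[1m])) * 100`,
		namespace, app, namespace, app)

	// request duration: 99th percentile latency over the last minute
	duration := fmt.Sprintf(
		`histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket{namespace=%q,app=%q}[1m])) by (le))`,
		namespace, app)

	fmt.Println(successRate)
	fmt.Println(duration)
}
```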
stefanprodan
7e72d23b60 Bump load tester version to 0.4.0 2019-06-19 13:12:04 +03:00
stefanprodan
9fada306f0 Add a service mesh provider of type none
To be used for Kubernetes blue/green deployments with the no-operations router
2019-06-19 12:02:40 +03:00
stefanprodan
8d1cc83405 Add a no-operation router
To be used for Kubernetes blue/green deployments (no service mesh or ingress controller)
2019-06-19 12:01:02 +03:00
Stefan Prodan
1979bc59d0 Merge pull request #210 from weaveworks/nop-router
Kubernetes service reconciliation improvements
2019-06-19 11:49:10 +03:00
stefanprodan
bf7ebc9708 Skip readiness check on init for Istio SMI 2019-06-19 11:16:11 +03:00
stefanprodan
dc3cde88d2 Use Helm to install Flagger for Istio e2e tests 2019-06-19 11:03:44 +03:00
stefanprodan
98beb1011e Skip primary check on init when using Istio
The deployment will become ready after the ClusterIP services are created
2019-06-19 10:50:55 +03:00
stefanprodan
8c59e9d2b4 Fix metrics URL getter 2019-06-19 10:30:19 +03:00
stefanprodan
9a87d47f45 Check primary readiness on initialisation
Wait for the primary to become ready before scaling down the canary in the init phase
2019-06-19 09:49:25 +03:00
stefanprodan
f25023ed1b Include selector in service reconciliation
- detect changes in the Kubernetes service selectors and ports
- preserve the immutable fields when updating the ClusterIP services
2019-06-18 17:57:00 +03:00
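Preserving the immutable fields when updating a ClusterIP Service usually means carrying them over from the live object onto the desired spec before calling Update. A generic sketch of that pattern follows (not Flagger's code; the API client wiring is omitted and the function name is made up).

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// mergeForUpdate takes the desired Service spec and carries over the fields
// that are immutable or server-assigned, so the update is not rejected.
func mergeForUpdate(existing, desired *corev1.Service) *corev1.Service {
	updated := desired.DeepCopy()
	// ClusterIP cannot change on an existing Service, and the resourceVersion
	// is required for optimistic concurrency on update
	updated.Spec.ClusterIP = existing.Spec.ClusterIP
	updated.ObjectMeta.ResourceVersion = existing.ObjectMeta.ResourceVersion
	return updated
}

func main() {
	existing := &corev1.Service{}
	existing.Spec.ClusterIP = "10.0.0.10"
	desired := &corev1.Service{}
	fmt.Println(mergeForUpdate(existing, desired).Spec.ClusterIP) // "10.0.0.10"
}
```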
stefanprodan
806b233d58 Fix typo in ClusterIP FAQ 2019-06-18 13:35:14 +03:00
Stefan Prodan
677ee8d639 Merge pull request #207 from weaveworks/port-discovery
Implement port discovery
2019-06-18 13:09:35 +03:00
stefanprodan
61ac8d7a8c Add port discovery to canary example 2019-06-18 12:46:21 +03:00
stefanprodan
278680b248 Add port discovery to changelog 2019-06-18 12:43:50 +03:00
stefanprodan
5e4a58a1c1 Upgrade e2e tests to Istio v1.1.9 2019-06-18 11:35:21 +03:00
stefanprodan
757b5ca22e Add missing config params to chart readme 2019-06-18 11:35:07 +03:00
stefanprodan
6d1da5bb45 Use container name in port discovery
If the port name is missing, append the container name to the tcp port name
2019-06-17 20:50:21 +03:00
stefanprodan
9ca79d147d Add Istio virtual service merging to FAQ 2019-06-17 12:06:53 +03:00
stefanprodan
37fcfe15bb Merge feature comparison table 2019-06-16 11:48:21 +03:00
stefanprodan
a9c7466359 Add pod affinity and label selectors to FAQ 2019-06-16 11:18:51 +03:00
stefanprodan
91a3f2c9a7 Add NGINX A/B testing convention to FAQ 2019-06-16 10:52:33 +03:00
stefanprodan
9aa341d088 Add load tester mTLS to FAQ
Ref: #186
2019-06-16 10:38:07 +03:00
stefanprodan
c9e09fa8eb Add Istio mTLS to FAQ
Fix: #205
2019-06-16 10:36:25 +03:00
stefanprodan
e6257b7531 Add port discovery to FAQ 2019-06-16 10:33:03 +03:00
stefanprodan
aee027c91c Add Kubernetes services to FAQ 2019-06-16 10:32:25 +03:00
stefanprodan
c106796751 Add A/B testing to FAQ 2019-06-16 10:31:25 +03:00
stefanprodan
42bd600482 Update GKE Prometheus config 2019-06-15 16:55:27 +03:00
stefanprodan
47ad81be5b Remove unused go modules 2019-06-15 16:54:52 +03:00
stefanprodan
88c450e3bd Implement port discovery
If port discovery is enabled, Flagger scans the deployment pod template and extracts the container ports, excluding the port specified in the canary service spec and the Istio proxy ports. All the extra ports will be used when generating the ClusterIP services.
2019-06-15 16:34:32 +03:00
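A simplified sketch of that scan: walk the containers in the pod template and collect every TCP port except the one already exposed by the canary service and the well-known sidecar ports. This is illustrative, not the actual implementation; the exclusion list for Istio proxy ports is an assumption.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// discoverPorts returns the extra container ports to expose on the generated
// ClusterIP services, skipping the canary port and known proxy ports.
func discoverPorts(containers []corev1.Container, canaryPort int32) []int32 {
	// illustrative exclusion list; the real set of Istio proxy ports may differ
	excluded := map[int32]bool{canaryPort: true, 15090: true}

	var ports []int32
	for _, c := range containers {
		for _, p := range c.Ports {
			if p.Protocol != "" && p.Protocol != corev1.ProtocolTCP {
				continue
			}
			if !excluded[p.ContainerPort] {
				ports = append(ports, p.ContainerPort)
			}
		}
	}
	return ports
}

func main() {
	containers := []corev1.Container{{
		Name:  "podinfo",
		Ports: []corev1.ContainerPort{{ContainerPort: 9898}, {ContainerPort: 9999}},
	}}
	fmt.Println(discoverPorts(containers, 9898)) // -> [9999]
}
```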
stefanprodan
2ebedd185c Add port discovery field to canary service spec 2019-06-15 16:18:54 +03:00
Stefan Prodan
0fdbef4cda Merge pull request #203 from weaveworks/prep-v0.15.0
Release v0.15.0
2019-06-12 16:50:58 +03:00
stefanprodan
68500dc579 Fix e2e helm install 2019-06-12 16:33:09 +03:00
stefanprodan
12a29f1939 Release v0.15.0 2019-06-12 15:38:56 +03:00
stefanprodan
9974968dee Update Istio e2e to 1.1.8 2019-06-12 14:46:29 +03:00
Stefan Prodan
f2eaa91c9c Merge pull request #202 from weaveworks/gomod
Switch to go mod from dep
2019-06-12 11:15:44 +03:00
Stefan Prodan
f117f72901 Merge pull request #200 from weaveworks/traffic-policy
Add support for Istio traffic policy
2019-06-12 11:15:23 +03:00
stefanprodan
5424126d3c Remove go mod from code gen script 2019-06-11 19:48:11 +03:00
stefanprodan
028933b635 Switch to go mod from dep 2019-06-11 19:37:36 +03:00
stefanprodan
678f79fc61 Revendor with go mod 2019-06-11 19:35:26 +03:00
stefanprodan
933c19fdf4 Add generated destination rules to docs 2019-06-11 11:08:05 +03:00
stefanprodan
d678c59285 Add traffic policy to docs 2019-06-07 14:17:29 +03:00
stefanprodan
2285bd210e Add traffic policy to canary service spec
Attach traffic policy to canary and primary destination rules
2019-06-07 13:58:59 +03:00
stefanprodan
cba6e5f811 Add Istio destination rule to RBAC 2019-06-07 13:32:34 +03:00
stefanprodan
3fa9f37192 Reconcile Istio destination rule
Remove port selector from virtual service destinations
Ignore the destination weight field when diffing the virtual service spec
2019-06-07 13:31:07 +03:00
stefanprodan
c243756802 Make Istio port selector optional 2019-06-07 13:22:39 +03:00
stefanprodan
27b1b882ea Add destination rule to Istio clientset 2019-06-07 11:52:51 +03:00
Stefan Prodan
2505cbfe15 Merge pull request #198 from weaveworks/release-v0.14.1
Release v0.14.1
2019-06-05 10:36:18 +03:00
stefanprodan
396452b7b6 Add changelog for v0.14.1 2019-06-05 10:28:40 +03:00
stefanprodan
76c82f48a4 Release v0.14.1 2019-06-05 10:28:13 +03:00
Stefan Prodan
948226dd4e Merge pull request #196 from weaveworks/helm-test-hook
Implement Helm and Bash pre-rollout hooks
2019-06-05 10:11:04 +03:00
stefanprodan
1c97fc86c9 Restrict Helm task to a single command 2019-06-05 09:40:18 +03:00
Stefan Prodan
00de7abfde Merge pull request #197 from Laci21/set-url-custom-path
Add ability to set Prometheus url with custom path without trailing "/"
2019-06-04 19:04:56 +03:00
László Bence Nagy
631d93b8d9 Add ability to set Prometheus url with custom path without trailing '/' 2019-06-04 17:31:27 +02:00
stefanprodan
2e38dbc565 Release test runner v0.4.0 2019-06-04 17:27:58 +03:00
stefanprodan
b122f7f71a Add integration tests to docs 2019-06-04 17:27:19 +03:00
stefanprodan
6101557000 Use the canary service as load testing target 2019-06-04 17:26:35 +03:00
stefanprodan
cdc66128a9 Add helm test pre-rollout example to docs 2019-06-04 16:15:48 +03:00
stefanprodan
eace3713ce Add helm test pre-rollout hook example to podinfo chart 2019-06-04 16:15:25 +03:00
stefanprodan
fd50c4b4b7 Add service account option to tester chart 2019-06-04 15:31:02 +03:00
stefanprodan
62a5f8c5d6 Log helm command before running it 2019-06-04 14:56:58 +03:00
stefanprodan
093cb24602 Run tester locally with docker 2019-06-04 14:02:28 +03:00
stefanprodan
4f63f7f9e4 Bump tester version to 0.4.0-beta.5 2019-06-04 14:01:53 +03:00
stefanprodan
9f359327f0 Add generic bash blocking command 2019-06-04 14:01:25 +03:00
stefanprodan
2bc8194d96 Prepend helm to command 2019-06-04 14:00:00 +03:00
stefanprodan
181d50b7b6 Add Helm tester deployment spec
To be deployed in kube-system namespace, uses tiller service account
2019-06-03 15:57:08 +03:00
stefanprodan
3ae995f55c Bump load tester version to v0.4.0-beta.2 2019-06-03 15:56:14 +03:00
stefanprodan
fbb37ad5e4 Add helm command type (blocking) to tester API
To be used as pre-rollout hook
2019-06-03 15:55:40 +03:00
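A blocking command type essentially means the tester runs the given helm command to completion and reports failure via the exit code, which keeps the pre-rollout hook pending until the tests finish. A hedged sketch of that behaviour (the command, release name, and timeout are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runBlocking executes a command and waits for it to finish, so the webhook
// response is only sent after the tests have passed or failed.
func runBlocking(timeout time.Duration, name string, args ...string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// e.g. a pre-rollout hook running `helm test` against the canary release
	out, err := runBlocking(5*time.Minute, "helm", "test", "podinfo")
	fmt.Println(out, err)
}
```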
stefanprodan
5cc3b905b4 Add Helm binary to load tester image 2019-06-03 15:54:06 +03:00
Stefan Prodan
abb8d946cc Merge pull request #194 from christian-posta/ceposta-fix-readme
Fix link to Gloo progressive delivery
2019-05-30 22:34:16 +02:00
Christian Posta
797316fc4d Fix link to Gloo progressive delivery 2019-05-30 12:37:59 -07:00
Stefan Prodan
beed6369a0 Merge pull request #190 from olga-mir/fix-promotion-usecase
Fix promoting canary when max weight is not a multiple of step
2019-05-23 15:13:50 +02:00
Olga Mirensky
9618d2ea0d Fix promoting canary when max weight is not a multiple of step 2019-05-23 10:18:19 +10:00
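The failure mode behind this fix is arithmetic: if the step weight does not divide the max weight evenly, the canary weight can step past the maximum and the promotion condition is never reached. Clamping the increment, as in this small illustration (not the actual patch), avoids that:

```go
package main

import "fmt"

// nextWeight advances the canary weight by stepWeight but never past maxWeight,
// so a max of 50 with a step of 20 still reaches 50 (20, 40, 50) and promotes.
func nextWeight(current, stepWeight, maxWeight int) int {
	next := current + stepWeight
	if next > maxWeight {
		return maxWeight
	}
	return next
}

func main() {
	w := 0
	for w < 50 {
		w = nextWeight(w, 20, 50)
		fmt.Println(w) // 20, 40, 50
	}
}
```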
Stefan Prodan
94e5bfc031 Merge pull request #188 from weaveworks/release-v0.14.0
Release v0.14.0
2019-05-21 14:05:04 +02:00
stefanprodan
bb620ad94a Release v0.14.0 changelog 2019-05-21 13:54:54 +02:00
stefanprodan
7c6d1c48a3 Release v0.14.0 2019-05-21 13:54:15 +02:00
Stefan Prodan
bd5d884c8b Merge pull request #187 from weaveworks/docs-smi
Flagger docs SMI
2019-05-21 13:34:19 +02:00
Stefan Prodan
1c06721c9a Merge pull request #185 from weaveworks/docs-gloo
Add Gloo ingress controller docs
2019-05-21 13:24:19 +02:00
stefanprodan
1e29e2c4eb Fix Grafana Prometheus URL 2019-05-19 10:34:36 +03:00
stefanprodan
88c39d7379 Add Gloo canary deployment docs and diagram 2019-05-17 15:07:43 +03:00
stefanprodan
da43a152ba Add Gloo canary deployment example 2019-05-17 13:15:53 +03:00
stefanprodan
ec63aa9999 Add Gloo custom resources to RBAC 2019-05-17 11:55:15 +03:00
Stefan Prodan
7b9df746ad Merge pull request #179 from yuval-k/gloo2
Add support for Gloo
2019-05-17 11:17:27 +03:00
Yuval Kohavi
52d93ddda2 fix router tests 2019-05-16 13:08:53 -04:00
Yuval Kohavi
eb0331f2bf fix tests 2019-05-16 12:48:03 -04:00
Yuval Kohavi
6a66a87a44 PR updates 2019-05-16 07:28:22 -04:00
stefanprodan
f3cc810948 Update Flagger image tag (fix latency check) 2019-05-15 20:31:25 +03:00
Stefan Prodan
12d84b2e24 Merge pull request #183 from weaveworks/metrics-fix
Fix Istio latency check
2019-05-15 20:24:15 +03:00
stefanprodan
58bde24ece Fix Istio request duration test 2019-05-15 20:10:27 +03:00
stefanprodan
5b3fd0efca Set Istio request duration to milliseconds 2019-05-15 20:01:27 +03:00
stefanprodan
ee6e39afa6 Add SMI tutorial 2019-05-15 17:37:29 +03:00
Yuval Kohavi
677b9d9197 gloo metrics 2019-05-14 17:48:13 -04:00
Yuval Kohavi
786c5aa93a Merge remote-tracking branch 'upstream/master' into gloo2 2019-05-14 10:26:57 -04:00
Stefan Prodan
fd44f1fabf Merge pull request #182 from weaveworks/linkerd-metrics
Fix Linkerd promql queries
2019-05-14 15:23:37 +03:00
Stefan Prodan
b20e0178e1 Merge pull request #180 from weaveworks/smi
Add support for SMI
2019-05-14 13:24:29 +03:00
stefanprodan
5a490abfdd Remove the mesh gateway from docs examples 2019-05-14 13:06:52 +03:00
stefanprodan
674c79da94 Fix Linkerd promql queries
- include all inbound traffic stats
2019-05-14 12:14:47 +03:00
stefanprodan
23ebb4235d merge metrics-v2 into smi 2019-05-14 09:53:42 +03:00
Stefan Prodan
b2500d0ccb Merge pull request #181 from weaveworks/metrics-v2
Refactor the metrics package
2019-05-14 09:49:24 +03:00
stefanprodan
ee500d83ac Add Linkerd observer implementation 2019-05-13 17:51:39 +03:00
stefanprodan
0032c14a78 Refactor metrics
- add observer interface with builtin metrics functions
- add metrics observer factory
- add prometheus client
- implement the observer interface for istio, envoy and nginx
- remove deprecated istio and app mesh metric aliases (istio_requests_total, istio_request_duration_seconds_bucket, envoy_cluster_upstream_rq, envoy_cluster_upstream_rq_time_bucket)
2019-05-13 17:34:08 +03:00
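Based on the bullet list in this commit message, the observer abstraction could look roughly like the interface below. The method names, factory, and provider keys are assumptions for illustration, not the actual Flagger API.

```go
package main

import (
	"fmt"
	"time"
)

// Interface is what the canary scheduler consumes: every mesh or ingress
// provider implements the builtin checks on top of its own Prometheus metrics.
type Interface interface {
	GetRequestSuccessRate(name, namespace string, interval time.Duration) (float64, error)
	GetRequestDuration(name, namespace string, interval time.Duration) (time.Duration, error)
}

// NewObserver picks an implementation per provider (istio, envoy, nginx, ...).
func NewObserver(provider string) (Interface, error) {
	switch provider {
	case "istio":
		return &istioObserver{}, nil
	default:
		return nil, fmt.Errorf("provider %s not supported", provider)
	}
}

type istioObserver struct{}

func (o *istioObserver) GetRequestSuccessRate(name, namespace string, interval time.Duration) (float64, error) {
	return 100, nil // placeholder: would run a PromQL query against istio_requests_total
}

func (o *istioObserver) GetRequestDuration(name, namespace string, interval time.Duration) (time.Duration, error) {
	return 50 * time.Millisecond, nil // placeholder: would query the latency histogram
}

func main() {
	obs, _ := NewObserver("istio")
	fmt.Println(obs.GetRequestSuccessRate("podinfo", "test", time.Minute))
}
```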
stefanprodan
8fd3e927b8 Merge branch 'master' into smi 2019-05-12 13:58:37 +03:00
stefanprodan
1902884b56 Release v0.13.2 2019-05-11 15:16:31 +03:00
Stefan Prodan
98d2805267 Merge pull request #178 from carlossg/issue-177
Fix #177 Do not copy labels from canary to primary deployment
2019-05-11 14:56:22 +03:00
Carlos Sanchez
24a74d3589 Fix #177 Do not copy labels from canary to primary deployment 2019-05-11 13:42:08 +02:00
stefanprodan
7fe273a21d Fix SMI cluster role binding 2019-05-11 14:08:58 +03:00
stefanprodan
bd817cc520 Run SMI Istio e2e tests 2019-05-11 14:00:53 +03:00
stefanprodan
eb856fda13 Add SMI Istio e2e tests 2019-05-11 13:46:24 +03:00
stefanprodan
d63f05c92e Add SMI group to RBAC 2019-05-11 13:45:32 +03:00
stefanprodan
8fde6bdb8a Add SMI Istio adapter deployment 2019-05-11 13:35:36 +03:00
stefanprodan
8148120421 Enable Istio checks for SMI-Istio adapter 2019-05-11 13:06:06 +03:00
stefanprodan
95b8840bf2 Add SMI traffic split to router 2019-05-11 13:05:19 +03:00
stefanprodan
0e8b1ef20f Generate the SMI TrafficSplit clientset 2019-05-11 12:49:23 +03:00
Yuval Kohavi
0fbf4dcdb2 add canary promotion 2019-05-10 20:16:21 -04:00
Yuval Kohavi
7aca9468ac re-enable helm 2019-05-10 19:48:22 -04:00
Yuval Kohavi
a6c0f08fcc add gloo to circle 2019-05-10 19:44:46 -04:00
Yuval Kohavi
9c1bcc08bb float -> percent 2019-05-10 19:21:08 -04:00
Yuval Kohavi
87e9dfe3d3 e2e test 2019-05-10 19:16:16 -04:00
Yuval Kohavi
d7be66743e Merge remote-tracking branch 'upstream/master' into gloo2 2019-05-10 10:38:14 -04:00
Stefan Prodan
15463456ec Merge pull request #176 from weaveworks/nginx-tests
Add nginx e2e and unit tests
2019-05-10 12:09:40 +03:00
stefanprodan
752eceed4b Add tests for ingress weight changes 2019-05-10 11:53:12 +03:00
stefanprodan
eadce34d6f Add ingress router unit tests 2019-05-10 11:39:52 +03:00
stefanprodan
11ccf34bbc Document the nginx e2e tests 2019-05-10 10:50:24 +03:00
stefanprodan
e308678ed5 Deploy ingress for nginx e2e tests 2019-05-10 10:40:38 +03:00
stefanprodan
cbe72f0aa2 Add ingress target to nginx e2e tests 2019-05-10 10:29:09 +03:00
stefanprodan
bc84e1c154 Fix typos 2019-05-10 10:24:47 +03:00
stefanprodan
344bd45a0e Add nginx e2e tests 2019-05-10 10:24:35 +03:00
stefanprodan
72014f736f Release v0.13.1 2019-05-09 14:29:42 +03:00
Stefan Prodan
0a2949b6ad Merge pull request #174 from weaveworks/fix-metrics
Fix NGINX promql and custom metrics checks
2019-05-09 14:22:30 +03:00
stefanprodan
2ff695ecfe Fix nginx metrics tests 2019-05-09 14:00:15 +03:00
stefanprodan
8d0b54e059 Add custom metrics to nginx docs 2019-05-09 13:51:37 +03:00
stefanprodan
121a65fad0 Fix nginx promql namespace selector 2019-05-09 13:50:47 +03:00
stefanprodan
ecaa203091 Fix custom metric checks
- escape the prom query before encoding it
2019-05-09 13:49:48 +03:00
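The fix amounts to URL-escaping the PromQL expression before it is placed in the query string, otherwise characters like braces, quotes, and operators corrupt the request. A minimal illustration (the query itself is an example, not one from the docs):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	promURL := "http://prometheus:9090"
	query := `sum(rate(http_requests_total{status!~"5.."}[1m])) * 100`

	// escape the expression before building the /api/v1/query request,
	// so it survives the round trip intact
	u := fmt.Sprintf("%s/api/v1/query?query=%s", promURL, url.QueryEscape(query))
	fmt.Println(u)
}
```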
Stefan Prodan
6d0e3c6468 Merge pull request #173 from weaveworks/release-v0.13.0
Prepare release v0.13.0
2019-05-08 20:52:18 +03:00
stefanprodan
c933476fff Bump Grafana chart version 2019-05-08 20:26:40 +03:00
stefanprodan
1335210cf5 Add the Prometheus add-on to App Mesh docs 2019-05-08 19:03:53 +03:00
stefanprodan
9d12794600 Add NGINX to readme 2019-05-08 18:30:00 +03:00
stefanprodan
d57fc7d03e Add v0.13.0 change log 2019-05-08 18:05:58 +03:00
stefanprodan
1f9f6fb55a Release v0.13.0 2019-05-08 18:05:47 +03:00
Stefan Prodan
948df55de3 Merge pull request #170 from weaveworks/nginx
Add support for nginx ingress controller
2019-05-08 17:44:29 +03:00
stefanprodan
8914f26754 Add nginx docs 2019-05-08 17:03:36 +03:00
stefanprodan
79b3370892 Add Prometheus add-on to Flagger chart 2019-05-08 15:44:28 +03:00
stefanprodan
a233b99f0b Add HPA to nginx demo 2019-05-07 11:12:36 +03:00
stefanprodan
0d94c01678 Toggle canary annotation based on weight 2019-05-07 11:10:19 +03:00
stefanprodan
00151e92fe Implement A/B testing for nginx ingress 2019-05-07 10:33:40 +03:00
stefanprodan
f7db0210ea Add nginx ingress controller checks 2019-05-06 18:43:02 +03:00
stefanprodan
cf3ba35fb9 Add nginx ingress controller metrics 2019-05-06 18:42:31 +03:00
stefanprodan
177dc824e3 Implement nginx ingress router 2019-05-06 18:42:02 +03:00
stefanprodan
5f544b90d6 Log mesh provider at startup 2019-05-06 18:41:04 +03:00
stefanprodan
921ac00383 Add ingress ref to CRD and RBAC 2019-05-06 18:33:00 +03:00
Stefan Prodan
7df7218978 Merge pull request #168 from scranton/supergloo
Fix and clarify SuperGloo installation docs
2019-05-06 11:33:40 +03:00
Scott Cranton
e4c6903a01 Fix and clarify SuperGloo installation docs
Added missing `=` for --version, and added brew and helm install options
2019-05-05 15:42:06 -04:00
Stefan Prodan
027342dc72 Merge pull request #167 from weaveworks/grafana-fix
Change dashboard selector to destination workload
2019-05-04 09:03:57 +03:00
stefanprodan
e17a747785 Change dashboard selector to destination workload 2019-05-03 19:32:29 +03:00
Stefan Prodan
e477b37bd0 Merge pull request #162 from weaveworks/fix-vs
Fix duplicate hosts error when using wildcard
2019-05-02 19:17:52 +03:00
Stefan Prodan
ad25068375 Merge pull request #160 from aackerman/patch-1
Update default image repo in flagger chart readme to be weaveworks
2019-05-02 19:17:38 +03:00
stefanprodan
c92230c109 Fix duplicate hosts error when using wildcard 2019-05-02 19:05:54 +03:00
Stefan Prodan
9e082d9ee3 Update charts/flagger/README.md
Co-Authored-By: aackerman <theron17@gmail.com>
2019-05-02 11:05:43 -05:00
Aaron Ackerman
cfd610ac55 Update default image repo in flagger chart readme to be weaveworks 2019-05-02 07:18:00 -05:00
stefanprodan
82067f13bf Add GitOps diagram 2019-05-01 13:09:18 +03:00
Yuval Kohavi
350efb2bfe gloo upstream group support 2019-04-23 07:47:50 -04:00
3139 changed files with 7572 additions and 1014527 deletions

@@ -1,38 +0,0 @@
version: 2.1
jobs:
  e2e-testing:
    machine: true
    steps:
      - checkout
      - run: test/e2e-kind.sh
      - run: test/e2e-istio.sh
      - run: test/e2e-build.sh
      - run: test/e2e-tests.sh
  e2e-supergloo-testing:
    machine: true
    steps:
      - checkout
      - run: test/e2e-kind.sh
      - run: test/e2e-supergloo.sh
      - run: test/e2e-build.sh supergloo:test.supergloo-system
      - run: test/e2e-tests.sh canary
workflows:
  version: 2
  build-and-test:
    jobs:
      - e2e-testing:
          filters:
            branches:
              ignore:
                - /gh-pages.*/
                - /docs-.*/
                - /release-.*/
      - e2e-supergloo-testing:
          filters:
            branches:
              ignore:
                - /gh-pages.*/
                - /docs-.*/
                - /release-.*/

@@ -1,11 +0,0 @@
coverage:
  status:
    project:
      default:
        target: auto
        threshold: 50
        base: auto
    patch: off
comment:
  require_changes: yes

@@ -1 +0,0 @@
root: ./docs/gitbook

1
.github/CODEOWNERS vendored
@@ -1 +0,0 @@
* @stefanprodan

17
.github/main.workflow vendored
@@ -1,17 +0,0 @@
workflow "Publish Helm charts" {
on = "push"
resolves = ["helm-push"]
}
action "helm-lint" {
uses = "stefanprodan/gh-actions/helm@master"
args = ["lint charts/*"]
}
action "helm-push" {
needs = ["helm-lint"]
uses = "stefanprodan/gh-actions/helm-gh-pages@master"
args = ["charts/*","https://flagger.app"]
secrets = ["GITHUB_TOKEN"]
}

53
.github/workflows/website.yaml vendored Normal file
@@ -0,0 +1,53 @@
name: website
on:
  push:
    branches:
      - website
jobs:
  vuepress:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v5
      - uses: actions/setup-node@v5
        with:
          node-version: '12.x'
      - id: yarn-cache-dir-path
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - uses: actions/cache@v4
        id: yarn-cache
        with:
          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Build
        run: |
          yarn add -D vuepress
          yarn docs:build
          mkdir $HOME/site
          rsync -avzh docs/.vuepress/dist/ $HOME/site
      - name: Publish
        env:
          REPOSITORY: "https://x-access-token:${{ secrets.BOT_GITHUB_TOKEN }}@github.com/fluxcd/flagger.git"
        run: |
          tmpDir=$(mktemp -d)
          pushd $tmpDir >& /dev/null
          git clone ${REPOSITORY}
          cd flagger
          git config user.name ${{ github.actor }}
          git config user.email ${{ github.actor }}@users.noreply.github.com
          git remote set-url origin ${REPOSITORY}
          git checkout gh-pages
          rm -rf assets
          rsync -avzh $HOME/site/ .
          rm -rf node_modules
          git add .
          git commit -m "Publish website"
          git push origin gh-pages
          popd >& /dev/null
          rm -rf $tmpDir

74
.gitignore vendored
@@ -1,17 +1,65 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Test binary, build with `go test -c`
*.test
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
.DS_Store
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
# nyc test coverage
.nyc_output
# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# TypeScript v1 declaration files
typings/
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
# next.js build output
.next
bin/
artifacts/gcloud/
.idea
docs/.vuepress/dist/
Makefile.dev

@@ -1,14 +0,0 @@
builds:
  - main: ./cmd/flagger
    binary: flagger
    ldflags: -s -w -X github.com/weaveworks/flagger/pkg/version.REVISION={{.Commit}}
    goos:
      - linux
    goarch:
      - amd64
    env:
      - CGO_ENABLED=0
archive:
  name_template: "{{ .Binary }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
  files:
    - none*

@@ -1,50 +0,0 @@
sudo: required
language: go
branches:
  except:
    - /gh-pages.*/
    - /docs-.*/
go:
  - 1.12.x
services:
  - docker
addons:
  apt:
    packages:
      - docker-ce
script:
  - make test-fmt
  - make test-codegen
  - go test -race -coverprofile=coverage.txt -covermode=atomic $(go list ./pkg/...)
  - make build
after_success:
  - if [ -z "$DOCKER_USER" ]; then
      echo "PR build, skipping image push";
    else
      BRANCH_COMMIT=${TRAVIS_BRANCH}-$(echo ${TRAVIS_COMMIT} | head -c7);
      docker tag weaveworks/flagger:latest weaveworks/flagger:${BRANCH_COMMIT};
      echo $DOCKER_PASS | docker login -u=$DOCKER_USER --password-stdin;
      docker push weaveworks/flagger:${BRANCH_COMMIT};
    fi
  - if [ -z "$TRAVIS_TAG" ]; then
      echo "Not a release, skipping image push";
    else
      docker tag weaveworks/flagger:latest weaveworks/flagger:${TRAVIS_TAG};
      echo $DOCKER_PASS | docker login -u=$DOCKER_USER --password-stdin;
      docker push weaveworks/flagger:$TRAVIS_TAG;
    fi
  - bash <(curl -s https://codecov.io/bash)
  - rm coverage.txt
deploy:
  - provider: script
    skip_cleanup: true
    script: curl -sL http://git.io/goreleaser | bash
    on:
      tags: true

@@ -1,245 +0,0 @@
# Changelog
All notable changes to this project are documented in this file.
## 0.12.0 (2019-04-29)
Adds support for [SuperGloo](https://docs.flagger.app/install/flagger-install-with-supergloo)
#### Features
- Supergloo support for canary deployment (weighted traffic) [#151](https://github.com/weaveworks/flagger/pull/151)
## 0.11.1 (2019-04-18)
Move Flagger and the load tester container images to Docker Hub
#### Features
- Add Bash Automated Testing System support to Flagger tester for running acceptance tests as pre-rollout hooks
## 0.11.0 (2019-04-17)
Adds pre/post rollout [webhooks](https://docs.flagger.app/how-it-works#webhooks)
#### Features
- Add `pre-rollout` and `post-rollout` webhook types [#147](https://github.com/weaveworks/flagger/pull/147)
#### Improvements
- Unify App Mesh and Istio builtin metric checks [#146](https://github.com/weaveworks/flagger/pull/146)
- Make the pod selector label configurable [#148](https://github.com/weaveworks/flagger/pull/148)
#### Breaking changes
- Set default `mesh` Istio gateway only if no gateway is specified [#141](https://github.com/weaveworks/flagger/pull/141)
## 0.10.0 (2019-03-27)
Adds support for App Mesh
#### Features
- AWS App Mesh integration
[#107](https://github.com/weaveworks/flagger/pull/107)
[#123](https://github.com/weaveworks/flagger/pull/123)
#### Improvements
- Reconcile Kubernetes ClusterIP services [#122](https://github.com/weaveworks/flagger/pull/122)
#### Fixes
- Preserve pod labels on canary promotion [#105](https://github.com/weaveworks/flagger/pull/105)
- Fix canary status Prometheus metric [#121](https://github.com/weaveworks/flagger/pull/121)
## 0.9.0 (2019-03-11)
Allows A/B testing scenarios where instead of weighted routing, the traffic is split between the
primary and canary based on HTTP headers or cookies.
#### Features
- A/B testing - canary with session affinity [#88](https://github.com/weaveworks/flagger/pull/88)
#### Fixes
- Update the analysis interval when the custom resource changes [#91](https://github.com/weaveworks/flagger/pull/91)
## 0.8.0 (2019-03-06)
Adds support for CORS policy and HTTP request headers manipulation
#### Features
- CORS policy support [#83](https://github.com/weaveworks/flagger/pull/83)
- Allow headers to be appended to HTTP requests [#82](https://github.com/weaveworks/flagger/pull/82)
#### Improvements
- Refactor the routing management
[#72](https://github.com/weaveworks/flagger/pull/72)
[#80](https://github.com/weaveworks/flagger/pull/80)
- Fine-grained RBAC [#73](https://github.com/weaveworks/flagger/pull/73)
- Add option to limit Flagger to a single namespace [#78](https://github.com/weaveworks/flagger/pull/78)
## 0.7.0 (2019-02-28)
Adds support for custom metric checks, HTTP timeouts and HTTP retries
#### Features
- Allow custom promql queries in the canary analysis spec [#60](https://github.com/weaveworks/flagger/pull/60)
- Add HTTP timeout and retries to canary service spec [#62](https://github.com/weaveworks/flagger/pull/62)
## 0.6.0 (2019-02-25)
Allows for [HTTPMatchRequests](https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPMatchRequest)
and [HTTPRewrite](https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPRewrite)
to be customized in the service spec of the canary custom resource.
#### Features
- Add HTTP match conditions and URI rewrite to the canary service spec [#55](https://github.com/weaveworks/flagger/pull/55)
- Update virtual service when the canary service spec changes
[#54](https://github.com/weaveworks/flagger/pull/54)
[#51](https://github.com/weaveworks/flagger/pull/51)
#### Improvements
- Run e2e testing on [Kubernetes Kind](https://github.com/kubernetes-sigs/kind) for canary promotion
[#53](https://github.com/weaveworks/flagger/pull/53)
## 0.5.1 (2019-02-14)
Allows skipping the analysis phase to ship changes directly to production
#### Features
- Add option to skip the canary analysis [#46](https://github.com/weaveworks/flagger/pull/46)
#### Fixes
- Reject deployment if the pod label selector doesn't match `app: <DEPLOYMENT_NAME>` [#43](https://github.com/weaveworks/flagger/pull/43)
## 0.5.0 (2019-01-30)
Track changes in ConfigMaps and Secrets [#37](https://github.com/weaveworks/flagger/pull/37)
#### Features
- Promote configmaps and secrets changes from canary to primary
- Detect changes in configmaps and/or secrets and (re)start canary analysis
- Add configs checksum to Canary CRD status
- Create primary configmaps and secrets at bootstrap
- Scan canary volumes and containers for configmaps and secrets
#### Fixes
- Copy deployment labels from canary to primary at bootstrap and promotion
## 0.4.1 (2019-01-24)
Load testing webhook [#35](https://github.com/weaveworks/flagger/pull/35)
#### Features
- Add the load tester chart to Flagger Helm repository
- Implement a load test runner based on [rakyll/hey](https://github.com/rakyll/hey)
- Log warning when no values are found for Istio metric due to lack of traffic
#### Fixes
- Run webhooks before the metrics checks to avoid failures when using a load tester
## 0.4.0 (2019-01-18)
Restart canary analysis if revision changes [#31](https://github.com/weaveworks/flagger/pull/31)
#### Breaking changes
- Drop support for Kubernetes 1.10
#### Features
- Detect changes during canary analysis and reset advancement
- Add status and additional printer columns to CRD
- Add canary name and namespace to controller structured logs
#### Fixes
- Allow canary name to be different to the target name
- Check if multiple canaries have the same target and log error
- Use deep copy when updating Kubernetes objects
- Skip readiness checks if canary analysis has finished
## 0.3.0 (2019-01-11)
Configurable canary analysis duration [#20](https://github.com/weaveworks/flagger/pull/20)
#### Breaking changes
- Helm chart: flag `controlLoopInterval` has been removed
#### Features
- CRD: canaries.flagger.app v1alpha3
- Schedule canary analysis independently based on `canaryAnalysis.interval`
- Add analysis interval to Canary CRD (defaults to one minute)
- Make autoscaler (HPA) reference optional
## 0.2.0 (2019-01-04)
Webhooks [#18](https://github.com/weaveworks/flagger/pull/18)
#### Features
- CRD: canaries.flagger.app v1alpha2
- Implement canary external checks based on webhooks HTTP POST calls
- Add webhooks to Canary CRD
- Move docs to gitbook [docs.flagger.app](https://docs.flagger.app)
## 0.1.2 (2018-12-06)
Improve Slack notifications [#14](https://github.com/weaveworks/flagger/pull/14)
#### Features
- Add canary analysis metadata to init and start Slack messages
- Add rollback reason to failed canary Slack messages
## 0.1.1 (2018-11-28)
Canary progress deadline [#10](https://github.com/weaveworks/flagger/pull/10)
#### Features
- Rollback canary based on the deployment progress deadline check
- Add progress deadline to Canary CRD (defaults to 10 minutes)
## 0.1.0 (2018-11-25)
First stable release
#### Features
- CRD: canaries.flagger.app v1alpha1
- Notifications: post canary events to Slack
- Instrumentation: expose Prometheus metrics for canary status and traffic weight percentage
- Autoscaling: add HPA reference to CRD and create primary HPA at bootstrap
- Bootstrap: create primary deployment, ClusterIP services and Istio virtual service based on CRD spec
## 0.0.1 (2018-10-07)
Initial semver release
#### Features
- Implement canary rollback based on failed checks threshold
- Scale up the deployment when canary revision changes
- Add OpenAPI v3 schema validation to Canary CRD
- Use CRD status for canary state persistence
- Add Helm charts for Flagger and Grafana
- Add canary analysis Grafana dashboard

View File

@@ -1,72 +0,0 @@
# How to Contribute
Flagger is [Apache 2.0 licensed](LICENSE) and accepts contributions via GitHub
pull requests. This document outlines some of the conventions on development
workflow, commit message formatting, contact points and other resources to make
it easier to get your contribution accepted.
We gratefully welcome improvements to documentation as well as to code.
## Certificate of Origin
By contributing to this project you agree to the Developer Certificate of
Origin (DCO). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution.
## Chat
The project uses Slack: To join the conversation, simply join the
[Weave community](https://slack.weave.works/) Slack workspace.
## Getting Started
- Fork the repository on GitHub
- If you want to contribute as a developer, continue reading this document for further instructions
- If you have questions or concerns, get stuck, or need a hand, let us know
on the Slack channel. We are happy to help and look forward to having
you as part of the team, in whatever capacity.
- Play with the project, submit bugs, submit pull requests!
## Contribution workflow
This is a rough outline of how to prepare a contribution:
- Create a topic branch from where you want to base your work (usually branched from master).
- Make commits of logical units.
- Make sure your commit messages are in the proper format (see below).
- Push your changes to a topic branch in your fork of the repository.
- If you changed code:
- add automated tests to cover your changes
- Submit a pull request to the original repository.
## Acceptance policy
These things will make a PR more likely to be accepted:
- a well-described requirement
- new code and tests follow the conventions in old code and tests
- a good commit message (see below)
- All code must abide by [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
- Names should abide by [What's in a name](https://talks.golang.org/2014/names.slide#1)
- Code must build on both Linux and Darwin, via plain `go build`
- Code should have appropriate test coverage and tests should be written
to work with `go test`
In general, we will merge a PR once one maintainer has endorsed it.
For substantial changes, more people may become involved, and you might
get asked to resubmit the PR or divide the changes into more than one PR.
### Format of the Commit Message
For Flagger we prefer the following rules for good commit messages:
- Limit the subject to 50 characters and write as the continuation
of the sentence "If applied, this commit will ..."
- Explain what and why in the body, if more than a trivial change;
wrap it at 72 characters.
The [following article](https://chris.beams.io/posts/git-commit/#seven-rules)
has some more helpful advice on documenting your work.
This doc is adapted from the [Weaveworks Flux](https://github.com/weaveworks/flux/blob/master/CONTRIBUTING.md) contributing guide.

View File

@@ -1,29 +0,0 @@
FROM golang:1.12
RUN mkdir -p /go/src/github.com/weaveworks/flagger/
WORKDIR /go/src/github.com/weaveworks/flagger
COPY . .
RUN GIT_COMMIT=$(git rev-list -1 HEAD) && \
CGO_ENABLED=0 GOOS=linux go build -ldflags "-s -w \
-X github.com/weaveworks/flagger/pkg/version.REVISION=${GIT_COMMIT}" \
-a -installsuffix cgo -o flagger ./cmd/flagger/*
FROM alpine:3.9
RUN addgroup -S flagger \
&& adduser -S -g flagger flagger \
&& apk --no-cache add ca-certificates
WORKDIR /home/flagger
COPY --from=0 /go/src/github.com/weaveworks/flagger/flagger .
RUN chown -R flagger:flagger ./
USER flagger
ENTRYPOINT ["./flagger"]

View File

@@ -1,30 +0,0 @@
FROM golang:1.12 AS builder
RUN mkdir -p /go/src/github.com/weaveworks/flagger/
WORKDIR /go/src/github.com/weaveworks/flagger
COPY . .
RUN go test -race ./pkg/loadtester/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o loadtester ./cmd/loadtester/*
FROM bats/bats:v1.1.0
RUN addgroup -S app \
&& adduser -S -g app app \
&& apk --no-cache add ca-certificates curl jq
WORKDIR /home/app
RUN curl -sSLo hey "https://storage.googleapis.com/jblabs/dist/hey_linux_v0.1.2" && \
chmod +x hey && mv hey /usr/local/bin/hey
COPY --from=builder /go/src/github.com/weaveworks/flagger/loadtester .
RUN chown -R app:app ./
USER app
ENTRYPOINT ["./loadtester"]

1195
Gopkg.lock generated

File diff suppressed because it is too large

View File

@@ -1,74 +0,0 @@
required = [
"k8s.io/apimachinery/pkg/util/sets/types",
"k8s.io/code-generator/cmd/deepcopy-gen",
"k8s.io/code-generator/cmd/defaulter-gen",
"k8s.io/code-generator/cmd/client-gen",
"k8s.io/code-generator/cmd/lister-gen",
"k8s.io/code-generator/cmd/informer-gen",
]
[[constraint]]
name = "go.uber.org/zap"
version = "v1.9.1"
[[constraint]]
name = "gopkg.in/h2non/gock.v1"
version = "v1.0.14"
[[override]]
name = "gopkg.in/yaml.v2"
version = "v2.2.1"
[[override]]
name = "k8s.io/api"
version = "kubernetes-1.13.1"
[[override]]
name = "k8s.io/apimachinery"
version = "kubernetes-1.13.1"
[[override]]
name = "k8s.io/code-generator"
version = "kubernetes-1.13.1"
[[override]]
name = "k8s.io/client-go"
version = "kubernetes-1.13.1"
[[override]]
name = "k8s.io/apiextensions-apiserver"
version = "kubernetes-1.13.1"
[[override]]
name = "k8s.io/apiserver"
version = "kubernetes-1.13.1"
[[constraint]]
name = "github.com/prometheus/client_golang"
version = "v0.8.0"
[[constraint]]
name = "github.com/google/go-cmp"
version = "v0.2.0"
[[override]]
name = "k8s.io/klog"
source = "github.com/stefanprodan/klog"
[prune]
go-tests = true
unused-packages = true
non-go = true
[[prune.project]]
name = "k8s.io/code-generator"
unused-packages = false
non-go = false
[[constraint]]
name = "github.com/solo-io/supergloo"
version = "v0.3.11"
[[constraint]]
name = "github.com/solo-io/solo-kit"
version = "v0.6.3"

View File

@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 Weaveworks. All rights reserved.
Copyright 2019 Weaveworks
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -1,5 +0,0 @@
The maintainers are generally available in Slack at
https://weave-community.slack.com/messages/flagger/ (obtain an invitation
at https://slack.weave.works/).
Stefan Prodan, Weaveworks <stefan@weave.works> (Slack: @stefan Twitter: @stefanprodan)

View File

@@ -1,93 +0,0 @@
TAG?=latest
VERSION?=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"')
VERSION_MINOR:=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"' | rev | cut -d'.' -f2- | rev)
PATCH:=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"' | awk -F. '{print $$NF}')
SOURCE_DIRS = cmd pkg/apis pkg/controller pkg/server pkg/logging pkg/version
LT_VERSION?=$(shell grep 'VERSION' cmd/loadtester/main.go | awk '{ print $$4 }' | tr -d '"' | head -n1)
TS=$(shell date +%Y-%m-%d_%H-%M-%S)
run:
go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info \
-metrics-server=https://prometheus.istio.weavedx.com \
-slack-url=https://hooks.slack.com/services/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 \
-slack-channel="devops-alerts"
run-appmesh:
go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=appmesh \
-metrics-server=http://acfc235624ca911e9a94c02c4171f346-1585187926.us-west-2.elb.amazonaws.com:9090 \
-slack-url=https://hooks.slack.com/services/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 \
-slack-channel="devops-alerts"
build:
docker build -t weaveworks/flagger:$(TAG) . -f Dockerfile
push:
docker tag weaveworks/flagger:$(TAG) weaveworks/flagger:$(VERSION)
docker push weaveworks/flagger:$(VERSION)
fmt:
gofmt -l -s -w $(SOURCE_DIRS)
test-fmt:
gofmt -l -s $(SOURCE_DIRS) | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
test-codegen:
./hack/verify-codegen.sh
test: test-fmt test-codegen
go test ./...
helm-package:
cd charts/ && helm package ./*
mv charts/*.tgz docs/
helm repo index docs --url https://weaveworks.github.io/flagger --merge ./docs/index.yaml
helm-up:
helm upgrade --install flagger ./charts/flagger --namespace=istio-system --set crd.create=false
helm upgrade --install flagger-grafana ./charts/grafana --namespace=istio-system
version-set:
@next="$(TAG)" && \
current="$(VERSION)" && \
sed -i '' "s/$$current/$$next/g" pkg/version/version.go && \
sed -i '' "s/flagger:$$current/flagger:$$next/g" artifacts/flagger/deployment.yaml && \
sed -i '' "s/tag: $$current/tag: $$next/g" charts/flagger/values.yaml && \
sed -i '' "s/appVersion: $$current/appVersion: $$next/g" charts/flagger/Chart.yaml && \
sed -i '' "s/version: $$current/version: $$next/g" charts/flagger/Chart.yaml && \
echo "Version $$next set in code, deployment and charts"
version-up:
@next="$(VERSION_MINOR).$$(($(PATCH) + 1))" && \
current="$(VERSION)" && \
sed -i '' "s/$$current/$$next/g" pkg/version/version.go && \
sed -i '' "s/flagger:$$current/flagger:$$next/g" artifacts/flagger/deployment.yaml && \
sed -i '' "s/tag: $$current/tag: $$next/g" charts/flagger/values.yaml && \
sed -i '' "s/appVersion: $$current/appVersion: $$next/g" charts/flagger/Chart.yaml && \
echo "Version $$next set in code, deployment and chart"
dev-up: version-up
@echo "Starting build/push/deploy pipeline for $(VERSION)"
docker build -t quay.io/stefanprodan/flagger:$(VERSION) . -f Dockerfile
docker push quay.io/stefanprodan/flagger:$(VERSION)
kubectl apply -f ./artifacts/flagger/crd.yaml
helm upgrade -i flagger ./charts/flagger --namespace=istio-system --set crd.create=false
release:
git tag $(VERSION)
git push origin $(VERSION)
release-set: fmt version-set helm-package
git add .
git commit -m "Release $(VERSION)"
git push origin master
git tag $(VERSION)
git push origin $(VERSION)
reset-test:
kubectl delete -f ./artifacts/namespaces
kubectl apply -f ./artifacts/namespaces
kubectl apply -f ./artifacts/canaries
loadtester-push:
docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
docker push weaveworks/flagger-loadtester:$(LT_VERSION)

195
README.md
View File

@@ -1,195 +1,4 @@
# flagger
# flagger website
[![build](https://travis-ci.org/weaveworks/flagger.svg?branch=master)](https://travis-ci.org/weaveworks/flagger)
[![report](https://goreportcard.com/badge/github.com/weaveworks/flagger)](https://goreportcard.com/report/github.com/weaveworks/flagger)
[![codecov](https://codecov.io/gh/weaveworks/flagger/branch/master/graph/badge.svg)](https://codecov.io/gh/weaveworks/flagger)
[![license](https://img.shields.io/github/license/weaveworks/flagger.svg)](https://github.com/weaveworks/flagger/blob/master/LICENSE)
[![release](https://img.shields.io/github/release/weaveworks/flagger/all.svg)](https://github.com/weaveworks/flagger/releases)
[flagger.app](https://flagger.app)
Flagger is a Kubernetes operator that automates the promotion of canary deployments
using Istio or App Mesh routing for traffic shifting and Prometheus metrics for canary analysis.
The canary analysis can be extended with webhooks for running acceptance tests,
load tests or any other custom validation.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance
indicators like HTTP request success rate, average request duration and pod health.
Based on the analysis of the KPIs, a canary is promoted or aborted, and the analysis result is published to Slack.
![flagger-overview](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-overview.png)
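A minimal Go sketch of that control loop, assuming the `stepWeight`/`maxWeight`/`threshold` semantics shown in the Canary example further down — it illustrates the weight-stepping and rollback logic, not Flagger's actual implementation:
```go
package main

import (
	"errors"
	"fmt"
)

// runAnalysis is a sketch of the control loop described above: traffic is
// shifted to the canary in stepWeight increments up to maxWeight, and the
// rollout is aborted once the number of failed checks reaches the threshold.
func runAnalysis(stepWeight, maxWeight, threshold int, checksPass func() bool) error {
	weight, failed := 0, 0
	for weight < maxWeight {
		if !checksPass() {
			failed++
			if failed >= threshold {
				// scale the canary down and restore the primary routing here
				return errors.New("rollback: failed checks threshold reached")
			}
			continue // hold the current weight and retry on the next interval
		}
		weight += stepWeight
		fmt.Printf("routing %d%% of traffic to the canary\n", weight)
	}
	// promotion: copy the canary spec over the primary and reset the routing
	fmt.Println("canary promoted")
	return nil
}

func main() {
	call := 0
	_ = runAnalysis(5, 50, 10, func() bool { call++; return call%4 != 0 })
}
```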
## Documentation
Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app)
* Install
* [Flagger install on Kubernetes](https://docs.flagger.app/install/flagger-install-on-kubernetes)
* [Flagger install on GKE Istio](https://docs.flagger.app/install/flagger-install-on-google-cloud)
* [Flagger install on EKS App Mesh](https://docs.flagger.app/install/flagger-install-on-eks-appmesh)
* [Flagger install with SuperGloo](https://docs.flagger.app/install/flagger-install-with-supergloo)
* How it works
* [Canary custom resource](https://docs.flagger.app/how-it-works#canary-custom-resource)
* [Routing](https://docs.flagger.app/how-it-works#istio-routing)
* [Canary deployment stages](https://docs.flagger.app/how-it-works#canary-deployment)
* [Canary analysis](https://docs.flagger.app/how-it-works#canary-analysis)
* [HTTP metrics](https://docs.flagger.app/how-it-works#http-metrics)
* [Custom metrics](https://docs.flagger.app/how-it-works#custom-metrics)
* [Webhooks](https://docs.flagger.app/how-it-works#webhooks)
* [Load testing](https://docs.flagger.app/how-it-works#load-testing)
* Usage
* [Istio canary deployments](https://docs.flagger.app/usage/progressive-delivery)
* [Istio A/B testing](https://docs.flagger.app/usage/ab-testing)
* [App Mesh canary deployments](https://docs.flagger.app/usage/appmesh-progressive-delivery)
* [Monitoring](https://docs.flagger.app/usage/monitoring)
* [Alerting](https://docs.flagger.app/usage/alerting)
* Tutorials
* [Canary deployments with Helm charts and Weave Flux](https://docs.flagger.app/tutorials/canary-helm-gitops)
## Canary CRD
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services and Istio or App Mesh virtual services).
These objects expose the application on the mesh and drive the canary analysis and promotion.
Flagger keeps track of ConfigMaps and Secrets referenced by a Kubernetes Deployment and triggers a canary analysis if any of those objects change.
When a workload is promoted to production, both code (container images) and configuration (ConfigMaps and Secrets) are synchronised.
For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
# Istio virtual service host names (optional)
hosts:
- podinfo.example.com
# HTTP match conditions (optional)
match:
- uri:
prefix: /
# HTTP rewrite (optional)
rewrite:
uri: /
# Envoy timeout and retry policy (optional)
headers:
request:
add:
x-envoy-upstream-rq-timeout-ms: "15000"
x-envoy-max-retries: "10"
x-envoy-retry-on: "gateway-error,connect-failure,refused-stream"
# cross-origin resource sharing policy (optional)
corsPolicy:
allowOrigin:
- example.com
# promote the canary without analysing it (default false)
skipAnalysis: false
# define the canary analysis timing and KPIs
canaryAnalysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# Istio Prometheus checks
metrics:
# builtin checks
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# custom check
- name: "kafka lag"
threshold: 100
query: |
avg_over_time(
kafka_consumergroup_lag{
consumergroup=~"podinfo-consumer-.*",
topic="podinfo"
}[1m]
)
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
```
For more details on how the canary analysis and promotion work, please [read the docs](https://docs.flagger.app/how-it-works).
## Features
| Feature | Istio | App Mesh | SuperGloo |
| -------------------------------------------- | ------------------ | ------------------ |------------------ |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies filters) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Load testing | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Webhooks (custom acceptance tests) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (Envoy metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request duration check (Envoy metric) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: |
| Custom promql checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Ingress gateway (CORS, retries and timeouts) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: |
## Roadmap
* Integrate with other service mesh technologies like Linkerd v2
* Add support for comparing the canary metrics to the primary ones and validating based on the deviation between the two
## Contributing
Flagger is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
When submitting bug reports, please include as many details as possible:
* which Flagger version
* which Flagger CRD version
* which Kubernetes/Istio version
* what configuration (canary, virtual service and workloads definitions)
* what happened (Flagger, Istio Pilot and Proxy logs)
## Getting Help
If you have any questions about Flagger and progressive delivery:
* Read the Flagger [docs](https://docs.flagger.app).
* Invite yourself to the [Weave community slack](https://slack.weave.works/)
and join the [#flagger](https://weave-community.slack.com/messages/flagger/) channel.
* Join the [Weave User Group](https://www.meetup.com/pro/Weave/) and get invited to online talks,
hands-on training and meetups in your area.
* File an [issue](https://github.com/weaveworks/flagger/issues/new).
Your feedback is always welcome!

View File

@@ -1,62 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: abtest
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: abtest
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: abtest
service:
# container port
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
# Istio virtual service host names (optional)
hosts:
- abtest.istio.weavedx.com
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# total number of iterations
iterations: 10
# canary match condition
match:
- headers:
user-agent:
regex: "^(?!.*Chrome)(?=.*\bSafari\b).*$"
- headers:
cookie:
regex: "^(.*?;)?(type=insider)(;.*)?$"
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test:9898/"

View File

@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: abtest
namespace: test
labels:
app: abtest
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: abtest
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: abtest
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: abtest
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: abtest
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,50 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# App Mesh reference
meshName: global
# define the canary analysis timing and KPIs
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# App Mesh Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

View File

@@ -1,65 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi

View File

@@ -1,6 +0,0 @@
apiVersion: appmesh.k8s.aws/v1beta1
kind: Mesh
metadata:
name: global
spec:
serviceDiscoveryType: dns

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,177 +0,0 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-config
namespace: test
labels:
app: ingress
data:
envoy.yaml: |
static_resources:
listeners:
- address:
socket_address:
address: 0.0.0.0
port_value: 80
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
access_log:
- name: envoy.file_access_log
config:
path: /dev/stdout
codec_type: auto
stat_prefix: ingress_http
http_filters:
- name: envoy.router
config: {}
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match:
prefix: "/"
route:
cluster: podinfo
host_rewrite: podinfo.test
timeout: 15s
retry_policy:
retry_on: "gateway-error,connect-failure,refused-stream"
num_retries: 10
per_try_timeout: 5s
clusters:
- name: podinfo
connect_timeout: 0.30s
type: strict_dns
lb_policy: round_robin
http2_protocol_options: {}
hosts:
- socket_address:
address: podinfo.test
port_value: 9898
admin:
access_log_path: /dev/null
address:
socket_address:
address: 0.0.0.0
port_value: 9999
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress
namespace: test
labels:
app: ingress
spec:
replicas: 1
selector:
matchLabels:
app: ingress
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
app: ingress
annotations:
prometheus.io/path: "/stats/prometheus"
prometheus.io/port: "9999"
prometheus.io/scrape: "true"
# dummy port to exclude ingress from mesh traffic
# only egress should go over the mesh
appmesh.k8s.aws/ports: "444"
spec:
terminationGracePeriodSeconds: 30
containers:
- name: ingress
image: "envoyproxy/envoy-alpine:d920944aed67425f91fc203774aebce9609e5d9a"
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
command:
- /usr/bin/dumb-init
- --
args:
- /usr/local/bin/envoy
- --base-id 30
- --v2-config-only
- -l
- $loglevel
- -c
- /config/envoy.yaml
ports:
- name: admin
containerPort: 9999
protocol: TCP
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
initialDelaySeconds: 5
tcpSocket:
port: admin
readinessProbe:
initialDelaySeconds: 5
tcpSocket:
port: admin
resources:
requests:
cpu: 100m
memory: 64Mi
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: ingress-config
---
kind: Service
apiVersion: v1
metadata:
name: ingress
namespace: test
spec:
selector:
app: ingress
ports:
- protocol: TCP
name: http
port: 80
targetPort: 80
- protocol: TCP
name: https
port: 443
targetPort: 443
type: LoadBalancer
---
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
name: ingress
namespace: test
spec:
meshName: global
listeners:
- portMapping:
port: 80
protocol: http
serviceDiscovery:
dns:
hostName: ingress.test
backends:
- virtualService:
virtualServiceName: podinfo.test

View File

@@ -1,77 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
# Istio virtual service host names (optional)
hosts:
- app.istio.weavedx.com
# HTTP match conditions (optional)
match:
- uri:
prefix: /
# HTTP rewrite (optional)
rewrite:
uri: /
# Envoy timeout and retry policy (optional)
headers:
request:
add:
x-envoy-upstream-rq-timeout-ms: "15000"
x-envoy-max-retries: "10"
x-envoy-retry-on: "gateway-error,connect-failure,refused-stream"
# promote the canary without analysing it (default false)
skipAnalysis: false
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# Istio Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
logCmdOutput: "true"

View File

@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,6 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: test
labels:
istio-injection: enabled

View File

@@ -1,26 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: backend
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: regexp:^1.4.*
spec:
releaseName: backend
chart:
repository: https://flagger.app/
name: podinfo
version: 2.0.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.4.0
httpServer:
timeout: 30s
canary:
enabled: true
istioIngress:
enabled: false
loadtest:
enabled: true

View File

@@ -1,27 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: frontend
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: semver:~1.4
spec:
releaseName: frontend
chart:
repository: https://flagger.app/
name: podinfo
version: 2.0.0
values:
image:
repository: quay.io/stefanprodan/podinfo
tag: 1.4.0
backend: http://backend-podinfo:9898/echo
canary:
enabled: true
istioIngress:
enabled: true
gateway: public-gateway.istio-system.svc.cluster.local
host: frontend.istio.example.com
loadtest:
enabled: true

View File

@@ -1,18 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
name: loadtester
namespace: test
annotations:
flux.weave.works/automated: "true"
flux.weave.works/tag.chart-image: glob:0.*
spec:
releaseName: flagger-loadtester
chart:
repository: https://flagger.app/
name: loadtester
version: 0.1.0
values:
image:
repository: quay.io/stefanprodan/flagger-loadtester
tag: 0.1.0

View File

@@ -1,59 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# container port
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
# Istio virtual service host names (optional)
hosts:
- app.iowa.weavedx.com
canaryAnalysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# Istio Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
threshold: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

View File

@@ -1,16 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: podinfo-config-env
namespace: test
data:
color: blue
---
apiVersion: v1
kind: ConfigMap
metadata:
name: podinfo-config-vol
namespace: test
data:
output: console
textmode: "true"

View File

@@ -1,89 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
minReadySeconds: 5
revisionHistoryLimit: 5
progressDeadlineSeconds: 60
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.3.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
valueFrom:
configMapKeyRef:
name: podinfo-config-env
key: color
- name: SECRET_USER
valueFrom:
secretKeyRef:
name: podinfo-secret-env
key: user
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 100m
memory: 64Mi
volumeMounts:
- name: configs
mountPath: /etc/podinfo/configs
readOnly: true
- name: secrets
mountPath: /etc/podinfo/secrets
readOnly: true
volumes:
- name: configs
configMap:
name: podinfo-config-vol
- name: secrets
secret:
secretName: podinfo-secret-vol

View File

@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 1
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99

View File

@@ -1,16 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: podinfo-secret-env
namespace: test
data:
password: cGFzc3dvcmQ=
user: YWRtaW4=
---
apiVersion: v1
kind: Secret
metadata:
name: podinfo-secret-vol
namespace: test
data:
key: cGFzc3dvcmQ=

View File

@@ -1,264 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus
labels:
app: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus
labels:
app: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: appmesh-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
data:
prometheus.yml: |-
global:
scrape_interval: 5s
scrape_configs:
# Scrape config for AppMesh Envoy sidecar
- job_name: 'appmesh-envoy'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_name]
action: keep
regex: '^envoy$'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: ${1}:9901
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
# Exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
# Scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# Scrape config for nodes
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# scrape config for cAdvisor
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# scrape config for pods
- job_name: kubernetes-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- source_labels: [ __address__ ]
regex: '.*9901.*'
action: drop
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
namespace: appmesh-system
labels:
app: prometheus
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
annotations:
version: "appmesh-v1alpha1"
spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.7.1"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=6h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
memory: 128Mi
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
volumes:
- name: config-volume
configMap:
name: prometheus
---
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: appmesh-system
labels:
name: prometheus
spec:
selector:
app: prometheus
ports:
- name: http
protocol: TCP
port: 9090

View File

@@ -1,74 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: flagger
namespace: istio-system
labels:
app: flagger
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: flagger
labels:
app: flagger
rules:
- apiGroups:
- ""
resources:
- events
- configmaps
- secrets
- services
verbs: ["*"]
- apiGroups:
- apps
resources:
- deployments
verbs: ["*"]
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs: ["*"]
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
verbs: ["*"]
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/status
verbs: ["*"]
- apiGroups:
- appmesh.k8s.aws
resources:
- meshes
- meshes/status
- virtualnodes
- virtualnodes/status
- virtualservices
- virtualservices/status
verbs: ["*"]
- nonResourceURLs:
- /version
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: flagger
labels:
app: flagger
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flagger
subjects:
- kind: ServiceAccount
name: flagger
namespace: istio-system

View File

@@ -1,156 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1alpha3
versions:
- name: v1alpha3
served: true
storage: true
- name: v1alpha2
served: true
storage: false
- name: v1alpha1
served: true
storage: false
names:
plural: canaries
singular: canary
kind: Canary
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
JSONPath: .status.phase
- name: Weight
type: string
JSONPath: .status.canaryWeight
- name: LastTransitionTime
type: string
JSONPath: .status.lastTransitionTime
validation:
openAPIV3Schema:
properties:
spec:
required:
- targetRef
- service
- canaryAnalysis
properties:
progressDeadlineSeconds:
type: number
targetRef:
type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
autoscalerRef:
anyOf:
- type: string
- type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
service:
type: object
required: ['port']
properties:
port:
type: number
portName:
type: string
meshName:
type: string
timeout:
type: string
skipAnalysis:
type: boolean
canaryAnalysis:
properties:
interval:
type: string
pattern: "^[0-9]+(m|s)"
iterations:
type: number
threshold:
type: number
maxWeight:
type: number
stepWeight:
type: number
metrics:
type: array
properties:
items:
type: object
required: ['name', 'threshold']
properties:
name:
type: string
interval:
type: string
pattern: "^[0-9]+(m|s)"
threshold:
type: number
query:
type: string
webhooks:
type: array
properties:
items:
type: object
required: ['name', 'url', 'timeout']
properties:
name:
type: string
type:
type: string
enum:
- ""
- pre-rollout
- rollout
- post-rollout
url:
type: string
format: url
timeout:
type: string
pattern: "^[0-9]+(m|s)"
status:
properties:
phase:
type: string
enum:
- ""
- Initialized
- Progressing
- Succeeded
- Failed
canaryWeight:
type: number
failedChecks:
type: number
iterations:
type: number
lastAppliedSpec:
type: string
lastTransitionTime:
type: string

View File

@@ -1,65 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
namespace: istio-system
labels:
app: flagger
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: flagger
template:
metadata:
labels:
app: flagger
annotations:
prometheus.io/scrape: "true"
spec:
serviceAccountName: flagger
containers:
- name: flagger
image: weaveworks/flagger:0.12.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
command:
- ./flagger
- -log-level=info
- -control-loop-interval=10s
- -mesh-provider=$(MESH_PROVIDER)
- -metrics-server=http://prometheus.istio-system.svc.cluster.local:9090
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=2
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=2
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001

View File

@@ -1,27 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt

View File

@@ -1,443 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prometheus
labels:
app: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prometheus
labels:
app: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: istio-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
---
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
data:
prometheus.yml: |-
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'istio-mesh'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;prometheus
# Scrape config for envoy stats
- job_name: 'envoy-stats'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-envoy-prom'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:15090
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
metric_relabel_configs:
# Exclude some of the envoy metrics that have massive cardinality
# This list may need to be pruned further moving forward, as informed
# by performance and scalability testing.
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
- job_name: 'istio-policy'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-policy;http-monitoring
- job_name: 'istio-telemetry'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;http-monitoring
- job_name: 'pilot'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-pilot;http-monitoring
- job_name: 'galley'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-galley;http-monitoring
# scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# scrape config for nodes (kubelet)
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# Scrape config for Kubelet cAdvisor.
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with 'container_') have been removed from the
# Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
# HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
# in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
# the --cadvisor-port=0 Kubelet flag).
#
# This job is not necessary and should be removed in Kubernetes 1.6 and
# earlier versions, or it will cause the metrics to be scraped twice.
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# scrape config for service endpoints.
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs: # If first two labels are present, pod should be scraped by the istio-secure job.
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status]
action: drop
regex: (.+)
- source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
action: drop
regex: (true)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- job_name: 'kubernetes-pods-istio-secure'
scheme: https
tls_config:
ca_file: /etc/istio-certs/root-cert.pem
cert_file: /etc/istio-certs/cert-chain.pem
key_file: /etc/istio-certs/key.pem
insecure_skip_verify: true # prometheus does not support secure naming.
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
# sidecar status annotation is added by sidecar injector and
# istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
- source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
action: keep
regex: (([^;]+);([^;]*))|(([^;]*);(true))
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__] # Only keep address that is host:port
action: keep # otherwise an extra target with ':443' is added for https scheme
regex: ([^:]+):(\d+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
---
# Source: istio/charts/prometheus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: istio-system
annotations:
prometheus.io/scrape: 'true'
labels:
name: prometheus
spec:
selector:
app: prometheus
ports:
- name: http-prometheus
protocol: TCP
port: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
namespace: istio-system
labels:
app: prometheus
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
annotations:
sidecar.istio.io/inject: "false"
scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
serviceAccountName: prometheus
containers:
- name: prometheus
image: "docker.io/prom/prometheus:v2.7.1"
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention=6h'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
- mountPath: /etc/istio-certs
name: istio-certs
volumes:
- name: config-volume
configMap:
name: prometheus
- name: istio-certs
secret:
defaultMode: 420
optional: true
secretName: istio.default
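The two pod scrape jobs above only keep pods that opt in through annotations, and they rewrite the scrape target from `prometheus.io/port` and `prometheus.io/path`. A minimal sketch of the pod annotations these relabel rules look for (the pod name, port and path values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podinfo-example            # hypothetical pod, shown only to illustrate the annotations
  annotations:
    prometheus.io/scrape: "true"   # matched by the 'keep' rule on ..._prometheus_io_scrape
    prometheus.io/port: "9898"     # joined with the pod address to rewrite __address__
    prometheus.io/path: "/metrics" # copied into __metrics_path__
spec:
  containers:
    - name: podinfod
      image: quay.io/stefanprodan/podinfo:1.4.0
```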


@@ -1,19 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: flagger-loadtester-bats
data:
tests: |
#!/usr/bin/env bats
@test "check message" {
curl -sS http://${URL} | jq -r .message | {
run cut -d $' ' -f1
[ $output = "greetings" ]
}
}
@test "check headers" {
curl -sS http://${URL}/headers | grep X-Request-Id
}


@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger-loadtester
labels:
app: flagger-loadtester
spec:
selector:
matchLabels:
app: flagger-loadtester
template:
metadata:
labels:
app: flagger-loadtester
annotations:
prometheus.io/scrape: "true"
spec:
containers:
- name: loadtester
image: weaveworks/flagger-loadtester:0.3.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level=info
- -timeout=1h
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001
# volumeMounts:
# - name: tests
# mountPath: /bats
# readOnly: true
# volumes:
# - name: tests
# configMap:
# name: flagger-loadtester-bats


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: flagger-loadtester
labels:
app: flagger-loadtester
spec:
type: ClusterIP
selector:
app: flagger-loadtester
ports:
- name: http
port: 80
protocol: TCP
targetPort: http


@@ -1,7 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: test
labels:
istio-injection: enabled
appmesh.k8s.aws/sidecarInjectorWebhook: enabled


@@ -1,45 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.istio.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo
subset: primary
weight: 50
- destination:
host: podinfo
subset: canary
weight: 50
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo-destination
namespace: test
spec:
host: podinfo
trafficPolicy:
loadBalancer:
consistentHash:
httpCookie:
name: istiouser
ttl: 30s
subsets:
- name: primary
labels:
app: podinfo
role: primary
- name: canary
labels:
app: podinfo
role: canary


@@ -1,43 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- app.istio.weavedx.com
- podinfo
http:
- match:
- headers:
user-agent:
regex: ^(?!.*Chrome)(?=.*\bSafari\b).*$
uri:
prefix: "/version/"
rewrite:
uri: /api/info
route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 0
- destination:
host: podinfo
port:
number: 9898
weight: 100
- match:
- uri:
prefix: "/version/"
rewrite:
uri: /api/info
route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100


@@ -1,25 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.iowa.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100
mirror:
host: podinfo
port:
number: 9898


@@ -1,26 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.iowa.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100
- destination:
host: podinfo
port:
number: 9898
weight: 0


@@ -1,69 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: green
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
failureThreshold: 3
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 2
resources:
limits:
cpu: 1000m
memory: 256Mi
requests:
cpu: 100m
memory: 16Mi


@@ -1,23 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
minReplicas: 2
maxReplicas: 3
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99
- type: Resource
resource:
name: memory
targetAverageValue: 200Mi


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-canary
namespace: test
spec:
type: ClusterIP
selector:
app: podinfo
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http


@@ -1,71 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-primary
namespace: test
labels:
app: podinfo-primary
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 0
type: RollingUpdate
selector:
matchLabels:
app: podinfo-primary
template:
metadata:
annotations:
prometheus.io/scrape: "true"
labels:
app: podinfo-primary
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.4.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9898
name: http
protocol: TCP
command:
- ./podinfo
- --port=9898
- --level=info
- --random-delay=false
- --random-error=false
env:
- name: PODINFO_UI_COLOR
value: blue
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/healthz
initialDelaySeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:9898/readyz
initialDelaySeconds: 5
failureThreshold: 3
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 2000m
memory: 512Mi
requests:
cpu: 10m
memory: 64Mi


@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: podinfo-primary
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo-primary
minReplicas: 2
maxReplicas: 4
metrics:
- type: Resource
resource:
name: cpu
# scale up if usage is above
# 99% of the requested CPU (100m)
targetAverageUtilization: 99


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-primary
namespace: test
labels:
app: podinfo-primary
spec:
type: ClusterIP
selector:
app: podinfo-primary
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
namespace: test
spec:
type: ClusterIP
selector:
app: podinfo-primary
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http


@@ -1,30 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- podinfo.istio.weavedx.com
- podinfo
http:
- route:
- destination:
host: podinfo-primary
port:
number: 9898
weight: 100
- destination:
host: podinfo
port:
number: 9898
weight: 0
timeout: 10s
retries:
attempts: 3
perTryTimeout: 2s


@@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@@ -1,20 +0,0 @@
apiVersion: v1
name: flagger
version: 0.12.0
appVersion: 0.12.0
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio routing for traffic shifting and Prometheus metrics for canary analysis.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
sources:
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- canary
- istio
- appmesh
- gitops


@@ -1,84 +0,0 @@
# Flagger
[Flagger](https://github.com/weaveworks/flagger) is a Kubernetes operator that automates the promotion of
canary deployments using Istio routing for traffic shifting and Prometheus metrics for canary analysis.
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators
such as HTTP request success rate, average request duration and pod health.
Based on the KPI analysis, a canary is promoted or aborted, and the analysis result is published to Slack.
## Prerequisites
* Kubernetes >= 1.11
* Istio >= 1.0
* Prometheus >= 2.6
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger`:
```console
$ helm install --name flagger --namespace istio-system flagger/flagger
```
The command deploys Flagger on the Kubernetes cluster in the istio-system namespace.
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
To uninstall/delete the `flagger` deployment:
```console
$ helm delete --purge flagger
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Flagger chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | image repository | `quay.io/stefanprodan/flagger`
`image.tag` | image tag | `<VERSION>`
`image.pullPolicy` | image pull policy | `IfNotPresent`
`metricsServer` | Prometheus URL | `http://prometheus.istio-system:9090`
`slack.url` | Slack incoming webhook | None
`slack.channel` | Slack channel | None
`slack.user` | Slack username | `flagger`
`rbac.create` | if `true`, create and use RBAC resources | `true`
`crd.create` | if `true`, create Flagger's CRDs | `true`
`resources.requests/cpu` | pod CPU request | `10m`
`resources.requests/memory` | pod memory request | `32Mi`
`resources.limits/cpu` | pod CPU limit | `1000m`
`resources.limits/memory` | pod memory limit | `512Mi`
`affinity` | node/pod affinities | None
`nodeSelector` | node labels for pod assignment | `{}`
`tolerations` | list of node taints to tolerate | `[]`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace istio-system \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace istio-system \
-f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)
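For reference, a minimal `values.yaml` overriding a few of the parameters from the table above could look like this (the Slack webhook URL is a placeholder):
```yaml
# example values.yaml for the flagger chart (illustrative overrides)
metricsServer: http://prometheus.istio-system:9090
slack:
  url: https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
  channel: general
  user: flagger
resources:
  requests:
    cpu: 10m
    memory: 32Mi
```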


@@ -1 +0,0 @@
Flagger installed


@@ -1,42 +0,0 @@
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "flagger.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Expand the name of the chart.
*/}}
{{- define "flagger.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "flagger.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "flagger.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "flagger.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}


@@ -1,11 +0,0 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "flagger.serviceAccountName" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}


@@ -1,158 +0,0 @@
{{- if .Values.crd.create }}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
version: v1alpha3
versions:
- name: v1alpha3
served: true
storage: true
- name: v1alpha2
served: true
storage: false
- name: v1alpha1
served: true
storage: false
names:
plural: canaries
singular: canary
kind: Canary
categories:
- all
scope: Namespaced
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
JSONPath: .status.phase
- name: Weight
type: string
JSONPath: .status.canaryWeight
- name: LastTransitionTime
type: string
JSONPath: .status.lastTransitionTime
validation:
openAPIV3Schema:
properties:
spec:
required:
- targetRef
- service
- canaryAnalysis
properties:
progressDeadlineSeconds:
type: number
targetRef:
type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
autoscalerRef:
anyOf:
- type: string
- type: object
required: ['apiVersion', 'kind', 'name']
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
service:
type: object
required: ['port']
properties:
port:
type: number
portName:
type: string
meshName:
type: string
timeout:
type: string
skipAnalysis:
type: boolean
canaryAnalysis:
properties:
interval:
type: string
pattern: "^[0-9]+(m|s)"
iterations:
type: number
threshold:
type: number
maxWeight:
type: number
stepWeight:
type: number
metrics:
type: array
properties:
items:
type: object
required: ['name', 'threshold']
properties:
name:
type: string
interval:
type: string
pattern: "^[0-9]+(m|s)"
threshold:
type: number
query:
type: string
webhooks:
type: array
properties:
items:
type: object
required: ['name', 'url', 'timeout']
properties:
name:
type: string
type:
type: string
enum:
- ""
- pre-rollout
- rollout
- post-rollout
url:
type: string
format: url
timeout:
type: string
pattern: "^[0-9]+(m|s)"
status:
properties:
phase:
type: string
enum:
- ""
- Initialized
- Progressing
- Succeeded
- Failed
canaryWeight:
type: number
failedChecks:
type: number
iterations:
type: number
lastAppliedSpec:
type: string
lastTransitionTime:
type: string
{{- end }}
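A Canary object accepted by the schema above must set the required `targetRef`, `service` and `canaryAnalysis` fields. A hedged sketch, with names, thresholds and the load-test command chosen only for illustration:

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        threshold: 99
        interval: 1m
      - name: request-duration
        threshold: 500
        interval: 1m
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 http://podinfo.test:9898"
```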


@@ -1,83 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}
containers:
- name: flagger
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8080
command:
- ./flagger
- -log-level=info
{{- if .Values.meshProvider }}
- -mesh-provider={{ .Values.meshProvider }}
{{- end }}
- -metrics-server={{ .Values.metricsServer }}
{{- if .Values.namespace }}
- -namespace={{ .Values.namespace }}
{{- end }}
{{- if .Values.slack.url }}
- -slack-url={{ .Values.slack.url }}
- -slack-user={{ .Values.slack.user }}
- -slack-channel={{ .Values.slack.channel }}
{{- end }}
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}


@@ -1,74 +0,0 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups:
- ""
resources:
- events
- configmaps
- secrets
- services
verbs: ["*"]
- apiGroups:
- apps
resources:
- deployments
verbs: ["*"]
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs: ["*"]
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
verbs: ["*"]
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/status
verbs: ["*"]
- apiGroups:
- appmesh.k8s.aws
resources:
- meshes
- meshes/status
- virtualnodes
- virtualnodes/status
- virtualservices
- virtualservices/status
verbs: ["*"]
- nonResourceURLs:
- /version
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "flagger.fullname" . }}
subjects:
- name: {{ template "flagger.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
kind: ServiceAccount
{{- end }}


@@ -1,51 +0,0 @@
# Default values for flagger.
image:
repository: weaveworks/flagger
tag: 0.12.0
pullPolicy: IfNotPresent
metricsServer: "http://prometheus:9090"
# accepted values are istio or appmesh (defaults to istio)
meshProvider: ""
# single namespace restriction
namespace: ""
slack:
user: flagger
channel:
# incoming webhook https://api.slack.com/incoming-webhooks
url:
serviceAccount:
# serviceAccount.create: Whether to create a service account or not
create: true
# serviceAccount.name: The name of the service account to create or use
name: ""
rbac:
# rbac.create: `true` if rbac resources should be created
create: true
crd:
# crd.create: `true` if custom resource definitions should be created
create: true
nameOverride: ""
fullnameOverride: ""
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
nodeSelector: {}
tolerations: []
affinity: {}


@@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@@ -1,13 +0,0 @@
apiVersion: v1
name: grafana
version: 1.1.0
appVersion: 5.4.3
description: Grafana dashboards for monitoring Flagger canary deployments
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
home: https://flagger.app
sources:
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com


@@ -1,79 +0,0 @@
# Flagger Grafana
Grafana dashboards for monitoring progressive deployments powered by Istio, Prometheus and Flagger.
![flagger-grafana](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/screens/grafana-canary-analysis.png)
## Prerequisites
* Kubernetes >= 1.11
* Istio >= 1.0
* Prometheus >= 2.6
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger-grafana`:
```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus:9090 \
--set user=admin \
--set password=admin
```
The command deploys Grafana on the Kubernetes cluster in the istio-system namespace.
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
To uninstall/delete the `flagger-grafana` deployment:
```console
helm delete --purge flagger-grafana
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Grafana chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | Image repository | `grafana/grafana`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`image.tag` | Image tag | `<VERSION>`
`replicaCount` | desired number of pods | `1`
`resources` | pod resources | `none`
`tolerations` | List of node taints to tolerate | `[]`
`affinity` | node/pod affinities | `node`
`nodeSelector` | node labels for pod assignment | `{}`
`service.type` | type of service | `ClusterIP`
`url` | Prometheus URL, used when Weave Cloud token is empty | `http://prometheus:9090`
`token` | Weave Cloud token | `none`
`user` | Grafana admin username | `admin`
`password` | Grafana admin password | `admin`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install flagger/grafana --name flagger-grafana \
--set token=WEAVE-CLOUD-TOKEN
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install flagger/grafana --name flagger-grafana -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)
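A matching `values.yaml` for the parameters above might be (the admin credentials are placeholders):
```yaml
# example values.yaml for the grafana chart (illustrative)
replicaCount: 1
url: http://prometheus:9090
user: admin
password: change-me
service:
  type: ClusterIP
```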

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,7 +0,0 @@
1. Run the port forward command:
kubectl -n {{ .Release.Namespace }} port-forward svc/{{ .Release.Name }} 3000:80
2. Navigate to:
http://localhost:3000


@@ -1,32 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "grafana.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "grafana.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "grafana.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@@ -1,6 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" . }}-dashboards
data:
{{ (.Files.Glob "dashboards/*").AsConfig | indent 2 }}


@@ -1,32 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" . }}-datasources
data:
datasources.yaml: |-
apiVersion: 1
deleteDatasources:
- name: prometheus
{{- if .Values.token }}
datasources:
- name: prometheus
type: prometheus
access: proxy
url: https://cloud.weave.works/api/prom
isDefault: true
editable: true
version: 1
basicAuth: true
basicAuthUser: weave
basicAuthPassword: {{ .Values.token }}
{{- else }}
datasources:
- name: prometheus
type: prometheus
access: proxy
url: {{ .Values.url }}
isDefault: true
editable: true
version: 1
{{- end }}


@@ -1,90 +0,0 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ template "grafana.fullname" . }}
labels:
app: {{ template "grafana.fullname" . }}
chart: {{ template "grafana.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}
annotations:
prometheus.io/scrape: 'false'
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 3000
protocol: TCP
# livenessProbe:
# httpGet:
# path: /
# port: http
# readinessProbe:
# httpGet:
# path: /
# port: http
env:
- name: GF_PATHS_PROVISIONING
value: /etc/grafana/provisioning/
{{- if .Values.password }}
- name: GF_SECURITY_ADMIN_USER
value: {{ .Values.user }}
- name: GF_SECURITY_ADMIN_PASSWORD
value: {{ .Values.password }}
{{- else }}
- name: GF_AUTH_BASIC_ENABLED
value: "false"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Admin
{{- end }}
volumeMounts:
- name: grafana
mountPath: /var/lib/grafana
- name: dashboards
mountPath: /etc/grafana/dashboards
- name: datasources
mountPath: /etc/grafana/provisioning/datasources
- name: providers
mountPath: /etc/grafana/provisioning/dashboards
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: grafana
emptyDir: {}
- name: dashboards
configMap:
name: {{ template "grafana.fullname" . }}-dashboards
- name: providers
configMap:
name: {{ template "grafana.fullname" . }}-providers
- name: datasources
configMap:
name: {{ template "grafana.fullname" . }}-datasources


@@ -1,17 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" . }}-providers
data:
providers.yaml: |+
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
editable: true
options:
path: /etc/grafana/dashboards


@@ -1,19 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "grafana.fullname" . }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}


@@ -1,37 +0,0 @@
# Default values for grafana.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: grafana/grafana
tag: 5.4.3
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
user: admin
password:
# Istio Prometheus instance
url: http://prometheus:9090
# Weave Cloud instance token
token:


@@ -1,22 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,21 +0,0 @@
apiVersion: v1
name: loadtester
version: 0.4.0
appVersion: 0.3.0
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger's load testing service based on rakyll/hey that generates traffic during canary analysis when configured as a webhook.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
sources:
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- canary
- istio
- appmesh
- gitops
- load testing


@@ -1,78 +0,0 @@
# Flagger load testing service
[Flagger's](https://github.com/weaveworks/flagger) load testing service is based on
[rakyll/hey](https://github.com/rakyll/hey)
and can be used to generate traffic during canary analysis when configured as a webhook.
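When wired into a canary, the load tester is called through a `webhooks` entry in the analysis spec; a minimal sketch (the service address and target URL are illustrative):
```yaml
canaryAnalysis:
  webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
```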
## Prerequisites
* Kubernetes >= 1.11
* Istio >= 1.0
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger-loadtester`:
```console
helm upgrade -i flagger-loadtester flagger/loadtester
```
The command deploys the load tester on the Kubernetes cluster in the default namespace.
> **Tip**: The namespace where you deploy the load tester should have Istio sidecar injection enabled.
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
To uninstall/delete the `flagger-loadtester` deployment:
```console
helm delete --purge flagger-loadtester
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the load tester chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | Image repository | `quay.io/stefanprodan/flagger-loadtester`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`image.tag` | Image tag | `<VERSION>`
`replicaCount` | desired number of pods | `1`
`resources.requests.cpu` | CPU requests | `10m`
`resources.requests.memory` | memory requests | `64Mi`
`tolerations` | List of node taints to tolerate | `[]`
`affinity` | node/pod affinities | `node`
`nodeSelector` | node labels for pod assignment | `{}`
`service.type` | type of service | `ClusterIP`
`service.port` | ClusterIP port | `80`
`cmd.timeout` | Command execution timeout | `1h`
`logLevel` | Log level can be debug, info, warning, error or panic | `info`
`meshName` | AWS App Mesh name | `none`
`backends` | AWS App Mesh virtual services | `none`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install flagger/loadtester --name flagger-loadtester
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install flagger/loadtester --name flagger-loadtester -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)
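A small `values.yaml` based on the parameters above might be (the mesh name and backends are illustrative and only needed on AWS App Mesh):
```yaml
# example values.yaml for the loadtester chart (illustrative)
logLevel: info
cmd:
  timeout: 1h
service:
  type: ClusterIP
  port: 80
# AWS App Mesh virtual node settings
meshName: my-mesh
backends:
  - app1.namespace
  - app2.namespace
```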


@@ -1 +0,0 @@
Flagger's load testing service is available at http://{{ include "loadtester.fullname" . }}.{{ .Release.Namespace }}/


@@ -1,32 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "loadtester.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "loadtester.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "loadtester.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "loadtester.name" . }}
template:
metadata:
labels:
app: {{ include "loadtester.name" . }}
annotations:
appmesh.k8s.aws/ports: "444"
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level={{ .Values.logLevel }}
- -timeout={{ .Values.cmd.timeout }}
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ include "loadtester.name" . }}


@@ -1,27 +0,0 @@
{{- if .Values.meshName }}
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
meshName: {{ .Values.meshName }}
listeners:
- portMapping:
port: 444
protocol: http
serviceDiscovery:
dns:
hostName: {{ include "loadtester.fullname" . }}.{{ .Release.Namespace }}
{{- if .Values.backends }}
backends:
{{- range .Values.backends }}
- virtualService:
virtualServiceName: {{ . }}
{{- end }}
{{- end }}
{{- end }}


@@ -1,34 +0,0 @@
replicaCount: 1
image:
repository: weaveworks/flagger-loadtester
tag: 0.3.0
pullPolicy: IfNotPresent
logLevel: info
cmd:
timeout: 1h
nameOverride: ""
fullnameOverride: ""
service:
type: ClusterIP
port: 80
resources:
requests:
cpu: 10m
memory: 64Mi
nodeSelector: {}
tolerations: []
affinity: {}
# App Mesh virtual node settings
meshName: ""
#backends:
# - app1.namespace
# - app2.namespace


@@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@@ -1,12 +0,0 @@
apiVersion: v1
version: 2.0.1
appVersion: 1.4.0
name: podinfo
engine: gotpl
description: Flagger canary deployment demo chart
home: https://github.com/weaveworks/flagger
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
sources:
- https://github.com/weaveworks/flagger


@@ -1,79 +0,0 @@
# Podinfo
Podinfo is a tiny web application made with Go
that showcases best practices for running canary deployments with Flagger and Istio.
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart with the release name `frontend`:
```console
helm upgrade -i frontend flagger/podinfo \
--namespace test \
--set nameOverride=frontend \
--set backend=http://backend.test:9898/echo \
--set canary.enabled=true \
--set canary.istioIngress.enabled=true \
--set canary.istioIngress.gateway=public-gateway.istio-system.svc.cluster.local \
--set canary.istioIngress.host=frontend.istio.example.com
```
To install the chart as `backend`:
```console
helm upgrade -i backend flagger/podinfo \
--namespace test \
--set nameOverride=backend \
--set canary.enabled=true
```
## Uninstalling the Chart
To uninstall/delete the `frontend` deployment:
```console
$ helm delete --purge frontend
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the podinfo chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | image repository | `quay.io/stefanprodan/podinfo`
`image.tag` | image tag | `<VERSION>`
`image.pullPolicy` | image pull policy | `IfNotPresent`
`hpa.enabled` | enables HPA | `true`
`hpa.cpu` | target CPU usage per pod | `80`
`hpa.memory` | target memory usage per pod | `512Mi`
`hpa.minReplicas` | minimum pod replicas | `2`
`hpa.maxReplicas` | maximum pod replicas | `4`
`resources.requests/cpu` | pod CPU request | `1m`
`resources.requests/memory` | pod memory request | `16Mi`
`backend` | backend URL | None
`faults.delay` | random HTTP response delays between 0 and 5 seconds | `false`
`faults.error` | 1/3 chance of a random HTTP response error | `false`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install flagger/podinfo --name frontend \
--set=image.tag=1.4.1,hpa.enabled=false
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm install flagger/podinfo --name frontend -f values.yaml
```
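A `values.yaml` equivalent to the `--set` flags shown earlier could look like this (hostnames are placeholders):
```yaml
# example values.yaml for the podinfo chart (illustrative)
image:
  tag: 1.4.1
hpa:
  enabled: false
backend: http://backend.test:9898/echo
faults:
  delay: false
  error: false
canary:
  enabled: true
  istioIngress:
    enabled: true
    gateway: public-gateway.istio-system.svc.cluster.local
    host: frontend.istio.example.com
```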


@@ -1 +0,0 @@
podinfo {{ .Release.Name }} deployed!


@@ -1,43 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "podinfo.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "podinfo.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "podinfo.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name suffix.
*/}}
{{- define "podinfo.suffix" -}}
{{- if .Values.canary.enabled -}}
{{- "-primary" -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}


@@ -1,54 +0,0 @@
{{- if .Values.canary.enabled }}
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ template "podinfo.fullname" . }}
progressDeadlineSeconds: 60
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: {{ template "podinfo.fullname" . }}
service:
port: {{ .Values.service.port }}
{{- if .Values.canary.istioIngress.enabled }}
gateways:
- {{ .Values.canary.istioIngress.gateway }}
hosts:
- {{ .Values.canary.istioIngress.host }}
{{- end }}
canaryAnalysis:
interval: {{ .Values.canary.analysis.interval }}
threshold: {{ .Values.canary.analysis.threshold }}
maxWeight: {{ .Values.canary.analysis.maxWeight }}
stepWeight: {{ .Values.canary.analysis.stepWeight }}
metrics:
- name: request-success-rate
threshold: {{ .Values.canary.thresholds.successRate }}
interval: 1m
- name: request-duration
threshold: {{ .Values.canary.thresholds.latency }}
interval: 1m
{{- if .Values.canary.loadtest.enabled }}
webhooks:
- name: load-test-get
url: {{ .Values.canary.loadtest.url }}
timeout: 5s
metadata:
cmd: "hey -z 1m -q 5 -c 2 http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}"
- name: load-test-post
url: {{ .Values.canary.loadtest.url }}
timeout: 5s
metadata:
cmd: "hey -z 1m -q 5 -c 2 -m POST -d '{\"test\": true}' http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}/echo"
{{- end }}
{{- end }}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
config.yaml: |-
# http settings
http-client-timeout: 1m
http-server-timeout: {{ .Values.httpServer.timeout }}
http-server-shutdown-timeout: 5s

Some files were not shown because too many files have changed in this diff.