Mirror of https://github.com/fluxcd/flagger.git, synced 2026-02-15 02:20:22 +00:00

Compare commits (755 commits)
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -3,7 +3,7 @@ jobs:
   build-binary:
     docker:
-      - image: circleci/golang:1.12
+      - image: circleci/golang:1.14
     working_directory: ~/build
     steps:
       - checkout
@@ -11,8 +11,11 @@ jobs:
           keys:
             - go-mod-v3-{{ checksum "go.sum" }}
       - run:
-          name: Run go fmt
-          command: make test-fmt
+          name: Run go mod download
+          command: go mod download
+      - run:
+          name: Check code formatting
+          command: go install golang.org/x/tools/cmd/goimports && make test-fmt
       - run:
           name: Build Flagger
          command: |
@@ -44,7 +47,7 @@ jobs:
 
   push-container:
     docker:
-      - image: circleci/golang:1.12
+      - image: circleci/golang:1.14
     steps:
       - checkout
      - setup_remote_docker:
@@ -56,7 +59,7 @@ jobs:
 
   push-binary:
     docker:
-      - image: circleci/golang:1.12
+      - image: circleci/golang:1.14
     working_directory: ~/build
     steps:
       - checkout
@@ -65,19 +68,10 @@ jobs:
       - restore_cache:
           keys:
             - go-mod-v3-{{ checksum "go.sum" }}
+      - run: make release-notes
       - run: github-release-notes -org weaveworks -repo flagger -since-latest-release -include-author > /tmp/release.txt
       - run: test/goreleaser.sh
 
-  e2e-istio-testing:
-    machine: true
-    steps:
-      - checkout
-      - attach_workspace:
-          at: /tmp/bin
-      - run: test/container-build.sh
-      - run: test/e2e-kind.sh
-      - run: test/e2e-istio.sh
-      - run: test/e2e-tests.sh
-
   e2e-kubernetes-testing:
     machine: true
     steps:
@@ -85,31 +79,22 @@ jobs:
       - attach_workspace:
           at: /tmp/bin
       - run: test/container-build.sh
-      - run: test/e2e-kind.sh
+      - run: test/e2e-kind.sh v1.18.2
       - run: test/e2e-kubernetes.sh
-      - run: test/e2e-kubernetes-tests.sh
+      - run: test/e2e-kubernetes-tests-deployment.sh
+      - run: test/e2e-kubernetes-cleanup.sh
+      - run: test/e2e-kubernetes-tests-daemonset.sh
 
-  e2e-smi-istio-testing:
+  e2e-istio-testing:
     machine: true
     steps:
       - checkout
       - attach_workspace:
           at: /tmp/bin
       - run: test/container-build.sh
-      - run: test/e2e-kind.sh
-      - run: test/e2e-smi-istio.sh
-      - run: test/e2e-tests.sh canary
-
-  e2e-supergloo-testing:
-    machine: true
-    steps:
-      - checkout
-      - attach_workspace:
-          at: /tmp/bin
-      - run: test/container-build.sh
-      - run: test/e2e-kind.sh 0.2.1
-      - run: test/e2e-supergloo.sh
-      - run: test/e2e-tests.sh canary
+      - run: test/e2e-kind.sh v1.18.2
+      - run: test/e2e-istio.sh
+      - run: test/e2e-istio-tests.sh
 
   e2e-gloo-testing:
     machine: true
@@ -132,6 +117,9 @@ jobs:
       - run: test/e2e-kind.sh
       - run: test/e2e-nginx.sh
       - run: test/e2e-nginx-tests.sh
+      - run: test/e2e-nginx-cleanup.sh
+      - run: test/e2e-nginx-custom-annotations.sh
+      - run: test/e2e-nginx-tests.sh
 
   e2e-linkerd-testing:
     machine: true
@@ -144,9 +132,32 @@ jobs:
       - run: test/e2e-linkerd.sh
       - run: test/e2e-linkerd-tests.sh
 
+  e2e-contour-testing:
+    machine: true
+    steps:
+      - checkout
+      - attach_workspace:
+          at: /tmp/bin
+      - run: test/container-build.sh
+      - run: test/e2e-kind.sh
+      - run: test/e2e-contour.sh
+      - run: test/e2e-contour-tests.sh
+
+  e2e-skipper-testing:
+    machine: true
+    steps:
+      - checkout
+      - attach_workspace:
+          at: /tmp/bin
+      - run: test/container-build.sh
+      - run: test/e2e-kind.sh
+      - run: test/e2e-skipper.sh
+      - run: test/e2e-skipper-tests.sh
+      - run: test/e2e-skipper-cleanup.sh
+
   push-helm-charts:
     docker:
-      - image: circleci/golang:1.12
+      - image: circleci/golang:1.14
     steps:
       - checkout
       - run:
@@ -170,7 +181,7 @@ jobs:
       - run:
           name: Publish charts
           command: |
-            if echo "${CIRCLE_TAG}" | grep -Eq "[0-9]+(\.[0-9]+)*(-[a-z]+)?$"; then
+            if echo "${CIRCLE_TAG}" | grep v; then
              REPOSITORY="https://weaveworksbot:${GITHUB_TOKEN}@github.com/weaveworks/flagger.git"
              git config user.email weaveworksbot@users.noreply.github.com
              git config user.name weaveworksbot
@@ -194,15 +205,13 @@ workflows:
             branches:
               ignore:
                 - gh-pages
-                - /^user-.*/
+      - e2e-istio-testing:
+          requires:
+            - build-binary
       - e2e-kubernetes-testing:
           requires:
             - build-binary
-#      - e2e-supergloo-testing:
-#          requires:
-#            - build-binary
-      - e2e-istio-testing:
-          requires:
-            - build-binary
       - e2e-gloo-testing:
           requires:
             - build-binary
@@ -212,15 +221,25 @@ workflows:
       - e2e-linkerd-testing:
           requires:
             - build-binary
+      - e2e-contour-testing:
+          requires:
+            - build-binary
+      - e2e-skipper-testing:
+          requires:
+            - build-binary
       - push-container:
           requires:
             - build-binary
+            - e2e-istio-testing
             - e2e-kubernetes-testing
-            #- e2e-supergloo-testing
-            - e2e-istio-testing
             - e2e-gloo-testing
             - e2e-nginx-testing
             - e2e-linkerd-testing
+            - e2e-skipper-testing
           filters:
             branches:
               only:
                 - master
 
 release:
   jobs:
@@ -253,4 +272,4 @@ workflows:
           branches:
             ignore: /.*/
           tags:
-            ignore: /^chart.*/
+            ignore: /^chart.*/
--- a/.codecov.yml
+++ b/.codecov.yml
@@ -8,4 +8,7 @@ coverage:
   patch: off
 
 comment:
-  require_changes: yes
+  require_changes: true
+  branches:
+    - "!docs"
+    - "!release"
--- a/.gitbook.yaml
+++ b/.gitbook.yaml
@@ -1 +1,14 @@
-root: ./docs/gitbook
+root: ./docs/gitbook
+
+redirects:
+  how-it-works: usage/how-it-works.md
+  usage/progressive-delivery: tutorials/istio-progressive-delivery.md
+  usage/ab-testing: tutorials/istio-ab-testing.md
+  usage/blue-green: tutorials/kubernetes-blue-green.md
+  usage/appmesh-progressive-delivery: tutorials/appmesh-progressive-delivery.md
+  usage/linkerd-progressive-delivery: tutorials/linkerd-progressive-delivery.md
+  usage/contour-progressive-delivery: tutorials/contour-progressive-delivery.md
+  usage/gloo-progressive-delivery: tutorials/gloo-progressive-delivery.md
+  usage/nginx-progressive-delivery: tutorials/nginx-progressive-delivery.md
+  usage/skipper-progressive-delivery: tutorials/skipper-progressive-delivery.md
+  usage/crossover-progressive-delivery: tutorials/crossover-progressive-delivery.md
.gitignore | 5
--- a/.gitignore
+++ b/.gitignore
@@ -16,4 +16,7 @@ bin/
 _tmp/
 
 artifacts/gcloud/
-.idea
+.idea
+Makefile.dev
+
+vendor
CHANGELOG.md | 463
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,449 @@
 
 All notable changes to this project are documented in this file.
 
+## 1.1.0 (2020-08-18)
+
+Add support for Skipper ingress controller
+
+#### Features
+
+- Skipper Ingress Controller support
+  [#670](https://github.com/weaveworks/flagger/pull/670)
+- Support per-config configTracker disable via ConfigMap/Secret annotation
+  [#671](https://github.com/weaveworks/flagger/pull/671)
+
+#### Improvements
+
+- Add priorityClassName and securityContext to Helm charts
+  [#652](https://github.com/weaveworks/flagger/pull/652)
+  [#668](https://github.com/weaveworks/flagger/pull/668)
+- Update Kubernetes packages to v1.18.8
+  [#672](https://github.com/weaveworks/flagger/pull/672)
+- Update Istio, Linkerd and Contour e2e tests
+  [#661](https://github.com/weaveworks/flagger/pull/661)
+
+#### Fixes
+
+- Fix O(log n) bug over network in GetTargetConfigs
+  [#663](https://github.com/weaveworks/flagger/pull/663)
+- Fix(grafana): metrics change since Kubernetes 1.16
+  [#663](https://github.com/weaveworks/flagger/pull/663)
+
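As an aside, the per-config tracking opt-out from [#671] is driven by an annotation on the tracked object. A minimal sketch, assuming the `flagger.app/config-tracking` annotation key from the Flagger docs (the ConfigMap itself is hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: podinfo-config                      # hypothetical ConfigMap referenced by the canary target
  annotations:
    flagger.app/config-tracking: disabled   # exclude this object from Flagger's config tracking
data:
  color: blue
```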
+## 1.0.1 (2020-07-18)
+
+Add support for App Mesh Gateway GA
+
+#### Improvements
+
+- Update App Mesh docs to v1beta2 API
+  [#649](https://github.com/weaveworks/flagger/pull/649)
+- Add threadiness to Flagger helm chart
+  [#643](https://github.com/weaveworks/flagger/pull/643)
+- Add Istio virtual service to loadtester helm chart
+  [#643](https://github.com/weaveworks/flagger/pull/643)
+
+#### Fixes
+
+- Fix multiple paths per rule on canary ingress
+  [#632](https://github.com/weaveworks/flagger/pull/632)
+- Fix installers for kustomize >= 3.6.0
+  [#646](https://github.com/weaveworks/flagger/pull/646)
+
+## 1.0.0 (2020-06-17)
+
+This is the GA release for Flagger v1.0.0.
+
+The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
+
+Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
+The analysis can reference [metric templates](https://docs.flagger.app//usage/metrics#custom-metrics)
+to query Prometheus, Datadog and AWS CloudWatch.
+[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
+canary basis for Slack, MS Teams, Discord and Rocket.
+
+#### Features
+
+- Implement progressive promotion
+  [#593](https://github.com/weaveworks/flagger/pull/593)
+
+#### Improvements
+
+- istio: Add source labels to analysis matching rules
+  [#594](https://github.com/weaveworks/flagger/pull/594)
+- istio: Add allow origins field to CORS spec
+  [#604](https://github.com/weaveworks/flagger/pull/604)
+- istio: Change builtin metrics to work with Istio telemetry v2
+  [#623](https://github.com/weaveworks/flagger/pull/623)
+- appmesh: Implement App Mesh v1beta2 timeout
+  [#611](https://github.com/weaveworks/flagger/pull/611)
+- metrics: Check metrics server availability during canary initialization
+  [#592](https://github.com/weaveworks/flagger/pull/592)
+
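To make the `MetricTemplate` reference above concrete, here is a sketch along the lines of the custom-metrics docs; the Prometheus address and the query are placeholders, not values taken from this diff:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: not-found-percentage
  namespace: istio-system
spec:
  provider:
    type: prometheus
    address: http://prometheus.istio-system:9090   # assumed in-cluster Prometheus address
  query: |
    sum(rate(istio_requests_total{response_code="404"}[1m]))   # placeholder PromQL returning one scalar
```

A canary analysis can then reference the template by name:

```yaml
  analysis:
    metrics:
      - name: not-found-percentage
        templateRef:
          name: not-found-percentage
          namespace: istio-system
        thresholdRange:
          max: 5
        interval: 1m
```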
+## 1.0.0-rc.5 (2020-05-14)
+
+This is a release candidate for Flagger v1.0.0.
+
+The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
+
+#### Features
+
+- Add support for AWS AppMesh v1beta2 API
+  [#584](https://github.com/weaveworks/flagger/pull/584)
+- Add support for Contour v1.4 ingress class
+  [#588](https://github.com/weaveworks/flagger/pull/588)
+- Add user-specified labels/annotations to the generated Services
+  [#538](https://github.com/weaveworks/flagger/pull/538)
+
+#### Improvements
+
+- Support compatible Prometheus service
+  [#557](https://github.com/weaveworks/flagger/pull/557)
+- Update e2e tests and packages to Kubernetes v1.18
+  [#549](https://github.com/weaveworks/flagger/pull/549)
+  [#576](https://github.com/weaveworks/flagger/pull/576)
+
+#### Fixes
+
+- pkg/controller: retry canary initialization on conflict
+  [#586](https://github.com/weaveworks/flagger/pull/586)
+
+## 1.0.0-rc.4 (2020-04-03)
+
+This is a release candidate for Flagger v1.0.0.
+
+The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
+
+**Breaking change**: the minimum supported version of Kubernetes is v1.14.0.
+
+#### Features
+
+- Implement NGINX Ingress header regex matching
+  [#546](https://github.com/weaveworks/flagger/pull/546)
+
+#### Improvements
+
+- pkg/router: update ingress API to networking.k8s.io/v1beta1
+  [#534](https://github.com/weaveworks/flagger/pull/534)
+- loadtester: add return cmd output option
+  [#535](https://github.com/weaveworks/flagger/pull/535)
+- refactoring: finalizer error handling and unit testing
+  [#531](https://github.com/weaveworks/flagger/pull/535)
+  [#530](https://github.com/weaveworks/flagger/pull/530)
+- chart: add finalizers to RBAC rules for OpenShift
+  [#537](https://github.com/weaveworks/flagger/pull/537)
+- chart: allow security context to be disabled on OpenShift
+  [#543](https://github.com/weaveworks/flagger/pull/543)
+- chart: add annotations for service account
+  [#521](https://github.com/weaveworks/flagger/pull/521)
+- docs: Add Prometheus Operator tutorial
+  [#524](https://github.com/weaveworks/flagger/pull/524)
+
+#### Fixes
+
+- pkg/controller: avoid status conflicts on initialization
+  [#544](https://github.com/weaveworks/flagger/pull/544)
+- pkg/canary: fix status retry
+  [#541](https://github.com/weaveworks/flagger/pull/541)
+- loadtester: fix timeout errors
+  [#539](https://github.com/weaveworks/flagger/pull/539)
+- pkg/canary/daemonset: fix readiness check
+  [#529](https://github.com/weaveworks/flagger/pull/529)
+- logs: reduce log verbosity and fix typos
+  [#540](https://github.com/weaveworks/flagger/pull/540)
+  [#526](https://github.com/weaveworks/flagger/pull/526)
+
+## 1.0.0-rc.3 (2020-03-23)
+
+This is a release candidate for Flagger v1.0.0.
+
+The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
+
+#### Features
+
+- Add opt-in finalizers to revert Flagger's mutations on deletion of a canary
+  [#495](https://github.com/weaveworks/flagger/pull/495)
+
+#### Improvements
+
+- e2e: update end-to-end tests to Contour 1.3.0 and Gloo 1.3.14
+  [#519](https://github.com/weaveworks/flagger/pull/519)
+- build: update Kubernetes packages to 1.17.4
+  [#516](https://github.com/weaveworks/flagger/pull/516)
+
+#### Fixes
+
+- Preserve node ports on service reconciliation
+  [#514](https://github.com/weaveworks/flagger/pull/514)
+
+## 1.0.0-rc.2 (2020-03-19)
+
+This is a release candidate for Flagger v1.0.0.
+
+The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
+
+#### Features
+
+- Make mirror percentage configurable when using Istio traffic shadowing
+  [#492](https://github.com/weaveworks/flagger/pull/455)
+- Add support for running Concord tests with loadtester webhooks
+  [#507](https://github.com/weaveworks/flagger/pull/507)
+
+#### Improvements
+
+- docs: add Istio telemetry v2 upgrade guide
+  [#486](https://github.com/weaveworks/flagger/pull/486),
+  update A/B testing tutorial for Istio 1.5
+  [#502](https://github.com/weaveworks/flagger/pull/502),
+  add how to retry a failed release to FAQ
+  [#494](https://github.com/weaveworks/flagger/pull/494)
+- e2e: update end-to-end tests to
+  Istio 1.5 [#447](https://github.com/weaveworks/flagger/pull/447) and
+  NGINX Ingress 0.30
+  [#489](https://github.com/weaveworks/flagger/pull/489)
+  [#511](https://github.com/weaveworks/flagger/pull/511)
+- refactoring:
+  error handling [#480](https://github.com/weaveworks/flagger/pull/480),
+  scheduler [#484](https://github.com/weaveworks/flagger/pull/484) and
+  unit tests [#475](https://github.com/weaveworks/flagger/pull/475)
+- chart: add the log level configuration to Flagger helm chart
+  [#506](https://github.com/weaveworks/flagger/pull/506)
+
+#### Fixes
+
+- Fix nil pointer for the global notifiers [#504](https://github.com/weaveworks/flagger/pull/504)
+
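For reference, opting a canary analysis into Istio traffic shadowing with a custom mirror percentage looks roughly like this; the `mirror` and `mirrorWeight` field names follow the Flagger shadowing docs, and the values are placeholders:

```yaml
  analysis:
    interval: 1m
    iterations: 10
    threshold: 2
    mirror: true
    mirrorWeight: 100   # percentage of live traffic to shadow to the canary
```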
+## 1.0.0-rc.1 (2020-03-03)
+
+This is a release candidate for Flagger v1.0.0.
+
+The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
+
+Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
+The analysis can reference [metric templates](https://docs.flagger.app//usage/metrics#custom-metrics)
+to query Prometheus, Datadog and AWS CloudWatch.
+[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
+canary basis for Slack, MS Teams, Discord and Rocket.
+
+#### Features
+
+- Implement metric templates for Prometheus [#419](https://github.com/weaveworks/flagger/pull/419),
+  Datadog [#460](https://github.com/weaveworks/flagger/pull/460) and
+  CloudWatch [#464](https://github.com/weaveworks/flagger/pull/464)
+- Implement metric range validation [#424](https://github.com/weaveworks/flagger/pull/424)
+- Add support for targeting DaemonSets [#455](https://github.com/weaveworks/flagger/pull/455)
+- Implement canary alerts and alert providers (Slack, MS Teams, Discord and Rocket)
+  [#429](https://github.com/weaveworks/flagger/pull/429)
+
+#### Improvements
+
+- Add support for Istio multi-cluster
+  [#447](https://github.com/weaveworks/flagger/pull/447) [#450](https://github.com/weaveworks/flagger/pull/450)
+- Extend Istio traffic policy [#441](https://github.com/weaveworks/flagger/pull/441),
+  add support for header operations [#442](https://github.com/weaveworks/flagger/pull/442) and
+  set ingress destination port when multiple ports are discovered [#436](https://github.com/weaveworks/flagger/pull/436)
+- Add support for rollback gating [#449](https://github.com/weaveworks/flagger/pull/449)
+- Allow disabling ConfigMaps and Secrets tracking [#425](https://github.com/weaveworks/flagger/pull/425)
+
+#### Fixes
+
+- Fix spec changes detection [#446](https://github.com/weaveworks/flagger/pull/446)
+- Track projected ConfigMaps and Secrets [#433](https://github.com/weaveworks/flagger/pull/433)
+
+## 0.23.0 (2020-02-06)
+
+Adds support for service name configuration and rollback webhook
+
+#### Features
+
+- Implement service name override [#416](https://github.com/weaveworks/flagger/pull/416)
+- Add support for gated rollback [#420](https://github.com/weaveworks/flagger/pull/420)
+
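The gated rollback from [#420] is expressed as a webhook of type `rollback`; roughly, while the gate is closed nothing happens, and an HTTP 200 response instructs Flagger to roll the canary back. A sketch, where the URL is a hypothetical gate endpoint rather than one taken from this diff:

```yaml
  analysis:
    webhooks:
      - name: rollback-gate
        type: rollback
        url: http://flagger-loadtester.test/gate/halt   # hypothetical gate endpoint
```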
+## 0.22.0 (2020-01-16)
+
+Adds event dispatching through webhooks
+
+#### Features
+
+- Implement event dispatching webhook [#409](https://github.com/weaveworks/flagger/pull/409)
+- Add general purpose event webhook [#401](https://github.com/weaveworks/flagger/pull/401)
+
+#### Improvements
+
+- Update Contour to v1.1 and add Linkerd header [#411](https://github.com/weaveworks/flagger/pull/411)
+- Update Istio e2e to v1.4.3 [#407](https://github.com/weaveworks/flagger/pull/407)
+- Update Kubernetes packages to 1.17 [#406](https://github.com/weaveworks/flagger/pull/406)
+
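For context, the event-dispatching webhook is declared with `type: event`, after which Flagger POSTs a JSON payload for every canary event to the given address. A sketch with a hypothetical event sink:

```yaml
  analysis:
    webhooks:
      - name: send-events
        type: event
        url: http://event-recorder.default/webhook   # hypothetical event sink
```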
+## 0.21.0 (2020-01-06)
+
+Adds support for Contour ingress controller
+
+#### Features
+
+- Add support for Contour ingress controller [#397](https://github.com/weaveworks/flagger/pull/397)
+- Add support for Envoy managed by Crossover via SMI [#386](https://github.com/weaveworks/flagger/pull/386)
+- Extend canary target ref to Kubernetes Service kind [#372](https://github.com/weaveworks/flagger/pull/372)
+
+#### Improvements
+
+- Add Prometheus operator PodMonitor template to Helm chart [#399](https://github.com/weaveworks/flagger/pull/399)
+- Update e2e tests to Kubernetes v1.16 [#390](https://github.com/weaveworks/flagger/pull/390)
+
+## 0.20.4 (2019-12-03)
+
+Adds support for taking over a running deployment without disruption
+
+#### Improvements
+
+- Add initialization phase to Kubernetes router [#384](https://github.com/weaveworks/flagger/pull/384)
+- Add canary controller interface and Kubernetes deployment kind implementation [#378](https://github.com/weaveworks/flagger/pull/378)
+
+#### Fixes
+
+- Skip primary check on skip analysis [#380](https://github.com/weaveworks/flagger/pull/380)
+
+## 0.20.3 (2019-11-13)
+
+Adds wrk to load tester tools and the App Mesh gateway chart to Flagger Helm repository
+
+#### Improvements
+
+- Add wrk to load tester tools [#368](https://github.com/weaveworks/flagger/pull/368)
+- Add App Mesh gateway chart [#365](https://github.com/weaveworks/flagger/pull/365)
+
+## 0.20.2 (2019-11-07)
+
+Adds support for exposing canaries outside the cluster using App Mesh Gateway annotations
+
+#### Improvements
+
+- Expose canaries on public domains with App Mesh Gateway [#358](https://github.com/weaveworks/flagger/pull/358)
+
+#### Fixes
+
+- Use the specified replicas when scaling up the canary [#363](https://github.com/weaveworks/flagger/pull/363)
+
+## 0.20.1 (2019-11-03)
+
+Fixes promql execution and updates the load testing tools
+
+#### Improvements
+
+- Update load tester Helm tools [#8349dd1](https://github.com/weaveworks/flagger/commit/8349dd1cda59a741c7bed9a0f67c0fc0fbff4635)
+- e2e testing: update providers [#346](https://github.com/weaveworks/flagger/pull/346)
+
+#### Fixes
+
+- Fix Prometheus query escape [#353](https://github.com/weaveworks/flagger/pull/353)
+- Updating hey release link [#350](https://github.com/weaveworks/flagger/pull/350)
+
+## 0.20.0 (2019-10-21)
+
+Adds support for [A/B Testing](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring)
+and retry policies when using App Mesh
+
+#### Features
+
+- Implement App Mesh A/B testing based on HTTP headers match conditions [#340](https://github.com/weaveworks/flagger/pull/340)
+- Implement App Mesh HTTP retry policy [#338](https://github.com/weaveworks/flagger/pull/338)
+- Implement metrics server override [#342](https://github.com/weaveworks/flagger/pull/342)
+
+#### Improvements
+
+- Add the app/name label to services and primary deployment [#333](https://github.com/weaveworks/flagger/pull/333)
+- Allow setting Slack and Teams URLs with env vars [#334](https://github.com/weaveworks/flagger/pull/334)
+- Refactor Gloo integration [#344](https://github.com/weaveworks/flagger/pull/344)
+
+#### Fixes
+
+- Generate unique names for App Mesh virtual routers and routes [#336](https://github.com/weaveworks/flagger/pull/336)
+
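The A/B testing mode from [#340] routes users to the canary by HTTP header match conditions rather than by shifting traffic weights, roughly along these lines (header name and value are illustrative):

```yaml
  analysis:
    interval: 1m
    iterations: 10
    match:
      - headers:
          x-canary:
            exact: "insider"   # only requests carrying this header hit the canary
```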
+## 0.19.0 (2019-10-08)
+
+Adds support for canary and blue/green [traffic mirroring](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring)
+
+#### Features
+
+- Add traffic mirroring for Istio service mesh [#311](https://github.com/weaveworks/flagger/pull/311)
+- Implement canary service target port [#327](https://github.com/weaveworks/flagger/pull/327)
+
+#### Improvements
+
+- Allow gRPC protocol for App Mesh [#325](https://github.com/weaveworks/flagger/pull/325)
+- Enforce blue/green when using Kubernetes networking [#326](https://github.com/weaveworks/flagger/pull/326)
+
+#### Fixes
+
+- Fix port discovery diff [#324](https://github.com/weaveworks/flagger/pull/324)
+- Helm chart: Enable Prometheus scraping of Flagger metrics
+  [#2141d88](https://github.com/weaveworks/flagger/commit/2141d88ce1cc6be220dab34171c215a334ecde24)
+
+## 0.18.6 (2019-10-03)
+
+Adds support for App Mesh conformance tests and latency metric checks
+
+#### Improvements
+
+- Add support for acceptance testing when using App Mesh [#322](https://github.com/weaveworks/flagger/pull/322)
+- Add Kustomize installer for App Mesh [#310](https://github.com/weaveworks/flagger/pull/310)
+- Update Linkerd to v2.5.0 and Prometheus to v2.12.0 [#323](https://github.com/weaveworks/flagger/pull/323)
+
+#### Fixes
+
+- Fix slack/teams notification fields mapping [#318](https://github.com/weaveworks/flagger/pull/318)
+
+## 0.18.5 (2019-10-02)
+
+Adds support for [confirm-promotion](https://docs.flagger.app/how-it-works#webhooks)
+webhooks and blue/green deployments when using a service mesh
+
+#### Features
+
+- Implement confirm-promotion hook [#307](https://github.com/weaveworks/flagger/pull/307)
+- Implement B/G for service mesh providers [#305](https://github.com/weaveworks/flagger/pull/305)
+
+#### Improvements
+
+- Canary promotion improvements to avoid dropping in-flight requests [#310](https://github.com/weaveworks/flagger/pull/310)
+- Update end-to-end tests to Kubernetes v1.15.3 and Istio 1.3.0 [#306](https://github.com/weaveworks/flagger/pull/306)
+
+#### Fixes
+
+- Skip primary check for App Mesh [#315](https://github.com/weaveworks/flagger/pull/315)
+
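The confirm-promotion hook from [#307] pauses the analysis right before the promotion step until the gate approves. A sketch in the style of the manual-gating docs; the URL is hypothetical, though the loadtester ships a `/gate/approve`-style endpoint for this purpose:

```yaml
  analysis:
    webhooks:
      - name: promotion-gate
        type: confirm-promotion
        url: http://flagger-loadtester.test/gate/approve   # hypothetical approval gate
```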
+## 0.18.4 (2019-09-08)
+
+Adds support for NGINX custom annotations and Helm v3 acceptance testing
+
+#### Features
+
+- Add annotations prefix for NGINX ingresses [#293](https://github.com/weaveworks/flagger/pull/293)
+- Add wide columns in CRD [#289](https://github.com/weaveworks/flagger/pull/289)
+- loadtester: implement Helm v3 test command [#296](https://github.com/weaveworks/flagger/pull/296)
+- loadtester: add gRPC health check to load tester image [#295](https://github.com/weaveworks/flagger/pull/295)
+
+#### Fixes
+
+- loadtester: fix tests error logging [#286](https://github.com/weaveworks/flagger/pull/286)
+
+## 0.18.3 (2019-08-22)
+
+Adds support for tillerless helm tests and protobuf health checking
+
+#### Features
+
+- loadtester: add support for tillerless helm [#280](https://github.com/weaveworks/flagger/pull/280)
+- loadtester: add support for protobuf health checking [#280](https://github.com/weaveworks/flagger/pull/280)
+
+#### Improvements
+
+- Set HTTP listeners for AppMesh virtual routers [#272](https://github.com/weaveworks/flagger/pull/272)
+
+#### Fixes
+
+- Add missing fields to CRD validation spec [#271](https://github.com/weaveworks/flagger/pull/271)
+- Fix App Mesh backends validation in CRD [#281](https://github.com/weaveworks/flagger/pull/281)
+
 ## 0.18.2 (2019-08-05)
 
 Fixes multi-port support for Istio
@@ -38,8 +481,10 @@ Adds support for [manual gating](https://docs.flagger.app/how-it-works#manual-ga
 
 #### Breaking changes
 
-- Due to the status sub-resource changes in [#240](https://github.com/weaveworks/flagger/pull/240), when upgrading Flagger the canaries status phase will be reset to `Initialized`
-- Upgrading Flagger with Helm will fail due to Helm poor support of CRDs, see [workaround](https://github.com/weaveworks/flagger/issues/223)
+- Due to the status sub-resource changes in [#240](https://github.com/weaveworks/flagger/pull/240),
+  when upgrading Flagger the canaries status phase will be reset to `Initialized`
+- Upgrading Flagger with Helm will fail due to Helm poor support of CRDs,
+  see [workaround](https://github.com/weaveworks/flagger/issues/223)
 
 ## 0.17.0 (2019-07-08)
 
@@ -53,12 +498,14 @@ Adds support for Linkerd (SMI Traffic Split API), MS Teams notifications and HA
 
 #### Improvements
 
-- Add [Kustomize](https://docs.flagger.app/install/flagger-install-on-kubernetes#install-flagger-with-kustomize) installer [#232](https://github.com/weaveworks/flagger/pull/232)
+- Add [Kustomize](https://docs.flagger.app/install/flagger-install-on-kubernetes#install-flagger-with-kustomize)
+  installer [#232](https://github.com/weaveworks/flagger/pull/232)
 - Add Pod Security Policy to Helm chart [#234](https://github.com/weaveworks/flagger/pull/234)
 
 ## 0.16.0 (2019-06-23)
 
-Adds support for running [Blue/Green deployments](https://docs.flagger.app/usage/blue-green) without a service mesh or ingress controller
+Adds support for running [Blue/Green deployments](https://docs.flagger.app/usage/blue-green)
+without a service mesh or ingress controller
 
 #### Features
 
@@ -90,7 +537,8 @@ Adds support for customising the Istio [traffic policy](https://docs.flagger.app
 
 ## 0.14.1 (2019-06-05)
 
-Adds support for running [acceptance/integration tests](https://docs.flagger.app/how-it-works#integration-testing) with Helm test or Bash Bats using pre-rollout hooks
+Adds support for running [acceptance/integration tests](https://docs.flagger.app/how-it-works#integration-testing)
+with Helm test or Bash Bats using pre-rollout hooks
 
 #### Features
 
@@ -137,7 +585,8 @@ Adds support for [NGINX](https://docs.flagger.app/usage/nginx-progressive-delive
 #### Features
 
 - Add support for nginx ingress controller (weighted traffic and A/B testing) [#170](https://github.com/weaveworks/flagger/pull/170)
-- Add Prometheus add-on to Flagger Helm chart for App Mesh and NGINX [79b3370](https://github.com/weaveworks/flagger/pull/170/commits/79b337089294a92961bc8446fd185b38c50a32df)
+- Add Prometheus add-on to Flagger Helm chart for App Mesh and
+  NGINX [79b3370](https://github.com/weaveworks/flagger/pull/170/commits/79b337089294a92961bc8446fd185b38c50a32df)
 
 #### Fixes
 
@@ -383,4 +832,4 @@ Initial semver release
 - Add OpenAPI v3 schema validation to Canary CRD
 - Use CRD status for canary state persistence
 - Add Helm charts for Flagger and Grafana
-- Add canary analysis Grafana dashboard
+- Add canary analysis Grafana dashboard
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -17,12 +17,12 @@ contribution.
 ## Chat
 
 The project uses Slack: To join the conversation, simply join the
-[Weave community](https://slack.weave.works/) Slack workspace.
+[Weave community](https://slack.weave.works/) Slack workspace #flagger channel.
 
 ## Getting Started
 
 - Fork the repository on GitHub
-- If you want to contribute as a developer, continue reading this document for further instructions
+- If you want to contribute as a developer, read [Flagger Development Guide](https://docs.flagger.app/dev/dev-guide)
 - If you have questions, concerns, get stuck or need a hand, let us know
   on the Slack channel. We are happy to help and look forward to having
   you part of the team. No matter in which capacity.
@@ -59,7 +59,7 @@ get asked to resubmit the PR or divide the changes into more than one PR.
 
 ### Format of the Commit Message
 
-For Flux we prefer the following rules for good commit messages:
+For Flagger we prefer the following rules for good commit messages:
 
 - Limit the subject to 50 characters and write as the continuation
   of the sentence "If applied, this commit will ..."
@@ -69,4 +69,4 @@ For Flagger we prefer the following rules for good commit messages:
 The [following article](https://chris.beams.io/posts/git-commit/#seven-rules)
 has some more helpful advice on documenting your work.
 
-This doc is adapted from the [Weaveworks Flux](https://github.com/weaveworks/flux/blob/master/CONTRIBUTING.md)
+This doc is adapted from [FluxCD](https://github.com/fluxcd/flux/blob/master/CONTRIBUTING.md).
Dockerfile | 15
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,16 +1,9 @@
-FROM alpine:3.9
+FROM alpine:3.12
 
-RUN addgroup -S flagger \
-    && adduser -S -g flagger flagger \
-    && apk --no-cache add ca-certificates
+RUN apk --no-cache add ca-certificates
 
 WORKDIR /home/flagger
+USER nobody
 
-COPY /bin/flagger .
-
-RUN chown -R flagger:flagger ./
-
-USER flagger
+COPY --chown=nobody:nobody /bin/flagger .
 
 ENTRYPOINT ["./flagger"]
--- a/Dockerfile.loadtester
+++ b/Dockerfile.loadtester
@@ -1,27 +1,69 @@
-FROM bats/bats:v1.1.0
+FROM alpine:3.11 as build
 
-RUN addgroup -S app \
-    && adduser -S -g app app \
-    && apk --no-cache add ca-certificates curl jq
+RUN apk --no-cache add alpine-sdk perl curl
+
+RUN curl -sSLo hey "https://storage.googleapis.com/hey-release/hey_linux_amd64" && \
+    chmod +x hey && mv hey /usr/local/bin/hey
+
+RUN HELM2_VERSION=2.16.8 && \
+    curl -sSL "https://get.helm.sh/helm-v${HELM2_VERSION}-linux-amd64.tar.gz" | tar xvz && \
+    chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helm && \
+    chmod +x linux-amd64/tiller && mv linux-amd64/tiller /usr/local/bin/tiller
+
+RUN HELM3_VERSION=3.2.3 && \
+    curl -sSL "https://get.helm.sh/helm-v${HELM3_VERSION}-linux-amd64.tar.gz" | tar xvz && \
+    chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helmv3
+
+RUN GRPC_HEALTH_PROBE_VERSION=v0.3.1 && \
+    wget -qO /usr/local/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && \
+    chmod +x /usr/local/bin/grpc_health_probe
+
+RUN GHZ_VERSION=0.39.0 && \
+    curl -sSL "https://github.com/bojand/ghz/releases/download/v${GHZ_VERSION}/ghz_${GHZ_VERSION}_Linux_x86_64.tar.gz" | tar xz -C /tmp && \
+    mv /tmp/ghz /usr/local/bin && chmod +x /usr/local/bin/ghz
+
+RUN HELM_TILLER_VERSION=0.9.3 && \
+    curl -sSL "https://github.com/rimusz/helm-tiller/archive/v${HELM_TILLER_VERSION}.tar.gz" | tar xz -C /tmp && \
+    mv /tmp/helm-tiller-${HELM_TILLER_VERSION} /tmp/helm-tiller
+
+RUN WRK_VERSION=4.0.2 && \
+    cd /tmp && git clone -b ${WRK_VERSION} https://github.com/wg/wrk
+RUN cd /tmp/wrk && make
+
+FROM bash:5.0
+
+RUN addgroup -S app && \
+    adduser -S -g app app && \
+    apk --no-cache add ca-certificates curl jq libgcc
 
 WORKDIR /home/app
 
-RUN curl -sSLo hey "https://storage.googleapis.com/jblabs/dist/hey_linux_v0.1.2" && \
-    chmod +x hey && mv hey /usr/local/bin/hey
+COPY --from=bats/bats:v1.1.0 /opt/bats/ /opt/bats/
+RUN ln -s /opt/bats/bin/bats /usr/local/bin/
 
-RUN curl -sSL "https://get.helm.sh/helm-v2.12.3-linux-amd64.tar.gz" | tar xvz && \
-    chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helm && \
-    rm -rf linux-amd64
+COPY --from=build /usr/local/bin/hey /usr/local/bin/
+COPY --from=build /tmp/wrk/wrk /usr/local/bin/
+COPY --from=build /usr/local/bin/helm /usr/local/bin/
+COPY --from=build /usr/local/bin/tiller /usr/local/bin/
+COPY --from=build /usr/local/bin/ghz /usr/local/bin/
+COPY --from=build /usr/local/bin/helmv3 /usr/local/bin/
+COPY --from=build /usr/local/bin/grpc_health_probe /usr/local/bin/
+COPY --from=build /tmp/helm-tiller /tmp/helm-tiller
 
-RUN curl -sSL "https://github.com/bojand/ghz/releases/download/v0.39.0/ghz_0.39.0_Linux_x86_64.tar.gz" | tar xz -C /tmp && \
-    mv /tmp/ghz /usr/local/bin && chmod +x /usr/local/bin/ghz && rm -rf /tmp/ghz-web
-
-RUN ls /tmp
 ADD https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/health/v1/health.proto /tmp/ghz/health.proto
 
 COPY ./bin/loadtester .
 
 RUN chown -R app:app ./
+RUN chown -R app:app /tmp/ghz
 
 USER app
 
+# test load generator tools
 RUN hey -n 1 -c 1 https://flagger.app > /dev/null && echo $? | grep 0
+RUN wrk -d 1s -c 1 -t 1 https://flagger.app > /dev/null && echo $? | grep 0
+
+# install Helm v2 plugins
+RUN helm init --client-only && helm plugin install /tmp/helm-tiller
 
 ENTRYPOINT ["./loadtester"]
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3,3 +3,4 @@ https://weave-community.slack.com/messages/flagger/ (obtain an invitation
 at https://slack.weave.works/).
 
 Stefan Prodan, Weaveworks <stefan@weave.works> (Slack: @stefan Twitter: @stefanprodan)
+Takeshi Yoneda, DMM.com <cz.rk.t0415y.g@gmail.com> (Slack: @mathetake Twitter: @mathetake)
Makefile | 107
--- a/Makefile
+++ b/Makefile
@@ -1,48 +1,11 @@
 TAG?=latest
 VERSION?=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"')
-VERSION_MINOR:=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"' | rev | cut -d'.' -f2- | rev)
-PATCH:=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"' | awk -F. '{print $$NF}')
-SOURCE_DIRS = cmd pkg/apis pkg/controller pkg/server pkg/canary pkg/metrics pkg/router pkg/notifier
 LT_VERSION?=$(shell grep 'VERSION' cmd/loadtester/main.go | awk '{ print $$4 }' | tr -d '"' | head -n1)
 TS=$(shell date +%Y-%m-%d_%H-%M-%S)
 
-run:
-	GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=istio -namespace=test \
-		-metrics-server=https://prometheus.istio.weavedx.com \
-		-enable-leader-election=true
-
-run2:
-	GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=istio -namespace=test \
-		-metrics-server=https://prometheus.istio.weavedx.com \
-		-enable-leader-election=true \
-		-port=9092
-
-run-appmesh:
-	GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=appmesh \
-		-metrics-server=http://acfc235624ca911e9a94c02c4171f346-1585187926.us-west-2.elb.amazonaws.com:9090
-
-run-nginx:
-	GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=nginx -namespace=nginx \
-		-metrics-server=http://prometheus-weave.istio.weavedx.com
-
-run-smi:
-	GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=smi:istio -namespace=smi \
-		-metrics-server=https://prometheus.istio.weavedx.com
-
-run-gloo:
-	GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=gloo -namespace=gloo \
-		-metrics-server=https://prometheus.istio.weavedx.com
-
-run-nop:
-	GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=none -namespace=bg \
-		-metrics-server=https://prometheus.istio.weavedx.com
-
-run-linkerd:
-	GO111MODULE=on go run cmd/flagger/* -kubeconfig=$$HOME/.kube/config -log-level=info -mesh-provider=smi:linkerd -namespace=demo \
-		-metrics-server=https://linkerd-prometheus.istio.weavedx.com
-
 build:
-	GIT_COMMIT=$$(git rev-list -1 HEAD) && GO111MODULE=on CGO_ENABLED=0 GOOS=linux go build -ldflags "-s -w -X github.com/weaveworks/flagger/pkg/version.REVISION=$${GIT_COMMIT}" -a -installsuffix cgo -o ./bin/flagger ./cmd/flagger/*
+	GIT_COMMIT=$$(git rev-list -1 HEAD) && CGO_ENABLED=0 GOOS=linux go build \
+		-ldflags "-s -w -X github.com/weaveworks/flagger/pkg/version.REVISION=$${GIT_COMMIT}" \
+		-a -installsuffix cgo -o ./bin/flagger ./cmd/flagger/*
 	docker build -t weaveworks/flagger:$(TAG) . -f Dockerfile
 
 push:
@@ -50,10 +13,15 @@ push:
 	docker push weaveworks/flagger:$(VERSION)
 
 fmt:
-	gofmt -l -s -w $(SOURCE_DIRS)
+	gofmt -l -s -w ./
+	goimports -l -w ./
 
 test-fmt:
-	gofmt -l -s $(SOURCE_DIRS) | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
+	gofmt -l -s ./ | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
+	goimports -l ./ | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
 
 codegen:
 	./hack/update-codegen.sh
 
 test-codegen:
 	./hack/verify-codegen.sh
@@ -61,15 +29,9 @@ test-codegen:
 test: test-fmt test-codegen
 	go test ./...
 
-helm-package:
-	cd charts/ && helm package ./*
-	mv charts/*.tgz bin/
-	curl -s https://raw.githubusercontent.com/weaveworks/flagger/gh-pages/index.yaml > ./bin/index.yaml
-	helm repo index bin --url https://flagger.app --merge ./bin/index.yaml
-
-helm-up:
-	helm upgrade --install flagger ./charts/flagger --namespace=istio-system --set crd.create=false
-	helm upgrade --install flagger-grafana ./charts/grafana --namespace=istio-system
+crd:
+	cat artifacts/flagger/crd.yaml > charts/flagger/crds/crd.yaml
+	cat artifacts/flagger/crd.yaml > kustomize/base/flagger/crd.yaml
 
 version-set:
 	@next="$(TAG)" && \
@@ -82,46 +44,17 @@ version-set:
 	sed -i '' "s/newTag: $$current/newTag: $$next/g" kustomize/base/flagger/kustomization.yaml && \
 	echo "Version $$next set in code, deployment, chart and kustomize"
 
-version-up:
-	@next="$(VERSION_MINOR).$$(($(PATCH) + 1))" && \
-	current="$(VERSION)" && \
-	sed -i '' "s/$$current/$$next/g" pkg/version/version.go && \
-	sed -i '' "s/flagger:$$current/flagger:$$next/g" artifacts/flagger/deployment.yaml && \
-	sed -i '' "s/tag: $$current/tag: $$next/g" charts/flagger/values.yaml && \
-	sed -i '' "s/appVersion: $$current/appVersion: $$next/g" charts/flagger/Chart.yaml && \
-	echo "Version $$next set in code, deployment and chart"
-
-dev-up: version-up
-	@echo "Starting build/push/deploy pipeline for $(VERSION)"
-	docker build -t quay.io/stefanprodan/flagger:$(VERSION) . -f Dockerfile
-	docker push quay.io/stefanprodan/flagger:$(VERSION)
-	kubectl apply -f ./artifacts/flagger/crd.yaml
-	helm upgrade -i flagger ./charts/flagger --namespace=istio-system --set crd.create=false
-
 release:
-	git tag $(VERSION)
-	git push origin $(VERSION)
+	git tag "v$(VERSION)"
+	git push origin "v$(VERSION)"
 
-release-set: fmt version-set helm-package
-	git add .
-	git commit -m "Release $(VERSION)"
-	git push origin master
-	git tag $(VERSION)
-	git push origin $(VERSION)
-
-reset-test:
-	kubectl delete -f ./artifacts/namespaces
-	kubectl apply -f ./artifacts/namespaces
-	kubectl apply -f ./artifacts/canaries
-
-loadtester-run: loadtester-build
-	docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
-	docker rm -f tester || true
-	docker run -dp 8888:9090 --name tester weaveworks/flagger-loadtester:$(LT_VERSION)
+release-notes:
+	cd /tmp && GH_REL_URL="https://github.com/buchanae/github-release-notes/releases/download/0.2.0/github-release-notes-linux-amd64-0.2.0.tar.gz" && \
+	curl -sSL $${GH_REL_URL} | tar xz && sudo mv github-release-notes /usr/local/bin/
 
 loadtester-build:
-	GO111MODULE=on CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o ./bin/loadtester ./cmd/loadtester/*
+	CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o ./bin/loadtester ./cmd/loadtester/*
 	docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
 
 loadtester-push:
 	docker build -t weaveworks/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester
 	docker push weaveworks/flagger-loadtester:$(LT_VERSION)
README.md
@@ -6,54 +6,57 @@
[license](https://github.com/weaveworks/flagger/blob/master/LICENSE)
[release](https://github.com/weaveworks/flagger/releases)

Flagger is a Kubernetes operator that automates the promotion of canary deployments
using Istio, Linkerd, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
The canary analysis can be extended with webhooks for running acceptance tests,
load tests or any other custom validation.

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance
indicators like HTTP request success rate, average request duration and pod health.
Based on the analysis of these KPIs a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams.
Flagger is a progressive delivery tool that automates the release process for applications running on Kubernetes.
It reduces the risk of introducing a new software version in production
by gradually shifting traffic to the new version while measuring metrics and running conformance tests.
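That control loop is easier to picture in code. The sketch below only illustrates the loop's shape and is not Flagger's actual scheduler: `measureSuccessRate` is a hypothetical stand-in for a Prometheus query, and the `stepWeight`/`maxWeight`/`threshold` names mirror the analysis fields shown in the Canary example later in this README.

```go
package main

import "fmt"

// measureSuccessRate is a hypothetical stand-in for querying Prometheus
// for the canary's HTTP request success rate (percentage, 0-100).
func measureSuccessRate() float64 { return 99.5 }

func main() {
	const (
		stepWeight  = 5  // traffic percentage added per interval
		maxWeight   = 50 // stop shifting once the canary receives this much
		minSuccess  = 99 // success-rate threshold for each check
		maxFailures = 10 // failed checks tolerated before rollback
	)
	failures := 0
	for weight := stepWeight; weight <= maxWeight; weight += stepWeight {
		fmt.Printf("routing %d%% of traffic to the canary\n", weight)
		if measureSuccessRate() < minSuccess {
			failures++
			if failures >= maxFailures {
				fmt.Println("rollback: too many failed metric checks")
				return
			}
		}
	}
	fmt.Println("promotion: canary passed analysis")
}
```

In Flagger itself the weight changes are applied to the service mesh or ingress routes, and each check runs on the schedule set by the analysis `interval`.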

[Flagger overview diagram]

## Documentation
Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring)
using a service mesh (App Mesh, Istio, Linkerd) or an ingress controller (Contour, Gloo, NGINX, Skipper) for traffic routing.
For release analysis, Flagger can query Prometheus, Datadog or CloudWatch
and for alerting it uses Slack, MS Teams, Discord and Rocket.

Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app)
### Documentation

Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app).

* Install
  * [Flagger install on Kubernetes](https://docs.flagger.app/install/flagger-install-on-kubernetes)
  * [Flagger install on GKE Istio](https://docs.flagger.app/install/flagger-install-on-google-cloud)
  * [Flagger install on EKS App Mesh](https://docs.flagger.app/install/flagger-install-on-eks-appmesh)
  * [Flagger install with SuperGloo](https://docs.flagger.app/install/flagger-install-with-supergloo)
* How it works
  * [Canary custom resource](https://docs.flagger.app/how-it-works#canary-custom-resource)
  * [Routing](https://docs.flagger.app/how-it-works#istio-routing)
  * [Canary deployment stages](https://docs.flagger.app/how-it-works#canary-deployment)
  * [Canary analysis](https://docs.flagger.app/how-it-works#canary-analysis)
  * [HTTP metrics](https://docs.flagger.app/how-it-works#http-metrics)
  * [Custom metrics](https://docs.flagger.app/how-it-works#custom-metrics)
  * [Webhooks](https://docs.flagger.app/how-it-works#webhooks)
  * [Load testing](https://docs.flagger.app/how-it-works#load-testing)
  * [Manual gating](https://docs.flagger.app/how-it-works#manual-gating)
  * [FAQ](https://docs.flagger.app/faq)
* Usage
  * [Istio canary deployments](https://docs.flagger.app/usage/progressive-delivery)
  * [Istio A/B testing](https://docs.flagger.app/usage/ab-testing)
  * [Linkerd canary deployments](https://docs.flagger.app/usage/linkerd-progressive-delivery)
  * [App Mesh canary deployments](https://docs.flagger.app/usage/appmesh-progressive-delivery)
  * [NGINX ingress controller canary deployments](https://docs.flagger.app/usage/nginx-progressive-delivery)
  * [Gloo ingress controller canary deployments](https://docs.flagger.app/usage/gloo-progressive-delivery)
  * [Blue/Green deployments](https://docs.flagger.app/usage/blue-green)
  * [Monitoring](https://docs.flagger.app/usage/monitoring)
  * [How it works](https://docs.flagger.app/usage/how-it-works)
  * [Deployment strategies](https://docs.flagger.app/usage/deployment-strategies)
  * [Metrics analysis](https://docs.flagger.app/usage/metrics)
  * [Webhooks](https://docs.flagger.app/usage/webhooks)
  * [Alerting](https://docs.flagger.app/usage/alerting)
  * [Monitoring](https://docs.flagger.app/usage/monitoring)
* Tutorials
  * [Canary deployments with Helm charts and Weave Flux](https://docs.flagger.app/tutorials/canary-helm-gitops)
  * [App Mesh](https://docs.flagger.app/tutorials/appmesh-progressive-delivery)
  * [Istio](https://docs.flagger.app/tutorials/istio-progressive-delivery)
  * [Linkerd](https://docs.flagger.app/tutorials/linkerd-progressive-delivery)
  * [Contour](https://docs.flagger.app/tutorials/contour-progressive-delivery)
  * [Gloo](https://docs.flagger.app/tutorials/gloo-progressive-delivery)
  * [NGINX Ingress](https://docs.flagger.app/tutorials/nginx-progressive-delivery)
  * [Skipper](https://docs.flagger.app/tutorials/skipper-progressive-delivery)
  * [Kubernetes Blue/Green](https://docs.flagger.app/tutorials/kubernetes-blue-green)

## Canary CRD
### Who is using Flagger

List of organizations using Flagger:

* [Chick-fil-A](https://www.chick-fil-a.com)
* [Capra Consulting](https://www.capraconsulting.no)
* [DMM.com](https://dmm-corp.com)
* [MediaMarktSaturn](https://www.mediamarktsaturn.com)
* [Weaveworks](https://weave.works)
* [Jumia Group](https://group.jumia.com)

If you are using Flagger, please submit a PR to add your organization to the list!

### Canary CRD

Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services and Istio or App Mesh virtual services).
then creates a series of objects (Kubernetes deployments, ClusterIP services, service mesh or ingress routes).
These objects expose the application on the mesh and drive the canary analysis and promotion.

Flagger keeps track of ConfigMaps and Secrets referenced by a Kubernetes Deployment and triggers a canary analysis if any of those objects change.
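One way to picture that tracking is as a fingerprint over the referenced objects: when the hash of a ConfigMap's or Secret's data changes, a new analysis is due. The Go sketch below is a minimal illustration under that assumption; `hashData` and `configChanged` are hypothetical helpers, not Flagger's implementation.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// hashData fingerprints a ConfigMap/Secret-style data map in a
// deterministic way by hashing its keys and values in sorted order.
func hashData(data map[string]string) string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(data[k]))
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

// configChanged is an assumed helper: compare the last seen hash with
// the current one to decide whether a new canary analysis is needed.
func configChanged(lastSeen string, data map[string]string) bool {
	return lastSeen != hashData(data)
}

func main() {
	cm := map[string]string{"color": "blue"}
	last := hashData(cm)
	cm["color"] = "green" // an operator edits the ConfigMap
	fmt.Println("trigger analysis:", configChanged(last, cm))
}
```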
@@ -62,15 +65,14 @@ When promoting a workload in production, both code (container images) and config
For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:

```yaml
apiVersion: flagger.app/v1alpha3
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # service mesh provider (optional)
  # can be: kubernetes, istio, linkerd, appmesh, nginx, gloo, supergloo
  # use the kubernetes provider for Blue/Green style deployments
  # can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, contour, gloo, supergloo
  provider: istio
  # deployment reference
  targetRef:
@@ -86,14 +88,17 @@ spec:
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    # container port
    # service name (defaults to targetRef.name)
    name: podinfo
    # ClusterIP port number
    port: 9898
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    # Istio virtual service host names (optional)
    hosts:
    - podinfo.example.com
    # container port name or number (optional)
    targetPort: 9898
    # port name can be http or grpc (default http)
    portName: http
    # add all the other container ports
    # to the ClusterIP services (default false)
    portDiscovery: true
    # HTTP match conditions (optional)
    match:
    - uri:
@@ -101,16 +106,12 @@ spec:
    # HTTP rewrite (optional)
    rewrite:
      uri: /
    # cross-origin resource sharing policy (optional)
    corsPolicy:
      allowOrigin:
      - example.com
    # request timeout (optional)
    timeout: 5s
  # promote the canary without analysing it (default false)
  skipAnalysis: false
  # define the canary analysis timing and KPIs
  canaryAnalysis:
  analysis:
    # schedule interval (default 60s)
    interval: 1m
    # max number of failed metric checks before rollback
@@ -121,70 +122,113 @@ spec:
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # Istio Prometheus checks
    # validation (optional)
    metrics:
    # builtin checks
    - name: request-success-rate
      # builtin Prometheus check
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      # builtin Prometheus check
      # maximum req duration P99
      # milliseconds
      threshold: 500
      thresholdRange:
        max: 500
      interval: 30s
    # custom check
    - name: "kafka lag"
      threshold: 100
      query: |
        avg_over_time(
          kafka_consumergroup_lag{
            consumergroup=~"podinfo-consumer-.*",
            topic="podinfo"
          }[1m]
        )
    # external checks (optional)
    - name: "database connections"
      # custom metric check
      templateRef:
        name: db-connections
      thresholdRange:
        min: 2
        max: 100
      interval: 1m
    # testing (optional)
    webhooks:
    - name: load-test
    - name: "conformance test"
      type: pre-rollout
      url: http://flagger-helmtester.test/
      timeout: 5m
      metadata:
        type: "helmv3"
        cmd: "test run podinfo -n test"
    - name: "load test"
      type: rollout
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
    # alerting (optional)
    alerts:
    - name: "dev team Slack"
      severity: error
      providerRef:
        name: dev-slack
        namespace: flagger
    - name: "qa team Discord"
      severity: warn
      providerRef:
        name: qa-discord
    - name: "on-call MS Teams"
      severity: info
      providerRef:
        name: on-call-msteams
```
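Note how the newer `thresholdRange` entries replace the scalar `threshold`: a metric check passes only while the observed value stays inside the declared bounds. A rough Go illustration of that acceptance rule (the `ThresholdRange` type here is an assumption made for the example, not Flagger's API package):

```go
package main

import "fmt"

// ThresholdRange mirrors the min/max bounds from the analysis spec.
// Either bound may be omitted, hence the pointer fields.
type ThresholdRange struct {
	Min *float64
	Max *float64
}

// Accept reports whether an observed metric value satisfies the range.
func (t ThresholdRange) Accept(value float64) bool {
	if t.Min != nil && value < *t.Min {
		return false
	}
	if t.Max != nil && value > *t.Max {
		return false
	}
	return true
}

func main() {
	min := 99.0
	successRate := ThresholdRange{Min: &min} // request-success-rate: min 99
	fmt.Println(successRate.Accept(99.4))    // true: the canary keeps progressing
	fmt.Println(successRate.Accept(97.0))    // false: counts as a failed check
}
```

Leaving a bound nil makes the range one-sided, which is how `min: 99` for the success rate and `max: 500` for the duration behave in the example above.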

For more details on how the canary analysis and promotion works please [read the docs](https://docs.flagger.app/how-it-works).
For more details on how the canary analysis and promotion works please [read the docs](https://docs.flagger.app/usage/how-it-works).

## Features
### Features

| Feature                                     | Istio              | Linkerd            | App Mesh           | NGINX              | Gloo               |
| ------------------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| Canary deployments (weighted traffic)       | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies filters)   | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric)      | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request duration check (L7 metric)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Custom promql checks                        | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Traffic policy, CORS, retries and timeouts  | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
**Service Mesh**

## Roadmap
| Feature                                     | App Mesh           | Istio              | Linkerd            | Kubernetes CNI     |
| ------------------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
| Canary deployments (weighted traffic)       | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| A/B testing (headers and cookies routing)   | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch)     | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Blue/Green deployments (traffic mirroring)  | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume)        | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric)      | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Request duration check (L7 metric)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Custom metric checks                        | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |

* Integrate with other ingress controllers like Contour, HAProxy, ALB
**Ingress**

| Feature                                     | Contour            | Gloo               | NGINX              | Skipper            |
| ------------------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
| Canary deployments (weighted traffic)       | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies routing)   | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch)     | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Webhooks (acceptance/load testing)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume)        | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric)      | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: |
| Request duration check (L7 metric)          | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: |
| Custom metric checks                        | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |

### Roadmap

* Add support for Kubernetes [Ingress v2](https://github.com/kubernetes-sigs/service-apis)
* Integrate with other service meshes like Consul Connect and ingress controllers like HAProxy, ALB
* Integrate with other metrics providers like InfluxDB, Stackdriver, SignalFX
* Add support for comparing the canary metrics to the primary ones and do the validation based on the deviation between the two

## Contributing
### Contributing

Flagger is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
To start contributing please read the [development guide](https://docs.flagger.app/dev/dev-guide).

When submitting bug reports please include as many details as possible:

* which Flagger version
* which Flagger CRD version
* which Kubernetes/Istio version
* what configuration (canary, virtual service and workloads definitions)
* what happened (Flagger, Istio Pilot and Proxy logs)
* which Kubernetes version
* what configuration (canary, ingress and workloads definitions)
* what happened (Flagger and Proxy logs)

## Getting Help
### Getting Help

If you have any questions about Flagger and progressive delivery:
@@ -1,62 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: abtest
  namespace: test
spec:
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: abtest
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: abtest
  service:
    # container port
    port: 9898
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    - mesh
    # Istio virtual service host names (optional)
    hosts:
    - abtest.istio.weavedx.com
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed metric checks before rollback
    threshold: 10
    # total number of iterations
    iterations: 10
    # canary match condition
    match:
    - headers:
        user-agent:
          regex: "^(?!.*Chrome)(?=.*\bSafari\b).*$"
    - headers:
        cookie:
          regex: "^(.*?;)?(type=insider)(;.*)?$"
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s
    # external checks (optional)
    webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test:9898/"
@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abtest
  namespace: test
  labels:
    app: abtest
spec:
  minReadySeconds: 5
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 60
  strategy:
    rollingUpdate:
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: abtest
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
      labels:
        app: abtest
    spec:
      containers:
      - name: podinfod
        image: quay.io/stefanprodan/podinfo:1.7.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9898
          name: http
          protocol: TCP
        command:
        - ./podinfo
        - --port=9898
        - --level=info
        - --random-delay=false
        - --random-error=false
        env:
        - name: PODINFO_UI_COLOR
          value: blue
        livenessProbe:
          exec:
            command:
            - podcli
            - check
            - http
            - localhost:9898/healthz
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - podcli
            - check
            - http
            - localhost:9898/readyz
          initialDelaySeconds: 5
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 2000m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 64Mi
@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: abtest
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: abtest
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      # scale up if usage is above
      # 99% of the requested CPU (100m)
      targetAverageUtilization: 99
@@ -1,50 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    # container port
    port: 9898
    # App Mesh reference
    meshName: global
  # define the canary analysis timing and KPIs
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed metric checks before rollback
    threshold: 10
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # App Mesh Prometheus checks
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    # external checks (optional)
    webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
@@ -1,65 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
  labels:
    app: podinfo
spec:
  minReadySeconds: 5
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 60
  strategy:
    rollingUpdate:
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
      labels:
        app: podinfo
    spec:
      containers:
      - name: podinfod
        image: quay.io/stefanprodan/podinfo:1.7.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9898
          name: http
          protocol: TCP
        command:
        - ./podinfo
        - --port=9898
        - --level=info
        env:
        - name: PODINFO_UI_COLOR
          value: blue
        livenessProbe:
          exec:
            command:
            - podcli
            - check
            - http
            - localhost:9898/healthz
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - podcli
            - check
            - http
            - localhost:9898/readyz
          initialDelaySeconds: 5
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 2000m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 64Mi
@@ -1,6 +0,0 @@
apiVersion: appmesh.k8s.aws/v1beta1
kind: Mesh
metadata:
  name: global
spec:
  serviceDiscoveryType: dns
@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      # scale up if usage is above
      # 99% of the requested CPU (100m)
      targetAverageUtilization: 99
@@ -1,177 +0,0 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-config
  namespace: test
  labels:
    app: ingress
data:
  envoy.yaml: |
    static_resources:
      listeners:
        - address:
            socket_address:
              address: 0.0.0.0
              port_value: 80
          filter_chains:
            - filters:
                - name: envoy.http_connection_manager
                  config:
                    access_log:
                      - name: envoy.file_access_log
                        config:
                          path: /dev/stdout
                    codec_type: auto
                    stat_prefix: ingress_http
                    http_filters:
                      - name: envoy.router
                        config: {}
                    route_config:
                      name: local_route
                      virtual_hosts:
                        - name: local_service
                          domains: ["*"]
                          routes:
                            - match:
                                prefix: "/"
                              route:
                                cluster: podinfo
                                host_rewrite: podinfo.test
                                timeout: 15s
                                retry_policy:
                                  retry_on: "gateway-error,connect-failure,refused-stream"
                                  num_retries: 10
                                  per_try_timeout: 5s
      clusters:
        - name: podinfo
          connect_timeout: 0.30s
          type: strict_dns
          lb_policy: round_robin
          http2_protocol_options: {}
          hosts:
            - socket_address:
                address: podinfo.test
                port_value: 9898
    admin:
      access_log_path: /dev/null
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 9999
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress
  namespace: test
  labels:
    app: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: ingress
      annotations:
        prometheus.io/path: "/stats/prometheus"
        prometheus.io/port: "9999"
        prometheus.io/scrape: "true"
        # dummy port to exclude ingress from mesh traffic
        # only egress should go over the mesh
        appmesh.k8s.aws/ports: "444"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: ingress
        image: "envoyproxy/envoy-alpine:d920944aed67425f91fc203774aebce9609e5d9a"
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        command:
        - /usr/bin/dumb-init
        - --
        args:
        - /usr/local/bin/envoy
        - --base-id 30
        - --v2-config-only
        - -l
        - $loglevel
        - -c
        - /config/envoy.yaml
        ports:
        - name: admin
          containerPort: 9999
          protocol: TCP
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        livenessProbe:
          initialDelaySeconds: 5
          tcpSocket:
            port: admin
        readinessProbe:
          initialDelaySeconds: 5
          tcpSocket:
            port: admin
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: ingress-config
---
kind: Service
apiVersion: v1
metadata:
  name: ingress
  namespace: test
spec:
  selector:
    app: ingress
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80
  - protocol: TCP
    name: https
    port: 443
    targetPort: 443
  type: LoadBalancer
---
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: ingress
  namespace: test
spec:
  meshName: global
  listeners:
    - portMapping:
        port: 80
        protocol: http
  serviceDiscovery:
    dns:
      hostName: ingress.test
  backends:
    - virtualService:
        virtualServiceName: podinfo.test
@@ -1,88 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # service mesh provider (default istio)
  # can be: kubernetes, istio, appmesh, smi, nginx, gloo, supergloo
  # use the kubernetes provider for Blue/Green style deployments
  provider: istio
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    # container port
    port: 9898
    # port name can be http or grpc (default http)
    portName: http
    # add all the other container ports
    # when generating ClusterIP services (default false)
    portDiscovery: false
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    # remove the mesh gateway if the public host is
    # shared across multiple virtual services
    - mesh
    # Istio virtual service host names (optional)
    hosts:
    - app.istio.weavedx.com
    # Istio traffic policy (optional)
    trafficPolicy:
      tls:
        # use ISTIO_MUTUAL when mTLS is enabled
        mode: DISABLE
    # HTTP match conditions (optional)
    match:
    - uri:
        prefix: /
    # HTTP rewrite (optional)
    rewrite:
      uri: /
    # HTTP timeout (optional)
    timeout: 30s
  # promote the canary without analysing it (default false)
  skipAnalysis: false
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed metric checks before rollback
    threshold: 10
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # Prometheus checks
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s
    # external checks (optional)
    webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
        logCmdOutput: "true"
@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
  labels:
    app: podinfo
spec:
  minReadySeconds: 5
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 60
  strategy:
    rollingUpdate:
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
      labels:
        app: podinfo
    spec:
      containers:
      - name: podinfod
        image: stefanprodan/podinfo:2.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9898
          name: http
          protocol: TCP
        command:
        - ./podinfo
        - --port=9898
        - --level=info
        - --random-delay=false
        - --random-error=false
        env:
        - name: PODINFO_UI_COLOR
          value: blue
        livenessProbe:
          exec:
            command:
            - podcli
            - check
            - http
            - localhost:9898/healthz
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - podcli
            - check
            - http
            - localhost:9898/readyz
          initialDelaySeconds: 5
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 2000m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 64Mi
@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      # scale up if usage is above
      # 99% of the requested CPU (100m)
      targetAverageUtilization: 99
@@ -1,6 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
@@ -1,26 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: backend
  namespace: test
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.chart-image: regexp:^1.7.*
spec:
  releaseName: backend
  chart:
    repository: https://flagger.app/
    name: podinfo
    version: 2.2.0
  values:
    image:
      repository: quay.io/stefanprodan/podinfo
      tag: 1.7.0
    httpServer:
      timeout: 30s
    canary:
      enabled: true
      istioIngress:
        enabled: false
    loadtest:
      enabled: true
@@ -1,27 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: frontend
  namespace: test
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.chart-image: semver:~1.7
spec:
  releaseName: frontend
  chart:
    repository: https://flagger.app/
    name: podinfo
    version: 2.2.0
  values:
    image:
      repository: quay.io/stefanprodan/podinfo
      tag: 1.7.0
    backend: http://backend-podinfo:9898/echo
    canary:
      enabled: true
      istioIngress:
        enabled: true
        gateway: public-gateway.istio-system.svc.cluster.local
        host: frontend.istio.example.com
    loadtest:
      enabled: true
@@ -1,18 +0,0 @@
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: loadtester
  namespace: test
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.chart-image: glob:0.*
spec:
  releaseName: flagger-loadtester
  chart:
    repository: https://flagger.app/
    name: loadtester
    version: 0.6.0
  values:
    image:
      repository: weaveworks/flagger-loadtester
      tag: 0.6.1
@@ -1,264 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
  labels:
    app: prometheus
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - services
      - endpoints
      - pods
      - nodes/proxy
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - configmaps
    verbs: ["get"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
  labels:
    app: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: appmesh-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: appmesh-system
  labels:
    app: prometheus
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  namespace: appmesh-system
  labels:
    app: prometheus
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s
    scrape_configs:

    # Scrape config for AppMesh Envoy sidecar
    - job_name: 'appmesh-envoy'
      metrics_path: /stats/prometheus
      kubernetes_sd_configs:
      - role: pod

      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_name]
        action: keep
        regex: '^envoy$'
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: ${1}:9901
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

      # Exclude high cardinality metrics
      metric_relabel_configs:
      - source_labels: [ cluster_name ]
        regex: '(outbound|inbound|prometheus_stats).*'
        action: drop
      - source_labels: [ tcp_prefix ]
        regex: '(outbound|inbound|prometheus_stats).*'
        action: drop
      - source_labels: [ listener_address ]
        regex: '(.+)'
        action: drop
      - source_labels: [ http_conn_manager_listener_prefix ]
        regex: '(.+)'
        action: drop
      - source_labels: [ http_conn_manager_prefix ]
        regex: '(.+)'
        action: drop
      - source_labels: [ __name__ ]
        regex: 'envoy_tls.*'
        action: drop
      - source_labels: [ __name__ ]
        regex: 'envoy_tcp_downstream.*'
        action: drop
      - source_labels: [ __name__ ]
        regex: 'envoy_http_(stats|admin).*'
        action: drop
      - source_labels: [ __name__ ]
        regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
        action: drop

    # Scrape config for API servers
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - default
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: kubernetes;https

    # Scrape config for nodes
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    # scrape config for cAdvisor
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    # scrape config for pods
    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_scrape
      - source_labels: [ __address__ ]
        regex: '.*9901.*'
        action: drop
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_pod_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: kubernetes_pod_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: appmesh-system
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
      annotations:
        version: "appmesh-v1alpha1"
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: "docker.io/prom/prometheus:v2.7.1"
        imagePullPolicy: IfNotPresent
        args:
          - '--storage.tsdb.retention=6h'
          - '--config.file=/etc/prometheus/prometheus.yml'
        ports:
        - containerPort: 9090
          name: http
        livenessProbe:
          httpGet:
            path: /-/healthy
            port: 9090
        readinessProbe:
          httpGet:
            path: /-/ready
            port: 9090
        resources:
          requests:
            cpu: 10m
            memory: 128Mi
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
      volumes:
      - name: config-volume
        configMap:
          name: prometheus
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: appmesh-system
  labels:
    name: prometheus
spec:
  selector:
    app: prometheus
  ports:
  - name: http
    protocol: TCP
    port: 9090
artifacts/examples/appmesh-abtest.yaml
@@ -0,0 +1,62 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: appmesh
  progressDeadlineSeconds: 600
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 80
    targetPort: 9898
    meshName: global
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: "gateway-error,client-error,stream-error"
    timeout: 35s
    match:
    - uri:
        prefix: /
    rewrite:
      uri: /
  analysis:
    interval: 15s
    threshold: 10
    iterations: 10
    match:
    - headers:
        x-canary:
          exact: "insider"
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500
      interval: 30s
    webhooks:
    - name: conformance-test
      type: pre-rollout
      url: http://flagger-loadtester.test/
      timeout: 15s
      metadata:
        type: "bash"
        cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
    - name: load-test
      type: rollout
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 -H 'X-Canary: insider' http://podinfo-canary.test/"
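This A/B testing example differs from a weighted canary: instead of shifting a growing percentage of all traffic, Flagger routes only the matched requests (here the `x-canary: insider` header) to the canary and runs a fixed number of analysis iterations. A rough Go sketch of that iteration-based loop, where `checkMetrics` is a hypothetical stand-in for the builtin Prometheus checks:

```go
package main

import "fmt"

// checkMetrics is a hypothetical stand-in for the builtin
// request-success-rate and request-duration checks.
func checkMetrics() bool { return true }

func main() {
	const iterations = 10 // total analysis runs before promotion
	const threshold = 10  // failed checks tolerated before rollback
	failed := 0
	for i := 1; i <= iterations; i++ {
		fmt.Printf("iteration %d: matched traffic goes to the canary\n", i)
		if !checkMetrics() {
			failed++
			if failed >= threshold {
				fmt.Println("rollback")
				return
			}
		}
	}
	fmt.Println("promotion")
}
```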
artifacts/examples/appmesh-canary.yaml
@@ -0,0 +1,59 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: appmesh
  progressDeadlineSeconds: 600
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 80
    targetPort: http
    meshName: global
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: "gateway-error,client-error,stream-error"
    timeout: 35s
    match:
    - uri:
        prefix: /
    rewrite:
      uri: /
  analysis:
    interval: 15s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500
      interval: 30s
    webhooks:
    - name: conformance-test
      type: pre-rollout
      url: http://flagger-loadtester.test/
      timeout: 15s
      metadata:
        type: "bash"
        cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
    - name: load-test
      type: rollout
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"
artifacts/examples/istio-abtest.yaml
@@ -0,0 +1,70 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: istio
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    name: podinfo
    port: 80
    targetPort: 9898
    portName: http
    portDiscovery: true
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    - mesh
    hosts:
    - app.example.com
    trafficPolicy:
      tls:
        mode: DISABLE
    match:
    - uri:
        prefix: /
    rewrite:
      uri: /
    timeout: 30s
  analysis:
    interval: 15s
    threshold: 10
    iterations: 10
    match:
    - headers:
        cookie:
          regex: "^(.*?;)?(type=insider)(;.*)?$"
    - headers:
        user-agent:
          regex: ".*Firefox.*"
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500
      interval: 30s
    webhooks:
    - name: conformance-test
      type: pre-rollout
      url: http://flagger-loadtester.test/
      timeout: 15s
      metadata:
        type: "bash"
        cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
    - name: load-test
      type: rollout
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test/"
artifacts/examples/istio-canary.yaml
@@ -0,0 +1,66 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: istio
  progressDeadlineSeconds: 600
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    name: podinfo
    port: 80
    targetPort: 9898
    portName: http
    portDiscovery: true
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    - mesh
    hosts:
    - app.example.com
    trafficPolicy:
      tls:
        mode: DISABLE
    match:
    - uri:
        prefix: /
    rewrite:
      uri: /
    timeout: 30s
  skipAnalysis: false
  analysis:
    interval: 15s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500
      interval: 30s
    webhooks:
    - name: conformance-test
      type: pre-rollout
      url: http://flagger-loadtester.test/
      timeout: 15s
      metadata:
        type: "bash"
        cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
    - name: load-test
      type: rollout
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"
artifacts/examples/linkerd-canary.yaml
@@ -0,0 +1,52 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: linkerd
  progressDeadlineSeconds: 600
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    name: podinfo
    port: 80
    targetPort: 9898
    portName: http
    portDiscovery: true
  skipAnalysis: false
  analysis:
    interval: 15s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500
      interval: 30s
    webhooks:
    - name: conformance-test
      type: pre-rollout
      url: http://flagger-loadtester.test/
      timeout: 15s
      metadata:
        type: "bash"
        cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
    - name: load-test
      type: rollout
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"
@@ -2,7 +2,7 @@ apiVersion: v1
kind: ServiceAccount
metadata:
  name: flagger
  namespace: istio-system
  namespace: default
  labels:
    app: flagger
---
@@ -18,69 +18,164 @@ rules:
    resources:
      - events
      - configmaps
      - configmaps/finalizers
      - secrets
      - secrets/finalizers
      - services
    verbs: ["*"]
      - services/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - apps
    resources:
      - daemonsets
      - daemonsets/finalizers
      - deployments
    verbs: ["*"]
      - deployments/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs: ["*"]
      - horizontalpodautoscalers/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - "extensions"
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingresses/status
    verbs: ["*"]
      - ingresses/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - flagger.app
    resources:
      - canaries
      - canaries/status
    verbs: ["*"]
      - metrictemplates
      - metrictemplates/status
      - alertproviders
      - alertproviders/status
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - networking.istio.io
    resources:
      - virtualservices
      - virtualservices/status
      - virtualservices/finalizers
      - destinationrules
      - destinationrules/status
    verbs: ["*"]
      - destinationrules/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - appmesh.k8s.aws
    resources:
      - meshes
      - meshes/status
      - virtualnodes
      - virtualnodes/status
      - virtualnodes/finalizers
      - virtualrouters
      - virtualrouters/finalizers
      - virtualservices
      - virtualservices/status
    verbs: ["*"]
      - virtualservices/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - split.smi-spec.io
    resources:
      - trafficsplits
    verbs: ["*"]
      - trafficsplits/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - specs.smi-spec.io
    resources:
      - httproutegroups
      - httproutegroups/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - gloo.solo.io
    resources:
      - settings
      - upstreams
      - upstreams/finalizers
      - upstreamgroups
      - proxies
      - virtualservices
    verbs: ["*"]
      - upstreamgroups/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - gateway.solo.io
      - projectcontour.io
    resources:
      - virtualservices
      - gateways
    verbs: ["*"]
      - httpproxies
      - httpproxies/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - nonResourceURLs:
      - /version
    verbs:
@@ -99,4 +194,4 @@ roleRef:
subjects:
  - kind: ServiceAccount
    name: flagger
    namespace: istio-system
    namespace: default

@@ -6,16 +6,19 @@ metadata:
    helm.sh/resource-policy: keep
spec:
  group: flagger.app
  version: v1alpha3
  version: v1beta1
  versions:
    - name: v1alpha3
    - name: v1beta1
      served: true
      storage: true
    - name: v1alpha2
    - name: v1alpha3
      served: true
      storage: false
    - name: v1alpha2
      served: false
      storage: false
    - name: v1alpha1
      served: true
      served: false
      storage: false
  names:
    plural: canaries
@@ -33,6 +36,26 @@ spec:
    - name: Weight
      type: string
      JSONPath: .status.canaryWeight
    - name: FailedChecks
      type: string
      JSONPath: .status.failedChecks
      priority: 1
    - name: Interval
      type: string
      JSONPath: .spec.analysis.interval
      priority: 1
    - name: Mirror
      type: boolean
      JSONPath: .spec.analysis.mirror
      priority: 1
    - name: StepWeight
      type: string
      JSONPath: .spec.analysis.stepWeight
      priority: 1
    - name: MaxWeight
      type: string
      JSONPath: .spec.analysis.maxWeight
      priority: 1
    - name: LastTransitionTime
      type: string
      JSONPath: .status.lastTransitionTime
@@ -43,116 +66,490 @@ spec:
          required:
            - targetRef
            - service
            - canaryAnalysis
            - analysis
          properties:
            provider:
              description: Traffic management provider
              type: string
            metricsServer:
              description: Prometheus URL
              type: string
            progressDeadlineSeconds:
              description: Deployment progress deadline
              type: number
            targetRef:
              description: Deployment selector
              description: Target selector
              type: object
              required: ['apiVersion', 'kind', 'name']
              required: ["apiVersion", "kind", "name"]
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                  enum:
                    - DaemonSet
                    - Deployment
                    - Service
                name:
                  type: string
            autoscalerRef:
              description: HPA selector
              anyOf:
                - type: string
                - type: object
                  required: ['apiVersion', 'kind', 'name']
              type: object
              required: ["apiVersion", "kind", "name"]
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                  enum:
                    - HorizontalPodAutoscaler
                name:
                  type: string
            ingressRef:
              description: NGINX ingress selector
              anyOf:
                - type: string
                - type: object
                  required: ['apiVersion', 'kind', 'name']
              type: object
              required: ["apiVersion", "kind", "name"]
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                  enum:
                    - Ingress
                name:
                  type: string
            service:
              description: Kubernetes Service spec
              type: object
              required: ['port']
              required: ["port"]
              properties:
                name:
                  description: Kubernetes service name
                  type: string
                port:
                  description: Container port number
                  type: number
                portName:
                  description: Container port name
                  type: string
                targetPort:
                  description: Container target port name
                  anyOf:
                    - type: string
                    - type: number
                portDiscovery:
                  description: Enable port discovery
                  type: boolean
                timeout:
                  description: HTTP or gRPC request timeout
                  type: string
                meshName:
                  description: AppMesh mesh name
                  type: string
                backends:
                  description: AppMesh backend array
                  anyOf:
                    - type: string
                    - type: object
                timeout:
                  description: Istio HTTP or gRPC request timeout
                  type: string
                  type: array
                  items:
                    type: string
                hosts:
                  description: The list of host names for this service
                  type: array
                  items:
                    type: string
                match:
                  description: URI match conditions
                  type: array
                  items:
                    type: object
                    properties:
                      uri:
                        type: object
                        oneOf:
                          - required: ["exact"]
                          - required: ["prefix"]
                          - required: ["suffix"]
                          - required: ["regex"]
                        properties:
                          exact:
                            format: string
                            type: string
                          prefix:
                            format: string
                            type: string
                          suffix:
                            format: string
                            type: string
                          regex:
                            format: string
                            type: string
                retries:
                  description: Retry policy for HTTP requests
                  type: object
                  properties:
                    attempts:
                      description: Number of retries for a given request
                      format: int32
                      type: integer
                    perTryTimeout:
                      description: Timeout per retry attempt for a given request
                      type: string
                    retryOn:
                      description: Specifies the conditions under which retry takes place
                      format: string
                      type: string
                rewrite:
                  description: Rewrite HTTP URIs
                  type: object
                  properties:
                    uri:
                      format: string
                      type: string
                headers:
                  description: Headers operations
                  type: object
                  properties:
                    request:
                      properties:
                        add:
                          additionalProperties:
                            format: string
                            type: string
                          type: object
                        remove:
                          items:
                            format: string
                            type: string
                          type: array
                        set:
                          additionalProperties:
                            format: string
                            type: string
                          type: object
                      type: object
                    response:
                      properties:
                        add:
                          additionalProperties:
                            format: string
                            type: string
                          type: object
                        remove:
                          items:
                            format: string
                            type: string
                          type: array
                        set:
                          additionalProperties:
                            format: string
                            type: string
                          type: object
                      type: object
                gateways:
                  description: The list of Istio gateway for this virtual service
                  type: array
                  items:
                    type: string
                corsPolicy:
                  description: Istio Cross-Origin Resource Sharing policy (CORS)
                  type: object
                  properties:
                    allowCredentials:
                      type: boolean
                    allowHeaders:
                      items:
                        format: string
                        type: string
                      type: array
                    allowMethods:
                      description: List of HTTP methods allowed to access the resource
                      items:
                        format: string
                        type: string
                      type: array
                    allowOrigin:
                      description: The list of origins that are allowed to perform CORS requests.
                      items:
                        format: string
                        type: string
                      type: array
                    allowOrigins:
                      description: String patterns that match allowed origins
                      type: array
                      items:
                        type: object
                        oneOf:
                          - required:
                              - exact
                          - required:
                              - prefix
                          - required:
                              - regex
                        properties:
                          exact:
                            format: string
                            type: string
                          prefix:
                            format: string
                            type: string
                          regex:
                            format: string
                            type: string
                    exposeHeaders:
                      items:
                        format: string
                        type: string
                      type: array
                    maxAge:
                      type: string
                trafficPolicy:
                  description: Istio traffic policy
                  anyOf:
                    - type: string
                    - type: object
                match:
                  description: Istio URL match conditions
                  anyOf:
                    - type: string
                    - type: array
                rewrite:
                  description: Istio URL rewrite
                  anyOf:
                    - type: string
                    - type: object
                headers:
                  description: Istio headers operations
                  anyOf:
                    - type: string
                    - type: object
                corsPolicy:
                  description: Istio CORS policy
                  anyOf:
                    - type: string
                    - type: object
                gateways:
                  description: Istio gateways list
                  anyOf:
                    - type: string
                    - type: array
                hosts:
                  description: Istio hosts list
                  anyOf:
                    - type: string
                    - type: array
                  type: object
                  properties:
                    connectionPool:
                      properties:
                        http:
                          description: HTTP connection pool settings.
                          type: object
                          properties:
                            h2UpgradePolicy:
                              description: Specify if http1.1 connection should be upgraded to http2 for the associated destination.
                              enum:
                                - DEFAULT
                                - DO_NOT_UPGRADE
                                - UPGRADE
                              type: string
                            http1MaxPendingRequests:
                              description: Maximum number of pending HTTP requests to a destination.
                              format: int32
                              type: integer
                            http2MaxRequests:
                              description: Maximum number of requests to a backend.
                              format: int32
                              type: integer
                            idleTimeout:
                              description: The idle timeout for upstream connection pool connections.
                              type: string
                            maxRequestsPerConnection:
                              description: Maximum number of requests per connection to a backend.
                              format: int32
                              type: integer
                            maxRetries:
                              format: int32
                              type: integer
                    loadBalancer:
                      description: Settings controlling the load balancer algorithms.
                      type: object
                      oneOf:
                        - required:
                            - simple
                        - properties:
                            consistentHash:
                              oneOf:
                                - required:
                                    - httpHeaderName
                                - required:
                                    - httpCookie
                                - required:
                                    - useSourceIp
                                - required:
                                    - httpQueryParameterName
                          required:
                            - consistentHash
                      properties:
                        consistentHash:
                          properties:
                            httpCookie:
                              description: Hash based on HTTP cookie.
                              properties:
                                name:
                                  description: Name of the cookie.
                                  format: string
                                  type: string
                                path:
                                  description: Path to set for the cookie.
                                  format: string
                                  type: string
                                ttl:
                                  description: Lifetime of the cookie.
                                  type: string
                              type: object
                            httpHeaderName:
                              description: Hash based on a specific HTTP header.
                              format: string
                              type: string
                            httpQueryParameterName:
                              description: Hash based on a specific HTTP query parameter.
                              format: string
                              type: string
                            minimumRingSize:
                              type: integer
                            useSourceIp:
                              description: Hash based on the source IP address.
                              type: boolean
                          type: object
                        localityLbSetting:
                          properties:
                            distribute:
                              description: 'Optional: only one of distribute or failover can be set.'
                              items:
                                properties:
                                  from:
                                    description: Originating locality, '/' separated, e.g.
                                    format: string
                                    type: string
                                  to:
                                    additionalProperties:
                                      type: integer
                                    description: Map of upstream localities to traffic distribution weights.
                                    type: object
                                type: object
                              type: array
                            enabled:
                              description: enable locality load balancing, this is DestinationRule-level and will override mesh wide settings in entirety.
                              type: boolean
                            failover:
                              description: 'Optional: only failover or distribute can be set.'
                              items:
                                properties:
                                  from:
                                    description: Originating region.
                                    format: string
                                    type: string
                                  to:
                                    format: string
                                    type: string
                                type: object
                              type: array
                          type: object
                        simple:
                          enum:
                            - ROUND_ROBIN
                            - LEAST_CONN
                            - RANDOM
                            - PASSTHROUGH
                          type: string
                    outlierDetection:
                      description: Settings controlling eviction of unhealthy hosts from the load balancing pool.
                      type: object
                      properties:
                        baseEjectionTime:
                          description: Minimum ejection duration.
                          type: string
                        consecutive5xxErrors:
                          description: Number of 5xx errors before a host is ejected from the connection pool.
                          type: integer
                        consecutiveErrors:
                          format: int32
                          type: integer
                        consecutiveGatewayErrors:
                          description: Number of gateway errors before a host is ejected from the connection pool.
                          format: int32
                          type: integer
                        interval:
                          description: Time interval between ejection sweep analysis.
                          type: string
                        maxEjectionPercent:
                          format: int32
                          type: integer
                        minHealthPercent:
                          format: int32
                          type: integer
                    tls:
                      description: Istio TLS related settings for connections to the upstream service
                      type: object
                      properties:
                        caCertificates:
                          format: string
                          type: string
                        clientCertificate:
                          description: REQUIRED if mode is `MUTUAL`.
                          format: string
                          type: string
                        mode:
                          enum:
                            - DISABLE
                            - SIMPLE
                            - MUTUAL
                            - ISTIO_MUTUAL
                          type: string
                        privateKey:
                          description: REQUIRED if mode is `MUTUAL`.
                          format: string
                          type: string
                        sni:
                          description: SNI string to present to the server during TLS handshake.
                          format: string
                          type: string
                        subjectAltNames:
                          items:
                            format: string
                            type: string
                          type: array
                apex:
                  description: Metadata to add to the apex service
                  type: object
                  properties:
                    labels:
                      type: object
                      additionalProperties:
                        type: string
                    annotations:
                      type: object
                      additionalProperties:
                        type: string
                primary:
                  description: Metadata to add to the primary service
                  type: object
                  properties:
                    labels:
                      type: object
                      additionalProperties:
                        type: string
                    annotations:
                      type: object
                      additionalProperties:
                        type: string
                canary:
                  description: Metadata to add to the canary service
                  type: object
                  properties:
                    labels:
                      type: object
                      additionalProperties:
                        type: string
                    annotations:
                      type: object
                      additionalProperties:
                        type: string
            skipAnalysis:
              description: Skip analysis and promote canary
              type: boolean
            canaryAnalysis:
            revertOnDeletion:
              description: Revert mutated resources to original spec on deletion
              type: boolean
            analysis:
              description: Canary analysis for this canary
              type: object
              oneOf:
                - required: ["interval", "threshold", "iterations"]
                - required: ["interval", "threshold", "stepWeight"]
              properties:
                interval:
                  description: Canary schedule interval
                  description: Schedule interval for this canary
                  type: string
                  pattern: "^[0-9]+(m|s)"
                iterations:
@@ -165,67 +562,128 @@ spec:
                  description: Max traffic percentage routed to canary
                  type: number
                stepWeight:
                  description: Canary incremental traffic percentage step
                  description: Incremental traffic percentage step for the analysis phase
                  type: number
                stepWeightPromotion:
                  description: Incremental traffic percentage step for the promotion phase
                  type: number
                mirror:
                  description: Mirror traffic to canary
                  type: boolean
                mirrorWeight:
                  description: Percentage of traffic to be mirrored
                  type: number
                match:
                  description: A/B testing match conditions
                  anyOf:
                    - type: string
                    - type: array
                metrics:
                  description: Prometheus query list for this canary
                  type: array
                  properties:
                    items:
                      type: object
                      required: ['name', 'threshold']
                      properties:
                        name:
                          description: Name of the Prometheus metric
                          type: string
                        interval:
                          description: Interval of the promql query
                          type: string
                          pattern: "^[0-9]+(m|s)"
                        threshold:
                          description: Max scalar value accepted for this metric
                          type: number
                        query:
                          description: Prometheus query
                  items:
                    type: object
                    properties:
                      headers:
                        type: object
                        additionalProperties:
                          oneOf:
                            - required: ["exact"]
                            - required: ["prefix"]
                            - required: ["suffix"]
                            - required: ["regex"]
                          type: object
                          properties:
                            exact:
                              format: string
                              type: string
                            prefix:
                              format: string
                              type: string
                            suffix:
                              format: string
                              type: string
                            regex:
                              description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax)
                              format: string
                              type: string
                      sourceLabels:
                        description: Applicable only when the 'mesh' gateway is included in the service.gateways list
                        type: object
                        additionalProperties:
                          format: string
                          type: string
                metrics:
                  description: Metric check list for this canary
                  type: array
                  items:
                    type: object
                    required: ["name"]
                    properties:
                      name:
                        description: Name of the metric
                        type: string
                      interval:
                        description: Interval of the query
                        type: string
                        pattern: "^[0-9]+(m|s)"
                      threshold:
                        description: Max value accepted for this metric
                        type: number
                      thresholdRange:
                        description: Range accepted for this metric
                        type: object
                        properties:
                          min:
                            description: Min value accepted for this metric
                            type: number
                          max:
                            description: Max value accepted for this metric
                            type: number
                      query:
                        description: Prometheus query
                        type: string
                      templateRef:
                        description: Metric template reference
                        type: object
                        required: ["name"]
                        properties:
                          name:
                            description: Name of this metric template
                            type: string
                          namespace:
                            description: Namespace of this metric template
                            type: string
                webhooks:
                  description: Webhook list for this canary
                  type: array
                  properties:
                    items:
                      type: object
                      required: ['name', 'url', 'timeout']
                      properties:
                        name:
                          description: Name of the webhook
                  items:
                    type: object
                    required: ["name", "url"]
                    properties:
                      name:
                        description: Name of the webhook
                        type: string
                      type:
                        description: Type of the webhook pre, post or during rollout
                        type: string
                        enum:
                          - ""
                          - confirm-rollout
                          - pre-rollout
                          - rollout
                          - confirm-promotion
                          - post-rollout
                          - event
                          - rollback
                      url:
                        description: URL address of this webhook
                        type: string
                        format: url
                      timeout:
                        description: Request timeout for this webhook
                        type: string
                        pattern: "^[0-9]+(m|s)"
                      metadata:
                        description: Metadata (key-value pairs) for this webhook
                        type: object
                        additionalProperties:
                          type: string
                        type:
                          description: Type of the webhook pre, post or during rollout
                          type: string
                          enum:
                            - ""
                            - confirm-rollout
                            - pre-rollout
                            - rollout
                            - post-rollout
                        url:
                          description: URL address of this webhook
                          type: string
                          format: url
                        timeout:
                          description: Request timeout for this webhook
                          type: string
                          pattern: "^[0-9]+(m|s)"
                        metadata:
                          description: Metadata (key-value pairs) for this webhook
                          anyOf:
                            - type: string
                            - type: object
        status:
          properties:
            phase:
@@ -237,9 +695,12 @@ spec:
                - Initialized
                - Waiting
                - Progressing
                - Promoting
                - Finalising
                - Succeeded
                - Failed
                - Terminating
                - Terminated
            canaryWeight:
              description: Traffic weight percentage routed to canary
              type: number
@@ -259,28 +720,156 @@ spec:
            conditions:
              description: Status conditions of this canary
              type: array
              items:
                type: object
                required: ["type", "status", "reason"]
                properties:
                  lastTransitionTime:
                    description: LastTransitionTime of this condition
                    format: date-time
                    type: string
                  lastUpdateTime:
                    description: LastUpdateTime of this condition
                    format: date-time
                    type: string
                  message:
                    description: Message associated with this condition
                    type: string
                  reason:
                    description: Reason for the current status of this condition
                    type: string
                  status:
                    description: Status of this condition
                    type: string
                  type:
                    description: Type of this condition
                    type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: metrictemplates.flagger.app
  annotations:
    helm.sh/resource-policy: keep
spec:
  group: flagger.app
  version: v1beta1
  versions:
    - name: v1beta1
      served: true
      storage: true
    - name: v1alpha1
      served: true
      storage: false
  names:
    plural: metrictemplates
    singular: metrictemplate
    kind: MetricTemplate
    categories:
      - all
  scope: Namespaced
  subresources:
    status: {}
  additionalPrinterColumns:
    - name: Provider
      type: string
      JSONPath: .spec.provider.type
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
            - provider
            - query
          properties:
            provider:
              description: Provider of this metric template
              type: object
              required:
                - type
              properties:
                items:
                type:
                  description: Type of this provider
                  type: string
                  enum:
                    - prometheus
                    - influxdb
                    - datadog
                    - cloudwatch
                address:
                  description: API address of this provider
                  type: string
                secretRef:
                  description: Kubernetes secret reference containing the provider credentials
                  type: object
                  required: ['type', 'status', 'reason']
                  required:
                    - name
                  properties:
                    lastTransitionTime:
                      description: LastTransitionTime of this condition
                      format: date-time
                      type: string
                    lastUpdateTime:
                      description: LastUpdateTime of this condition
                      format: date-time
                      type: string
                    message:
                      description: Message associated with this condition
                      type: string
                    reason:
                      description: Reason for the current status of this condition
                      type: string
                    status:
                      description: Status of this condition
                      type: string
                    type:
                      description: Type of this condition
                    name:
                      description: Name of the Kubernetes secret
                      type: string
                region:
                  description: Region of the provider
                  type: string
            query:
              description: Query of this metric template
              type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: alertproviders.flagger.app
  annotations:
    helm.sh/resource-policy: keep
spec:
  group: flagger.app
  version: v1beta1
  versions:
    - name: v1beta1
      served: true
      storage: true
  names:
    plural: alertproviders
    singular: alertprovider
    kind: AlertProvider
    categories:
      - all
  scope: Namespaced
  subresources:
    status: {}
  additionalPrinterColumns:
    - name: Type
      type: string
      JSONPath: .spec.type
  validation:
    openAPIV3Schema:
      properties:
        spec:
          oneOf:
            - required:
                - type
                - address
            - required:
                - type
                - secretRef
          properties:
            type:
              description: Type of this provider
              type: string
              enum:
                - slack
                - msteams
                - discord
                - rocket
            address:
              description: Hook URL address of this provider
              type: string
            secretRef:
              description: Kubernetes secret reference containing the provider address
              type: object
              required:
                - name
              properties:
                name:
                  description: Name of the Kubernetes secret
                  type: string

@@ -2,7 +2,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger
  namespace: istio-system
  namespace: default
  labels:
    app: flagger
spec:
@@ -22,7 +22,7 @@ spec:
      serviceAccountName: flagger
      containers:
        - name: flagger
          image: weaveworks/flagger:0.18.2
          image: weaveworks/flagger:1.1.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
@@ -30,9 +30,6 @@ spec:
          command:
            - ./flagger
            - -log-level=info
            - -control-loop-interval=10s
            - -mesh-provider=$(MESH_PROVIDER)
            - -metrics-server=http://prometheus.istio-system.svc.cluster.local:9090
          livenessProbe:
            exec:
              command:

@@ -1,27 +0,0 @@
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
      tls:
        httpsRedirect: true
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - "*"
      tls:
        mode: SIMPLE
        privateKey: /etc/istio/ingressgateway-certs/tls.key
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
@@ -1,834 +0,0 @@
# Source: istio/charts/prometheus/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  namespace: istio-system
  labels:
    app: prometheus
    chart: prometheus-1.0.6
    heritage: Tiller
    release: istio
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:

    - job_name: 'istio-mesh'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s

      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-telemetry;prometheus


    # Scrape config for envoy stats
    - job_name: 'envoy-stats'
      metrics_path: /stats/prometheus
      kubernetes_sd_configs:
      - role: pod

      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: '.*-envoy-prom'
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:15090
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name

      metric_relabel_configs:
      # Exclude some of the envoy metrics that have massive cardinality
      # This list may need to be pruned further moving forward, as informed
      # by performance and scalability testing.
      - source_labels: [ cluster_name ]
        regex: '(outbound|inbound|prometheus_stats).*'
        action: drop
      - source_labels: [ tcp_prefix ]
        regex: '(outbound|inbound|prometheus_stats).*'
        action: drop
      - source_labels: [ listener_address ]
        regex: '(.+)'
        action: drop
      - source_labels: [ http_conn_manager_listener_prefix ]
        regex: '(.+)'
        action: drop
      - source_labels: [ http_conn_manager_prefix ]
        regex: '(.+)'
        action: drop
      - source_labels: [ __name__ ]
        regex: 'envoy_tls.*'
        action: drop
      - source_labels: [ __name__ ]
        regex: 'envoy_tcp_downstream.*'
        action: drop
      - source_labels: [ __name__ ]
        regex: 'envoy_http_(stats|admin).*'
        action: drop
      - source_labels: [ __name__ ]
        regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
        action: drop


    - job_name: 'istio-policy'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.

      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system


      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-policy;http-monitoring

    - job_name: 'istio-telemetry'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.

      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-telemetry;http-monitoring

    - job_name: 'pilot'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.

      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-pilot;http-monitoring

    - job_name: 'galley'
      # Override the global default and scrape targets from this job every 5 seconds.
      scrape_interval: 5s
      # metrics_path defaults to '/metrics'
      # scheme defaults to 'http'.

      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system

      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-galley;http-monitoring

    # scrape config for API servers
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - default
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: kubernetes;https

    # scrape config for nodes (kubelet)
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    # Scrape config for Kubelet cAdvisor.
    #
    # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
    # (those whose names begin with 'container_') have been removed from the
    # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
    # retrieve those metrics.
    #
    # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
    # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
    # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
    # the --cadvisor-port=0 Kubelet flag).
    #
    # This job is not necessary and should be removed in Kubernetes 1.6 and
    # earlier versions, or it will cause the metrics to be scraped twice.
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    # scrape config for service endpoints.
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs: # If first two labels are present, pod should be scraped by the istio-secure job.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status]
        action: drop
        regex: (.+)
      - source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
        action: drop
        regex: (true)
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name

    - job_name: 'kubernetes-pods-istio-secure'
      scheme: https
      tls_config:
        ca_file: /etc/istio-certs/root-cert.pem
        cert_file: /etc/istio-certs/cert-chain.pem
        key_file: /etc/istio-certs/key.pem
        insecure_skip_verify: true # prometheus does not support secure naming.
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # sidecar status annotation is added by sidecar injector and
      # istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
      - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
        action: keep
        regex: (([^;]+);([^;]*))|(([^;]*);(true))
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__] # Only keep address that is host:port
        action: keep # otherwise an extra target with ':443' is added for https scheme
        regex: ([^:]+):(\d+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod_name

---

# Source: istio/charts/prometheus/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus-istio-system
  labels:
    app: prometheus
    chart: prometheus-1.0.6
    heritage: Tiller
    release: istio
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - services
      - endpoints
      - pods
      - nodes/proxy
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - configmaps
    verbs: ["get"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]

---

# Source: istio/charts/prometheus/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: istio-system
  labels:
    app: prometheus
    chart: prometheus-1.0.6
    heritage: Tiller
    release: istio

---

# Source: istio/charts/prometheus/templates/clusterrolebindings.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus-istio-system
  labels:
    app: prometheus
    chart: prometheus-1.0.6
    heritage: Tiller
    release: istio
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-istio-system
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: istio-system

---

# Source: istio/charts/prometheus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: istio-system
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
spec:
  selector:
    app: prometheus
  ports:
    - name: http-prometheus
      protocol: TCP
      port: 9090

---

# Source: istio/charts/prometheus/templates/deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: prometheus
  namespace: istio-system
  labels:
    app: prometheus
    chart: prometheus-1.0.6
    heritage: Tiller
    release: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
      annotations:
        sidecar.istio.io/inject: "false"
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
          image: "docker.io/prom/prometheus:v2.3.1"
          imagePullPolicy: IfNotPresent
          args:
            - '--storage.tsdb.retention=6h'
            - '--config.file=/etc/prometheus/prometheus.yml'
          ports:
            - containerPort: 9090
              name: http
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: 9090
          readinessProbe:
            httpGet:
              path: /-/ready
              port: 9090
          resources:
            requests:
              cpu: 10m

          volumeMounts:
            - name: config-volume
              mountPath: /etc/prometheus
            - mountPath: /etc/istio-certs
              name: istio-certs
      volumes:
        - name: config-volume
          configMap:
            name: prometheus
        - name: istio-certs
          secret:
            defaultMode: 420
            optional: true
            secretName: istio.default
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - ppc64le
                      - s390x
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 2
              preference:
                matchExpressions:
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
            - weight: 2
              preference:
                matchExpressions:
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
            - weight: 2
              preference:
                matchExpressions:
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x

---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: requestcount
  namespace: istio-system
spec:
  value: "1"
  dimensions:
    reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
    source_workload: source.workload.name | "unknown"
    source_workload_namespace: source.workload.namespace | "unknown"
    source_principal: source.principal | "unknown"
    source_app: source.labels["app"] | "unknown"
    source_version: source.labels["version"] | "unknown"
    destination_workload: destination.workload.name | "unknown"
    destination_workload_namespace: destination.workload.namespace | "unknown"
    destination_principal: destination.principal | "unknown"
    destination_app: destination.labels["app"] | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    destination_service: destination.service.host | "unknown"
    destination_service_name: destination.service.name | "unknown"
    destination_service_namespace: destination.service.namespace | "unknown"
    request_protocol: api.protocol | context.protocol | "unknown"
    response_code: response.code | 200
    connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
    monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: requestduration
  namespace: istio-system
spec:
  value: response.duration | "0ms"
  dimensions:
    reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
    source_workload: source.workload.name | "unknown"
    source_workload_namespace: source.workload.namespace | "unknown"
    source_principal: source.principal | "unknown"
    source_app: source.labels["app"] | "unknown"
    source_version: source.labels["version"] | "unknown"
    destination_workload: destination.workload.name | "unknown"
    destination_workload_namespace: destination.workload.namespace | "unknown"
    destination_principal: destination.principal | "unknown"
    destination_app: destination.labels["app"] | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    destination_service: destination.service.host | "unknown"
    destination_service_name: destination.service.name | "unknown"
    destination_service_namespace: destination.service.namespace | "unknown"
    request_protocol: api.protocol | context.protocol | "unknown"
    response_code: response.code | 200
    connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
    monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: requestsize
  namespace: istio-system
spec:
  value: request.size | 0
  dimensions:
    reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
    source_workload: source.workload.name | "unknown"
    source_workload_namespace: source.workload.namespace | "unknown"
    source_principal: source.principal | "unknown"
    source_app: source.labels["app"] | "unknown"
    source_version: source.labels["version"] | "unknown"
    destination_workload: destination.workload.name | "unknown"
    destination_workload_namespace: destination.workload.namespace | "unknown"
    destination_principal: destination.principal | "unknown"
    destination_app: destination.labels["app"] | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    destination_service: destination.service.host | "unknown"
    destination_service_name: destination.service.name | "unknown"
    destination_service_namespace: destination.service.namespace | "unknown"
    request_protocol: api.protocol | context.protocol | "unknown"
    response_code: response.code | 200
    connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
    monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: responsesize
  namespace: istio-system
spec:
  value: response.size | 0
  dimensions:
    reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
    source_workload: source.workload.name | "unknown"
    source_workload_namespace: source.workload.namespace | "unknown"
    source_principal: source.principal | "unknown"
    source_app: source.labels["app"] | "unknown"
    source_version: source.labels["version"] | "unknown"
    destination_workload: destination.workload.name | "unknown"
    destination_workload_namespace: destination.workload.namespace | "unknown"
    destination_principal: destination.principal | "unknown"
    destination_app: destination.labels["app"] | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    destination_service: destination.service.host | "unknown"
    destination_service_name: destination.service.name | "unknown"
    destination_service_namespace: destination.service.namespace | "unknown"
    request_protocol: api.protocol | context.protocol | "unknown"
    response_code: response.code | 200
    connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
    monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: tcpbytesent
  namespace: istio-system
spec:
  value: connection.sent.bytes | 0
  dimensions:
    reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
    source_workload: source.workload.name | "unknown"
    source_workload_namespace: source.workload.namespace | "unknown"
    source_principal: source.principal | "unknown"
    source_app: source.labels["app"] | "unknown"
    source_version: source.labels["version"] | "unknown"
    destination_workload: destination.workload.name | "unknown"
    destination_workload_namespace: destination.workload.namespace | "unknown"
    destination_principal: destination.principal | "unknown"
    destination_app: destination.labels["app"] | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    destination_service: destination.service.name | "unknown"
    destination_service_name: destination.service.name | "unknown"
    destination_service_namespace: destination.service.namespace | "unknown"
    connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
    monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: tcpbytereceived
  namespace: istio-system
spec:
  value: connection.received.bytes | 0
  dimensions:
    reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination")
    source_workload: source.workload.name | "unknown"
    source_workload_namespace: source.workload.namespace | "unknown"
    source_principal: source.principal | "unknown"
    source_app: source.labels["app"] | "unknown"
    source_version: source.labels["version"] | "unknown"
    destination_workload: destination.workload.name | "unknown"
    destination_workload_namespace: destination.workload.namespace | "unknown"
    destination_principal: destination.principal | "unknown"
    destination_app: destination.labels["app"] | "unknown"
    destination_version: destination.labels["version"] | "unknown"
    destination_service: destination.service.name | "unknown"
    destination_service_name: destination.service.name | "unknown"
    destination_service_namespace: destination.service.namespace | "unknown"
    connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none"))
    monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
  name: handler
  namespace: istio-system
spec:
  metrics:
    - name: requests_total
      instance_name: requestcount.metric.istio-system
      kind: COUNTER
      label_names:
        - reporter
        - source_app
        - source_principal
        - source_workload
        - source_workload_namespace
        - source_version
        - destination_app
        - destination_principal
        - destination_workload
        - destination_workload_namespace
        - destination_version
        - destination_service
        - destination_service_name
        - destination_service_namespace
        - request_protocol
        - response_code
        - connection_security_policy
    - name: request_duration_seconds
      instance_name: requestduration.metric.istio-system
      kind: DISTRIBUTION
      label_names:
        - reporter
        - source_app
        - source_principal
        - source_workload
        - source_workload_namespace
        - source_version
        - destination_app
        - destination_principal
        - destination_workload
        - destination_workload_namespace
        - destination_version
        - destination_service
        - destination_service_name
        - destination_service_namespace
        - request_protocol
        - response_code
        - connection_security_policy
      buckets:
        explicit_buckets:
          bounds: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]
    - name: request_bytes
      instance_name: requestsize.metric.istio-system
      kind: DISTRIBUTION
      label_names:
        - reporter
        - source_app
        - source_principal
        - source_workload
        - source_workload_namespace
        - source_version
        - destination_app
        - destination_principal
        - destination_workload
        - destination_workload_namespace
        - destination_version
        - destination_service
        - destination_service_name
        - destination_service_namespace
        - request_protocol
        - response_code
        - connection_security_policy
      buckets:
        exponentialBuckets:
          numFiniteBuckets: 8
          scale: 1
          growthFactor: 10
    - name: response_bytes
      instance_name: responsesize.metric.istio-system
      kind: DISTRIBUTION
      label_names:
        - reporter
        - source_app
        - source_principal
        - source_workload
        - source_workload_namespace
        - source_version
        - destination_app
        - destination_principal
        - destination_workload
        - destination_workload_namespace
        - destination_version
        - destination_service
        - destination_service_name
        - destination_service_namespace
        - request_protocol
        - response_code
        - connection_security_policy
      buckets:
        exponentialBuckets:
          numFiniteBuckets: 8
          scale: 1
          growthFactor: 10
    - name: tcp_sent_bytes_total
      instance_name: tcpbytesent.metric.istio-system
      kind: COUNTER
      label_names:
        - reporter
        - source_app
        - source_principal
        - source_workload
        - source_workload_namespace
        - source_version
        - destination_app
        - destination_principal
        - destination_workload
        - destination_workload_namespace
        - destination_version
        - destination_service
        - destination_service_name
        - destination_service_namespace
        - connection_security_policy
    - name: tcp_received_bytes_total
      instance_name: tcpbytereceived.metric.istio-system
      kind: COUNTER
      label_names:
        - reporter
        - source_app
        - source_principal
        - source_workload
        - source_workload_namespace
        - source_version
        - destination_app
        - destination_principal
        - destination_workload
        - destination_workload_namespace
        - destination_version
        - destination_service
        - destination_service_name
        - destination_service_namespace
        - connection_security_policy
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  match: context.protocol == "http" || context.protocol == "grpc"
  actions:
    - handler: handler.prometheus
      instances:
        - requestcount.metric
        - requestduration.metric
        - requestsize.metric
        - responsesize.metric
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: promtcp
  namespace: istio-system
spec:
  match: context.protocol == "tcp"
  actions:
    - handler: handler.prometheus
      instances:
        - tcpbytesent.metric
        - tcpbytereceived.metric
---
@@ -1,36 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    interval: 10s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: request-success-rate
        threshold: 99
        interval: 1m
      - name: request-duration
        threshold: 500
        interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          type: cmd
          cmd: "hey -z 1m -q 10 -c 2 http://gloo.example.com/"
@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
  labels:
    app: podinfo
spec:
  minReadySeconds: 5
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 60
  strategy:
    rollingUpdate:
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfod
          image: quay.io/stefanprodan/podinfo:1.7.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9898
              name: http
              protocol: TCP
          command:
            - ./podinfo
            - --port=9898
            - --level=info
            - --random-delay=false
            - --random-error=false
          env:
            - name: PODINFO_UI_COLOR
              value: blue
          livenessProbe:
            exec:
              command:
                - podcli
                - check
                - http
                - localhost:9898/healthz
            initialDelaySeconds: 5
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - podcli
                - check
                - http
                - localhost:9898/readyz
            initialDelaySeconds: 5
            timeoutSeconds: 5
          resources:
            limits:
              cpu: 2000m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 64Mi
@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      # scale up if usage is above
      # 99% of the requested CPU (100m)
      targetAverageUtilization: 99
@@ -1,17 +0,0 @@
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: podinfo
  namespace: test
spec:
  virtualHost:
    domains:
    - '*'
    name: podinfo.default
    routes:
    - matcher:
        prefix: /
      routeAction:
        upstreamGroup:
          name: podinfo
          namespace: gloo
@@ -1,58 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger-helmtester
  namespace: kube-system
  labels:
    app: flagger-helmtester
spec:
  selector:
    matchLabels:
      app: flagger-helmtester
  template:
    metadata:
      labels:
        app: flagger-helmtester
      annotations:
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: tiller
      containers:
      - name: helmtester
        image: weaveworks/flagger-loadtester:0.4.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
        command:
        - ./loadtester
        - -port=8080
        - -log-level=info
        - -timeout=1h
        livenessProbe:
          exec:
            command:
            - wget
            - --quiet
            - --tries=1
            - --timeout=4
            - --spider
            - http://localhost:8080/healthz
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - wget
            - --quiet
            - --tries=1
            - --timeout=4
            - --spider
            - http://localhost:8080/healthz
          timeoutSeconds: 5
        resources:
          limits:
            memory: "512Mi"
            cpu: "1000m"
          requests:
            memory: "32Mi"
            cpu: "10m"
@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: flagger-helmtester
  namespace: kube-system
  labels:
    app: flagger-helmtester
spec:
  type: ClusterIP
  selector:
    app: flagger-helmtester
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
@@ -1,19 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: flagger-loadtester-bats
data:
  tests: |
    #!/usr/bin/env bats

    @test "check message" {
      curl -sS http://${URL} | jq -r .message | {
        run cut -d $' ' -f1
        [ $output = "greetings" ]
      }
    }

    @test "check headers" {
      curl -sS http://${URL}/headers | grep X-Request-Id
    }
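One way this ConfigMap is typically consumed (assuming the commented-out volume mount in the load tester Deployment below is enabled so the tests land at `/bats/tests`) is a `pre-rollout` webhook that runs the bats suite against the canary. A sketch only; the webhook URL, timeout and `URL` value are illustrative:

```yaml
webhooks:
- name: "smoke test"
  type: pre-rollout
  url: http://flagger-loadtester.test/
  timeout: 3m
  metadata:
    # the load tester runs the command in a bash shell;
    # URL feeds the http://${URL} references in the bats file
    type: bash
    cmd: "URL=podinfo-canary.test:9898 bats /bats/tests"
```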
@@ -1,67 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger-loadtester
  labels:
    app: flagger-loadtester
spec:
  selector:
    matchLabels:
      app: flagger-loadtester
  template:
    metadata:
      labels:
        app: flagger-loadtester
      annotations:
        prometheus.io/scrape: "true"
    spec:
      containers:
      - name: loadtester
        image: weaveworks/flagger-loadtester:0.6.1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
        command:
        - ./loadtester
        - -port=8080
        - -log-level=info
        - -timeout=1h
        livenessProbe:
          exec:
            command:
            - wget
            - --quiet
            - --tries=1
            - --timeout=4
            - --spider
            - http://localhost:8080/healthz
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - wget
            - --quiet
            - --tries=1
            - --timeout=4
            - --spider
            - http://localhost:8080/healthz
          timeoutSeconds: 5
        resources:
          limits:
            memory: "512Mi"
            cpu: "1000m"
          requests:
            memory: "32Mi"
            cpu: "10m"
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 10001
        # volumeMounts:
        # - name: tests
        #   mountPath: /bats
        #   readOnly: true
      # volumes:
      # - name: tests
      #   configMap:
      #     name: flagger-loadtester-bats
@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: flagger-loadtester
  labels:
    app: flagger-loadtester
spec:
  type: ClusterIP
  selector:
    app: flagger-loadtester
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
@@ -1,7 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
    appmesh.k8s.aws/sidecarInjectorWebhook: enabled
@@ -1,68 +0,0 @@
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # ingress reference
  ingressRef:
    apiVersion: extensions/v1beta1
    kind: Ingress
    name: podinfo
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  service:
    # container port
    port: 9898
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed metric checks before rollback
    threshold: 10
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # NGINX Prometheus checks
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: "latency"
      threshold: 0.5
      interval: 1m
      query: |
        histogram_quantile(0.99,
          sum(
            rate(
              http_request_duration_seconds_bucket{
                kubernetes_namespace="test",
                kubernetes_pod_name=~"podinfo-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)"
              }[1m]
            )
          ) by (le)
        )
    # external checks (optional)
    webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 http://app.example.com/"
        logCmdOutput: "true"
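Once this Canary is applied, the analysis can be followed with plain kubectl (resource names as in the manifest above):

```console
$ kubectl -n test get canary/podinfo
$ kubectl -n test describe canary/podinfo
```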
@@ -1,69 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
  labels:
    app: podinfo
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
      labels:
        app: podinfo
    spec:
      containers:
      - name: podinfod
        image: quay.io/stefanprodan/podinfo:1.7.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9898
          name: http
          protocol: TCP
        command:
        - ./podinfo
        - --port=9898
        - --level=info
        - --random-delay=false
        - --random-error=false
        env:
        - name: PODINFO_UI_COLOR
          value: green
        livenessProbe:
          exec:
            command:
            - podcli
            - check
            - http
            - localhost:9898/healthz
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        readinessProbe:
          exec:
            command:
            - podcli
            - check
            - http
            - localhost:9898/readyz
          failureThreshold: 3
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 2
        resources:
          limits:
            cpu: 1000m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 16Mi
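A canary run against this deployment is normally triggered by updating the container image, for example (the 1.7.1 tag is illustrative):

```console
$ kubectl -n test set image deployment/podinfo podinfod=quay.io/stefanprodan/podinfo:1.7.1
```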
@@ -1,19 +0,0 @@
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      # scale up if usage is above
      # 99% of the requested CPU (100m)
      targetAverageUtilization: 99
@@ -1,17 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: podinfo
  namespace: test
  labels:
    app: podinfo
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - backend:
          serviceName: podinfo
          servicePort: 9898
@@ -1,131 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: trafficsplits.split.smi-spec.io
spec:
  additionalPrinterColumns:
  - JSONPath: .spec.service
    description: The service
    name: Service
    type: string
  group: split.smi-spec.io
  names:
    kind: TrafficSplit
    listKind: TrafficSplitList
    plural: trafficsplits
    singular: trafficsplit
  scope: Namespaced
  subresources:
    status: {}
  version: v1alpha1
  versions:
  - name: v1alpha1
    served: true
    storage: true
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: smi-adapter-istio
  namespace: istio-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: smi-adapter-istio
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - servicemonitors
  verbs:
  - get
  - create
- apiGroups:
  - apps
  resourceNames:
  - smi-adapter-istio
  resources:
  - deployments/finalizers
  verbs:
  - update
- apiGroups:
  - split.smi-spec.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: smi-adapter-istio
subjects:
- kind: ServiceAccount
  name: smi-adapter-istio
  namespace: istio-system
roleRef:
  kind: ClusterRole
  name: smi-adapter-istio
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smi-adapter-istio
  namespace: istio-system
spec:
  replicas: 1
  selector:
    matchLabels:
      name: smi-adapter-istio
  template:
    metadata:
      labels:
        name: smi-adapter-istio
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      serviceAccountName: smi-adapter-istio
      containers:
      - name: smi-adapter-istio
        image: docker.io/stefanprodan/smi-adapter-istio:0.0.2-beta.1
        command:
        - smi-adapter-istio
        imagePullPolicy: Always
        env:
        - name: WATCH_NAMESPACE
          value: ""
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: OPERATOR_NAME
          value: "smi-adapter-istio"
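For context, the adapter above translates SMI TrafficSplit objects into Istio routing rules. A minimal TrafficSplit sketch against the v1alpha1 CRD installed above (service names and the 90/10 millipoint weights are illustrative):

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  # the apex service that clients address
  service: podinfo
  backends:
  - service: podinfo-primary
    weight: 900m
  - service: podinfo-canary
    weight: 100m
```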
@@ -1,21 +1,25 @@
apiVersion: v1
name: flagger
-version: 0.18.2
-appVersion: 0.18.2
+version: 1.1.0
+appVersion: 1.1.0
kubeVersion: ">=1.11.0-0"
engine: gotpl
-description: Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, Gloo or NGINX routing for traffic shifting and Prometheus metrics for canary analysis.
-home: https://docs.flagger.app
-icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
+description: Flagger is a progressive delivery operator for Kubernetes
+home: https://flagger.app
+icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
- https://github.com/weaveworks/flagger
maintainers:
- name: stefanprodan
  url: https://github.com/stefanprodan
  email: stefanprodan@users.noreply.github.com
keywords:
-- canary
-- istio
-- appmesh
-- linkerd
-- gitops
+- flagger
+- istio
+- appmesh
+- linkerd
+- gloo
+- contour
+- nginx
+- gitops
+- canary
201
charts/flagger/LICENSE
Normal file
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2018 Weaveworks. All rights reserved.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@@ -1,15 +1,18 @@
# Flagger

-[Flagger](https://github.com/weaveworks/flagger) is a Kubernetes operator that automates the promotion of
-canary deployments using Istio, Linkerd, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
-Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators
-like HTTP requests success rate, requests average duration and pods health.
-Based on the KPIs analysis a canary is promoted or aborted and the analysis result is published to Slack or MS Teams.
+[Flagger](https://github.com/weaveworks/flagger) is an operator that automates the release process of applications on Kubernetes.
+
+Flagger can run automated application analysis, testing, promotion and rollback for the following deployment strategies:
+* Canary Release (progressive traffic shifting)
+* A/B Testing (HTTP headers and cookies traffic routing)
+* Blue/Green (traffic switching and mirroring)
+
+Flagger works with service mesh solutions (Istio, Linkerd, AWS App Mesh) and with Kubernetes ingress controllers (NGINX, Skipper, Gloo, Contour).
+Flagger can be configured to send alerts to various chat platforms such as Slack, Microsoft Teams, Discord and Rocket.
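As a rough sketch of how those strategies map onto the Canary analysis spec (field names follow the CRDs later in this changeset; the values are illustrative), a progressive canary uses `stepWeight`/`maxWeight`, while A/B testing and Blue/Green use a fixed number of `iterations`:

```yaml
# Canary Release: progressive traffic shifting
analysis:
  interval: 1m
  threshold: 5
  maxWeight: 50
  stepWeight: 5
---
# A/B Testing: fixed iterations plus HTTP header routing
analysis:
  interval: 1m
  threshold: 5
  iterations: 10
  match:
  - headers:
      x-canary:
        exact: "insider"
```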
## Prerequisites

-* Kubernetes >= 1.11
-* Prometheus >= 2.6
+* Kubernetes >= 1.14

## Installing the Chart
@@ -25,26 +28,61 @@ Install Flagger's custom resource definitions:
```console
$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```

-To install the chart with the release name `flagger` for Istio:
+To install Flagger for **Istio**:

```console
$ helm upgrade -i flagger flagger/flagger \
  --namespace=istio-system \
  --set crd.create=false \
  --set meshProvider=istio \
  --set metricsServer=http://prometheus:9090
```

-To install the chart with the release name `flagger` for Linkerd:
+To install Flagger for **Linkerd**:

```console
$ helm upgrade -i flagger flagger/flagger \
  --namespace=linkerd \
  --set crd.create=false \
  --set meshProvider=linkerd \
  --set metricsServer=http://linkerd-prometheus:9090
```

To install Flagger for **AWS App Mesh**:

```console
$ helm upgrade -i flagger flagger/flagger \
  --namespace=appmesh-system \
  --set meshProvider=appmesh:v1beta2 \
  --set metricsServer=http://appmesh-prometheus:9090
```

To install Flagger and Prometheus for **NGINX** Ingress (requires controller metrics enabled):

```console
$ helm upgrade -i flagger flagger/flagger \
  --namespace=ingress-nginx \
  --set meshProvider=nginx \
  --set prometheus.install=true
```

To install Flagger and Prometheus for **Gloo** (requires Gloo discovery enabled):

```console
$ helm upgrade -i flagger flagger/flagger \
  --namespace=gloo-system \
  --set meshProvider=gloo \
  --set prometheus.install=true
```

To install Flagger and Prometheus for **Contour**:

```console
$ helm upgrade -i flagger flagger/flagger \
  --namespace=projectcontour \
  --set meshProvider=contour \
  --set ingressClass=contour \
  --set prometheus.install=true
```
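Whichever provider is chosen, a quick post-install sanity check can be done with plain kubectl (the namespace matches the Istio example above):

```console
$ kubectl -n istio-system rollout status deployment/flagger
$ kubectl -n istio-system logs deployment/flagger
```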
The [configuration](#configuration) section lists the parameters that can be configured during installation.

## Uninstalling the Chart

To uninstall/delete the `flagger` deployment:

```console
-$ helm delete --purge flagger
+$ helm delete flagger
```

The command removes all the Kubernetes components associated with the chart and deletes the release.
@@ -63,34 +101,52 @@ The following tables lists the configurable parameters of the Flagger chart and
Parameter | Description | Default
--- | --- | ---
-`image.repository` | image repository | `weaveworks/flagger`
-`image.tag` | image tag | `<VERSION>`
-`image.pullPolicy` | image pull policy | `IfNotPresent`
-`prometheus.install` | if `true`, installs Prometheus configured to scrape all pods in the cluster including the App Mesh sidecar | `false`
+`image.repository` | Image repository | `weaveworks/flagger`
+`image.tag` | Image tag | `<VERSION>`
+`image.pullPolicy` | Image pull policy | `IfNotPresent`
+`logLevel` | Log level | `info`
+`metricsServer` | Prometheus URL, used when `prometheus.install` is `false` | `http://prometheus.istio-system:9090`
+`prometheus.install` | If `true`, installs Prometheus configured to scrape all pods in the cluster | `false`
+`prometheus.retention` | Prometheus data retention | `2h`
+`selectorLabels` | List of labels that Flagger uses to create pod selectors | `app,name,app.kubernetes.io/name`
+`configTracking.enabled` | If `true`, Flagger will track changes in Secrets and ConfigMaps referenced in the target deployment | `true`
+`eventWebhook` | If set, Flagger will publish events to the given webhook | None
`slack.url` | Slack incoming webhook | None
`slack.channel` | Slack channel | None
`slack.user` | Slack username | `flagger`
`msteams.url` | Microsoft Teams incoming webhook | None
-`leaderElection.enabled` | leader election must be enabled when running more than one replica | `false`
-`leaderElection.replicaCount` | number of replicas | `1`
-`rbac.create` | if `true`, create and use RBAC resources | `true`
+`podMonitor.enabled` | If `true`, create a PodMonitor for [monitoring the metrics](https://docs.flagger.app/usage/monitoring#metrics) | `false`
+`podMonitor.namespace` | Namespace where the PodMonitor is created | the same namespace
+`podMonitor.interval` | Interval at which metrics should be scraped | `15s`
+`podMonitor.podMonitor` | Additional labels to add to the PodMonitor | `{}`
+`leaderElection.enabled` | If `true`, Flagger will run in HA mode | `false`
+`leaderElection.replicaCount` | Number of replicas | `1`
+`serviceAccount.create` | If `true`, Flagger will create a service account | `true`
+`serviceAccount.name` | The name of the service account to create or use. If not set and `serviceAccount.create` is `true`, a name is generated using the Flagger fullname | `""`
+`serviceAccount.annotations` | Annotations for the service account | `{}`
-`ingressAnnotationsPrefix` | Annotations prefix for ingresses | `custom.ingress.kubernetes.io`
+`rbac.create` | If `true`, create and use RBAC resources | `true`
+`rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false`
-`crd.create` | if `true`, create Flagger's CRDs | `true`
-`resources.requests/cpu` | pod CPU request | `10m`
-`resources.requests/memory` | pod memory request | `32Mi`
-`resources.limits/cpu` | pod CPU limit | `1000m`
-`resources.limits/memory` | pod memory limit | `512Mi`
-`affinity` | node/pod affinities | None
-`nodeSelector` | node labels for pod assignment | `{}`
-`tolerations` | list of node taints to tolerate | `[]`
+`crd.create` | If `true`, create Flagger's CRDs (should be enabled for Helm v2 only) | `false`
+`resources.requests/cpu` | Pod CPU request | `10m`
+`resources.requests/memory` | Pod memory request | `32Mi`
+`resources.limits/cpu` | Pod CPU limit | `1000m`
+`resources.limits/memory` | Pod memory limit | `512Mi`
+`affinity` | Node/pod affinities | None
+`nodeSelector` | Node labels for pod assignment | `{}`
+`threadiness` | Number of controller workers | `2`
+`tolerations` | List of node taints to tolerate | `[]`
+`istio.kubeconfig.secretName` | The name of the Kubernetes secret containing the Istio shared control plane kubeconfig | None
+`istio.kubeconfig.key` | The name of the Kubernetes secret data key that contains the Istio control plane kubeconfig | `kubeconfig`
+`ingressAnnotationsPrefix` | Annotations prefix for NGINX ingresses | None
+`ingressClass` | Ingress class used for annotating HTTPProxy objects, e.g. `contour` | None
+`podPriorityClassName` | PriorityClass name for pod priority configuration | ""

Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,

```console
$ helm upgrade -i flagger flagger/flagger \
-  --namespace istio-system \
-  --set crd.create=false \
+  --namespace flagger-system \
  --set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
  --set slack.channel=general
```
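The same parameters can also be kept in a values file instead of repeated `--set` flags (the filename here is illustrative):

```console
$ helm upgrade -i flagger flagger/flagger \
  --namespace flagger-system \
  -f flagger-values.yaml
```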
875
charts/flagger/crds/crd.yaml
Normal file
@@ -0,0 +1,875 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: canaries.flagger.app
  annotations:
    helm.sh/resource-policy: keep
spec:
  group: flagger.app
  version: v1beta1
  versions:
  - name: v1beta1
    served: true
    storage: true
  - name: v1alpha3
    served: true
    storage: false
  - name: v1alpha2
    served: false
    storage: false
  - name: v1alpha1
    served: false
    storage: false
  names:
    plural: canaries
    singular: canary
    kind: Canary
    categories:
    - all
  scope: Namespaced
  subresources:
    status: {}
  additionalPrinterColumns:
  - name: Status
    type: string
    JSONPath: .status.phase
  - name: Weight
    type: string
    JSONPath: .status.canaryWeight
  - name: FailedChecks
    type: string
    JSONPath: .status.failedChecks
    priority: 1
  - name: Interval
    type: string
    JSONPath: .spec.analysis.interval
    priority: 1
  - name: Mirror
    type: boolean
    JSONPath: .spec.analysis.mirror
    priority: 1
  - name: StepWeight
    type: string
    JSONPath: .spec.analysis.stepWeight
    priority: 1
  - name: MaxWeight
    type: string
    JSONPath: .spec.analysis.maxWeight
    priority: 1
  - name: LastTransitionTime
    type: string
    JSONPath: .status.lastTransitionTime
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
          - targetRef
          - service
          - analysis
          properties:
            provider:
              description: Traffic management provider
              type: string
            metricsServer:
              description: Prometheus URL
              type: string
            progressDeadlineSeconds:
              description: Deployment progress deadline
              type: number
            targetRef:
              description: Target selector
              type: object
              required: ["apiVersion", "kind", "name"]
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                  enum:
                  - DaemonSet
                  - Deployment
                  - Service
                name:
                  type: string
            autoscalerRef:
              description: HPA selector
              type: object
              required: ["apiVersion", "kind", "name"]
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                  enum:
                  - HorizontalPodAutoscaler
                name:
                  type: string
            ingressRef:
              description: NGINX ingress selector
              type: object
              required: ["apiVersion", "kind", "name"]
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                  enum:
                  - Ingress
                name:
                  type: string
            service:
              description: Kubernetes Service spec
              type: object
              required: ["port"]
              properties:
                name:
                  description: Kubernetes service name
                  type: string
                port:
                  description: Container port number
                  type: number
                portName:
                  description: Container port name
                  type: string
                targetPort:
                  description: Container target port name
                  anyOf:
                  - type: string
                  - type: number
                portDiscovery:
                  description: Enable port discovery
                  type: boolean
                timeout:
                  description: HTTP or gRPC request timeout
                  type: string
                meshName:
                  description: AppMesh mesh name
                  type: string
                backends:
                  description: AppMesh backend array
                  type: array
                  items:
                    type: string
                hosts:
                  description: The list of host names for this service
                  type: array
                  items:
                    type: string
                match:
                  description: URI match conditions
                  type: array
                  items:
                    type: object
                    properties:
                      uri:
                        type: object
                        oneOf:
                        - required: ["exact"]
                        - required: ["prefix"]
                        - required: ["suffix"]
                        - required: ["regex"]
                        properties:
                          exact:
                            format: string
                            type: string
                          prefix:
                            format: string
                            type: string
                          suffix:
                            format: string
                            type: string
                          regex:
                            format: string
                            type: string
                retries:
                  description: Retry policy for HTTP requests
                  type: object
                  properties:
                    attempts:
                      description: Number of retries for a given request
                      format: int32
                      type: integer
                    perTryTimeout:
                      description: Timeout per retry attempt for a given request
                      type: string
                    retryOn:
                      description: Specifies the conditions under which retry takes place
                      format: string
                      type: string
                rewrite:
                  description: Rewrite HTTP URIs
                  type: object
                  properties:
                    uri:
                      format: string
                      type: string
                headers:
                  description: Headers operations
                  type: object
                  properties:
                    request:
                      properties:
                        add:
                          additionalProperties:
                            format: string
                            type: string
                          type: object
                        remove:
                          items:
                            format: string
                            type: string
                          type: array
                        set:
                          additionalProperties:
                            format: string
                            type: string
                          type: object
                      type: object
                    response:
                      properties:
                        add:
                          additionalProperties:
                            format: string
                            type: string
                          type: object
                        remove:
                          items:
                            format: string
                            type: string
                          type: array
                        set:
                          additionalProperties:
                            format: string
                            type: string
                          type: object
                      type: object
                gateways:
                  description: The list of Istio gateways for this virtual service
                  type: array
                  items:
                    type: string
                corsPolicy:
                  description: Istio Cross-Origin Resource Sharing policy (CORS)
                  type: object
                  properties:
                    allowCredentials:
                      type: boolean
                    allowHeaders:
                      items:
                        format: string
                        type: string
                      type: array
                    allowMethods:
                      description: List of HTTP methods allowed to access the resource
                      items:
                        format: string
                        type: string
                      type: array
                    allowOrigin:
                      description: The list of origins that are allowed to perform CORS requests.
                      items:
                        format: string
                        type: string
                      type: array
                    allowOrigins:
                      description: String patterns that match allowed origins
                      type: array
                      items:
                        type: object
                        oneOf:
                        - required:
                          - exact
                        - required:
                          - prefix
                        - required:
                          - regex
                        properties:
                          exact:
                            format: string
                            type: string
                          prefix:
                            format: string
                            type: string
                          regex:
                            format: string
                            type: string
                    exposeHeaders:
                      items:
                        format: string
                        type: string
                      type: array
                    maxAge:
                      type: string
                trafficPolicy:
                  description: Istio traffic policy
                  type: object
                  properties:
                    connectionPool:
                      properties:
                        http:
                          description: HTTP connection pool settings.
                          type: object
                          properties:
                            h2UpgradePolicy:
                              description: Specify if http1.1 connection should be upgraded to http2 for the associated destination.
                              enum:
                              - DEFAULT
                              - DO_NOT_UPGRADE
                              - UPGRADE
                              type: string
                            http1MaxPendingRequests:
                              description: Maximum number of pending HTTP requests to a destination.
                              format: int32
                              type: integer
                            http2MaxRequests:
                              description: Maximum number of requests to a backend.
                              format: int32
                              type: integer
                            idleTimeout:
                              description: The idle timeout for upstream connection pool connections.
                              type: string
                            maxRequestsPerConnection:
                              description: Maximum number of requests per connection to a backend.
                              format: int32
                              type: integer
                            maxRetries:
                              format: int32
                              type: integer
                    loadBalancer:
                      description: Settings controlling the load balancer algorithms.
                      type: object
                      oneOf:
                      - required:
                        - simple
                      - properties:
                          consistentHash:
                            oneOf:
                            - required:
                              - httpHeaderName
                            - required:
                              - httpCookie
                            - required:
                              - useSourceIp
                            - required:
                              - httpQueryParameterName
                        required:
                        - consistentHash
                      properties:
                        consistentHash:
                          properties:
                            httpCookie:
                              description: Hash based on HTTP cookie.
                              properties:
                                name:
                                  description: Name of the cookie.
                                  format: string
                                  type: string
                                path:
                                  description: Path to set for the cookie.
                                  format: string
                                  type: string
                                ttl:
                                  description: Lifetime of the cookie.
                                  type: string
                              type: object
                            httpHeaderName:
                              description: Hash based on a specific HTTP header.
                              format: string
                              type: string
                            httpQueryParameterName:
                              description: Hash based on a specific HTTP query parameter.
                              format: string
                              type: string
                            minimumRingSize:
                              type: integer
                            useSourceIp:
                              description: Hash based on the source IP address.
                              type: boolean
                          type: object
                        localityLbSetting:
                          properties:
                            distribute:
                              description: 'Optional: only one of distribute or failover can be set.'
                              items:
                                properties:
                                  from:
                                    description: Originating locality, '/' separated, e.g.
                                    format: string
                                    type: string
                                  to:
                                    additionalProperties:
                                      type: integer
                                    description: Map of upstream localities to traffic distribution weights.
                                    type: object
                                type: object
                              type: array
                            enabled:
                              description: Enable locality load balancing; this is DestinationRule-level and will override mesh wide settings in entirety.
                              type: boolean
                            failover:
                              description: 'Optional: only failover or distribute can be set.'
                              items:
                                properties:
                                  from:
                                    description: Originating region.
                                    format: string
                                    type: string
                                  to:
                                    format: string
                                    type: string
                                type: object
                              type: array
                          type: object
                        simple:
                          enum:
                          - ROUND_ROBIN
                          - LEAST_CONN
                          - RANDOM
                          - PASSTHROUGH
                          type: string
                    outlierDetection:
                      description: Settings controlling eviction of unhealthy hosts from the load balancing pool.
                      type: object
                      properties:
                        baseEjectionTime:
                          description: Minimum ejection duration.
                          type: string
                        consecutive5xxErrors:
                          description: Number of 5xx errors before a host is ejected from the connection pool.
                          type: integer
                        consecutiveErrors:
                          format: int32
                          type: integer
                        consecutiveGatewayErrors:
                          description: Number of gateway errors before a host is ejected from the connection pool.
                          format: int32
                          type: integer
                        interval:
                          description: Time interval between ejection sweep analysis.
                          type: string
                        maxEjectionPercent:
                          format: int32
                          type: integer
                        minHealthPercent:
                          format: int32
                          type: integer
                    tls:
                      description: Istio TLS related settings for connections to the upstream service
                      type: object
                      properties:
                        caCertificates:
                          format: string
                          type: string
                        clientCertificate:
                          description: REQUIRED if mode is `MUTUAL`.
                          format: string
                          type: string
                        mode:
                          enum:
                          - DISABLE
                          - SIMPLE
                          - MUTUAL
                          - ISTIO_MUTUAL
                          type: string
                        privateKey:
                          description: REQUIRED if mode is `MUTUAL`.
                          format: string
                          type: string
                        sni:
                          description: SNI string to present to the server during TLS handshake.
                          format: string
                          type: string
                        subjectAltNames:
                          items:
                            format: string
                            type: string
                          type: array
                apex:
                  description: Metadata to add to the apex service
                  type: object
                  properties:
                    labels:
                      type: object
                      additionalProperties:
                        type: string
                    annotations:
                      type: object
                      additionalProperties:
                        type: string
                primary:
                  description: Metadata to add to the primary service
                  type: object
                  properties:
                    labels:
                      type: object
                      additionalProperties:
                        type: string
                    annotations:
                      type: object
                      additionalProperties:
                        type: string
                canary:
                  description: Metadata to add to the canary service
                  type: object
                  properties:
                    labels:
                      type: object
                      additionalProperties:
                        type: string
                    annotations:
                      type: object
                      additionalProperties:
                        type: string
            skipAnalysis:
              description: Skip analysis and promote canary
              type: boolean
            revertOnDeletion:
              description: Revert mutated resources to original spec on deletion
              type: boolean
            analysis:
              description: Canary analysis for this canary
              type: object
              oneOf:
              - required: ["interval", "threshold", "iterations"]
              - required: ["interval", "threshold", "stepWeight"]
              properties:
                interval:
                  description: Schedule interval for this canary
                  type: string
                  pattern: "^[0-9]+(m|s)"
                iterations:
                  description: Number of checks to run for A/B Testing and Blue/Green
                  type: number
                threshold:
                  description: Max number of failed checks before rollback
                  type: number
                maxWeight:
                  description: Max traffic percentage routed to canary
                  type: number
                stepWeight:
                  description: Incremental traffic percentage step for the analysis phase
                  type: number
                stepWeightPromotion:
                  description: Incremental traffic percentage step for the promotion phase
                  type: number
                mirror:
                  description: Mirror traffic to canary
                  type: boolean
                mirrorWeight:
                  description: Percentage of traffic to be mirrored
                  type: number
                match:
                  description: A/B testing match conditions
                  type: array
                  items:
                    type: object
                    properties:
                      headers:
                        type: object
                        additionalProperties:
                          oneOf:
                          - required: ["exact"]
                          - required: ["prefix"]
                          - required: ["suffix"]
                          - required: ["regex"]
                          type: object
                          properties:
                            exact:
                              format: string
                              type: string
                            prefix:
                              format: string
                              type: string
                            suffix:
                              format: string
                              type: string
                            regex:
                              description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax)
                              format: string
                              type: string
                      sourceLabels:
                        description: Applicable only when the 'mesh' gateway is included in the service.gateways list
                        type: object
                        additionalProperties:
                          format: string
                          type: string
                metrics:
                  description: Metric check list for this canary
                  type: array
                  items:
                    type: object
                    required: ["name"]
                    properties:
                      name:
                        description: Name of the metric
                        type: string
                      interval:
                        description: Interval of the query
                        type: string
                        pattern: "^[0-9]+(m|s)"
                      threshold:
                        description: Max value accepted for this metric
                        type: number
                      thresholdRange:
                        description: Range accepted for this metric
                        type: object
                        properties:
                          min:
                            description: Min value accepted for this metric
                            type: number
                          max:
                            description: Max value accepted for this metric
                            type: number
                      query:
                        description: Prometheus query
                        type: string
                      templateRef:
                        description: Metric template reference
                        type: object
                        required: ["name"]
                        properties:
                          name:
                            description: Name of this metric template
                            type: string
                          namespace:
                            description: Namespace of this metric template
                            type: string
                webhooks:
                  description: Webhook list for this canary
                  type: array
                  items:
                    type: object
                    required: ["name", "url"]
                    properties:
                      name:
                        description: Name of the webhook
                        type: string
                      type:
                        description: Type of the webhook pre, post or during rollout
                        type: string
                        enum:
                        - ""
                        - confirm-rollout
                        - pre-rollout
                        - rollout
                        - confirm-promotion
                        - post-rollout
                        - event
                        - rollback
                      url:
                        description: URL address of this webhook
                        type: string
                        format: url
                      timeout:
                        description: Request timeout for this webhook
                        type: string
                        pattern: "^[0-9]+(m|s)"
                      metadata:
                        description: Metadata (key-value pairs) for this webhook
                        type: object
                        additionalProperties:
                          type: string
        status:
          properties:
            phase:
              description: Analysis phase of this canary
              type: string
              enum:
              - ""
              - Initializing
              - Initialized
              - Waiting
              - Progressing
              - Promoting
              - Finalising
              - Succeeded
              - Failed
              - Terminating
              - Terminated
            canaryWeight:
              description: Traffic weight percentage routed to canary
              type: number
            failedChecks:
              description: Failed check count of the current canary analysis
              type: number
            iterations:
              description: Iteration count of the current canary analysis
              type: number
            lastAppliedSpec:
              description: LastAppliedSpec of this canary
              type: string
            lastTransitionTime:
              description: LastTransitionTime of this canary
              format: date-time
              type: string
            conditions:
              description: Status conditions of this canary
              type: array
              items:
                type: object
                required: ["type", "status", "reason"]
                properties:
                  lastTransitionTime:
                    description: LastTransitionTime of this condition
                    format: date-time
                    type: string
                  lastUpdateTime:
                    description: LastUpdateTime of this condition
                    format: date-time
                    type: string
                  message:
                    description: Message associated with this condition
                    type: string
                  reason:
                    description: Reason for the current status of this condition
                    type: string
                  status:
                    description: Status of this condition
                    type: string
                  type:
                    description: Type of this condition
                    type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: metrictemplates.flagger.app
  annotations:
    helm.sh/resource-policy: keep
spec:
  group: flagger.app
  version: v1beta1
  versions:
  - name: v1beta1
    served: true
    storage: true
  - name: v1alpha1
    served: true
    storage: false
  names:
    plural: metrictemplates
    singular: metrictemplate
    kind: MetricTemplate
    categories:
    - all
  scope: Namespaced
  subresources:
    status: {}
  additionalPrinterColumns:
  - name: Provider
    type: string
    JSONPath: .spec.provider.type
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
          - provider
          - query
          properties:
            provider:
              description: Provider of this metric template
              type: object
              required:
              - type
              properties:
                type:
                  description: Type of this provider
                  type: string
                  enum:
                  - prometheus
                  - influxdb
                  - datadog
                  - cloudwatch
                address:
                  description: API address of this provider
                  type: string
                secretRef:
                  description: Kubernetes secret reference containing the provider credentials
                  type: object
                  required:
                  - name
                  properties:
                    name:
                      description: Name of the Kubernetes secret
                      type: string
                region:
                  description: Region of the provider
                  type: string
            query:
              description: Query of this metric template
              type: string
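For illustration, a custom metric backed by this CRD might look like the following, a sketch modeled on Flagger's documented `not-found-percentage` example (the metric name, namespace and Prometheus address are assumptions):

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: not-found-percentage
  namespace: istio-system
spec:
  provider:
    type: prometheus
    address: http://prometheus.istio-system:9090
  # Flagger interpolates namespace/target/interval at analysis time
  query: |
    100 - sum(
      rate(istio_requests_total{
        destination_workload_namespace="{{ namespace }}",
        destination_workload="{{ target }}",
        response_code!="404"
      }[{{ interval }}])
    ) / sum(
      rate(istio_requests_total{
        destination_workload_namespace="{{ namespace }}",
        destination_workload="{{ target }}"
      }[{{ interval }}])
    ) * 100
```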
---
|
||||
apiVersion: apiextensions.k8s.io/v1beta1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: alertproviders.flagger.app
|
||||
annotations:
|
||||
helm.sh/resource-policy: keep
|
||||
spec:
|
||||
group: flagger.app
|
||||
version: v1beta1
|
||||
versions:
|
||||
- name: v1beta1
|
||||
served: true
|
||||
storage: true
|
||||
names:
|
||||
plural: alertproviders
|
||||
singular: alertprovider
|
||||
kind: AlertProvider
|
||||
categories:
|
||||
- all
|
||||
scope: Namespaced
|
||||
subresources:
|
||||
status: {}
|
||||
additionalPrinterColumns:
|
||||
- name: Type
|
||||
type: string
|
||||
JSONPath: .spec.type
|
||||
validation:
|
||||
openAPIV3Schema:
|
||||
properties:
|
||||
spec:
|
||||
oneOf:
|
||||
- required:
|
||||
- type
|
||||
- address
|
||||
- required:
|
||||
- type
|
||||
- secretRef
|
||||
properties:
|
||||
type:
|
||||
description: Type of this provider
|
||||
type: string
|
||||
enum:
|
||||
- slack
|
||||
- msteams
|
||||
- discord
|
||||
- rocket
|
||||
address:
|
||||
description: Hook URL address of this provider
|
||||
type: string
|
||||
secretRef:
|
||||
description: Kubernetes secret reference containing the provider address
|
||||
type: object
|
||||
required:
|
||||
- name
|
||||
properties:
|
||||
name:
|
||||
description: Name of the Kubernetes secret
|
||||
type: string
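
Since the `oneOf` above requires either an inline `address` or a `secretRef`, a sketch of the secret-backed form follows; the names are placeholders, and it assumes Flagger's convention that the referenced secret carries the hook URL under an `address` data key:

```yaml
apiVersion: flagger.app/v1beta1
kind: AlertProvider
metadata:
  name: on-call        # placeholder name
  namespace: flagger   # placeholder namespace
spec:
  type: slack
  secretRef:
    name: on-call-slack-url   # assumed to hold the webhook URL in its `address` key
```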

@@ -3,6 +3,10 @@ apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "flagger.serviceAccountName" . }}
  annotations:
{{- if .Values.serviceAccount.annotations }}
{{ toYaml .Values.serviceAccount.annotations | indent 4 }}
{{- end }}
  labels:
    helm.sh/chart: {{ template "flagger.chart" . }}
    app.kubernetes.io/name: {{ template "flagger.name" . }}

@@ -1,288 +1,6 @@
{{- if .Values.crd.create }}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: canaries.flagger.app
  annotations:
    helm.sh/resource-policy: keep
spec:
  group: flagger.app
  version: v1alpha3
  versions:
    - name: v1alpha3
      served: true
      storage: true
    - name: v1alpha2
      served: true
      storage: false
    - name: v1alpha1
      served: true
      storage: false
  names:
    plural: canaries
    singular: canary
    kind: Canary
    categories:
      - all
  scope: Namespaced
  subresources:
    status: {}
  additionalPrinterColumns:
    - name: Status
      type: string
      JSONPath: .status.phase
    - name: Weight
      type: string
      JSONPath: .status.canaryWeight
    - name: LastTransitionTime
      type: string
      JSONPath: .status.lastTransitionTime
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
            - targetRef
            - service
            - canaryAnalysis
          properties:
            provider:
              description: Traffic management provider
              type: string
            progressDeadlineSeconds:
              description: Deployment progress deadline
              type: number
            targetRef:
              description: Deployment selector
              type: object
              required: ['apiVersion', 'kind', 'name']
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                name:
                  type: string
            autoscalerRef:
              description: HPA selector
              anyOf:
                - type: string
                - type: object
              required: ['apiVersion', 'kind', 'name']
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                name:
                  type: string
            ingressRef:
              description: NGINX ingress selector
              anyOf:
                - type: string
                - type: object
              required: ['apiVersion', 'kind', 'name']
              properties:
                apiVersion:
                  type: string
                kind:
                  type: string
                name:
                  type: string
            service:
              type: object
              required: ['port']
              properties:
                port:
                  description: Container port number
                  type: number
                portName:
                  description: Container port name
                  type: string
                portDiscovery:
                  description: Enable port discovery
                  type: boolean
                meshName:
                  description: AppMesh mesh name
                  type: string
                backends:
                  description: AppMesh backend array
                  anyOf:
                    - type: string
                    - type: object
                timeout:
                  description: Istio HTTP or gRPC request timeout
                  type: string
                trafficPolicy:
                  description: Istio traffic policy
                  anyOf:
                    - type: string
                    - type: object
                match:
                  description: Istio URL match conditions
                  anyOf:
                    - type: string
                    - type: array
                rewrite:
                  description: Istio URL rewrite
                  anyOf:
                    - type: string
                    - type: object
                headers:
                  description: Istio headers operations
                  anyOf:
                    - type: string
                    - type: object
                corsPolicy:
                  description: Istio CORS policy
                  anyOf:
                    - type: string
                    - type: object
                gateways:
                  description: Istio gateways list
                  anyOf:
                    - type: string
                    - type: array
                hosts:
                  description: Istio hosts list
                  anyOf:
                    - type: string
                    - type: array
            skipAnalysis:
              type: boolean
            canaryAnalysis:
              properties:
                interval:
                  description: Canary schedule interval
                  type: string
                  pattern: "^[0-9]+(m|s)"
                iterations:
                  description: Number of checks to run for A/B Testing and Blue/Green
                  type: number
                threshold:
                  description: Max number of failed checks before rollback
                  type: number
                maxWeight:
                  description: Max traffic percentage routed to canary
                  type: number
                stepWeight:
                  description: Canary incremental traffic percentage step
                  type: number
                match:
                  description: A/B testing match conditions
                  anyOf:
                    - type: string
                    - type: array
                metrics:
                  description: Prometheus query list for this canary
                  type: array
                  properties:
                    items:
                      type: object
                      required: ['name', 'threshold']
                      properties:
                        name:
                          description: Name of the Prometheus metric
                          type: string
                        interval:
                          description: Interval of the promql query
                          type: string
                          pattern: "^[0-9]+(m|s)"
                        threshold:
                          description: Max scalar value accepted for this metric
                          type: number
                        query:
                          description: Prometheus query
                          type: string
                webhooks:
                  description: Webhook list for this canary
                  type: array
                  properties:
                    items:
                      type: object
                      required: ['name', 'url', 'timeout']
                      properties:
                        name:
                          description: Name of the webhook
                          type: string
                        type:
                          description: Type of the webhook pre, post or during rollout
                          type: string
                          enum:
                            - ""
                            - confirm-rollout
                            - pre-rollout
                            - rollout
                            - post-rollout
                        url:
                          description: URL address of this webhook
                          type: string
                          format: url
                        timeout:
                          description: Request timeout for this webhook
                          type: string
                          pattern: "^[0-9]+(m|s)"
                        metadata:
                          description: Metadata (key-value pairs) for this webhook
                          anyOf:
                            - type: string
                            - type: object
        status:
          properties:
            phase:
              description: Analysis phase of this canary
              type: string
              enum:
                - ""
                - Initializing
                - Initialized
                - Waiting
                - Progressing
                - Finalising
                - Succeeded
                - Failed
            canaryWeight:
              description: Traffic weight percentage routed to canary
              type: number
            failedChecks:
              description: Failed check count of the current canary analysis
              type: number
            iterations:
              description: Iteration count of the current canary analysis
              type: number
            lastAppliedSpec:
              description: LastAppliedSpec of this canary
              type: string
            lastTransitionTime:
              description: LastTransitionTime of this canary
              format: date-time
              type: string
            conditions:
              description: Status conditions of this canary
              type: array
              properties:
                items:
                  type: object
                  required: ['type', 'status', 'reason']
                  properties:
                    lastTransitionTime:
                      description: LastTransitionTime of this condition
                      format: date-time
                      type: string
                    lastUpdateTime:
                      description: LastUpdateTime of this condition
                      format: date-time
                      type: string
                    message:
                      description: Message associated with this condition
                      type: string
                    reason:
                      description: Reason for the current status of this condition
                      type: string
                    status:
                      description: Status of this condition
                      type: string
                    type:
                      description: Type of this condition
                      type: string
{{- end }}
{{- if .Values.crd.create -}}
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" -}}
{{ $.Files.Get $path }}
---
{{- end -}}
{{- end -}}
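
With the chart now sourcing CRDs from `crds/*.yaml` behind `crd.create` (which the values change further below flips to `false` by default), the definitions would typically be applied out of band before installing the chart. A sketch, assuming the upstream artifact path still holds:

```console
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```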

@@ -9,8 +9,10 @@ metadata:
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  replicas: {{ .Values.leaderElection.replicaCount }}
{{- if eq .Values.leaderElection.enabled false }}
  strategy:
    type: Recreate
{{- end }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ template "flagger.name" . }}
@@ -20,6 +22,10 @@ spec:
      labels:
        app.kubernetes.io/name: {{ template "flagger.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
      annotations:
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
    spec:
      serviceAccountName: {{ template "flagger.serviceAccountName" . }}
      affinity:
@@ -36,11 +42,26 @@ spec:
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
{{- end }}
      volumes:
{{- if .Values.istio.kubeconfig.secretName }}
        - name: kubeconfig
          secret:
            secretName: "{{ .Values.istio.kubeconfig.secretName }}"
{{- end }}
{{- if .Values.podPriorityClassName }}
      priorityClassName: {{ .Values.podPriorityClassName }}
{{- end }}
      containers:
        - name: flagger
{{- if .Values.securityContext.enabled }}
          securityContext:
            readOnlyRootFilesystem: true
            runAsUser: 10001
{{ toYaml .Values.securityContext.context | indent 12 }}
{{- end }}
          volumeMounts:
{{- if .Values.istio.kubeconfig.secretName }}
            - name: kubeconfig
              mountPath: "/tmp/istio-host"
{{- end }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
@@ -48,7 +69,7 @@ spec:
            containerPort: 8080
          command:
            - ./flagger
            - -log-level=info
            - -log-level={{ .Values.logLevel }}
{{- if .Values.meshProvider }}
            - -mesh-provider={{ .Values.meshProvider }}
{{- end }}
@@ -57,12 +78,22 @@ spec:
{{- else }}
            - -metrics-server={{ .Values.metricsServer }}
{{- end }}
{{- if .Values.selectorLabels }}
            - -selector-labels={{ .Values.selectorLabels }}
{{- end }}
{{- if .Values.configTracking }}
            - -enable-config-tracking={{ .Values.configTracking.enabled }}
{{- end }}
{{- if .Values.namespace }}
            - -namespace={{ .Values.namespace }}
{{- end }}
{{- if .Values.slack.url }}
            - -slack-url={{ .Values.slack.url }}
{{- end }}
{{- if .Values.slack.user }}
            - -slack-user={{ .Values.slack.user }}
{{- end }}
{{- if .Values.slack.channel }}
            - -slack-channel={{ .Values.slack.channel }}
{{- end }}
{{- if .Values.msteams.url }}
@@ -72,6 +103,21 @@ spec:
            - -enable-leader-election=true
            - -leader-election-namespace={{ .Release.Namespace }}
{{- end }}
{{- if .Values.ingressAnnotationsPrefix }}
            - -ingress-annotations-prefix={{ .Values.ingressAnnotationsPrefix }}
{{- end }}
{{- if .Values.ingressClass }}
            - -ingress-class={{ .Values.ingressClass }}
{{- end }}
{{- if .Values.eventWebhook }}
            - -event-webhook={{ .Values.eventWebhook }}
{{- end }}
{{- if .Values.istio.kubeconfig.secretName }}
            - -kubeconfig-service-mesh=/tmp/istio-host/{{ .Values.istio.kubeconfig.key }}
{{- end }}
{{- if .Values.threadiness }}
            - -threadiness={{ .Values.threadiness }}
{{- end }}
          livenessProbe:
            exec:
              command:
@@ -92,6 +138,10 @@ spec:
                - --spider
                - http://localhost:8080/healthz
            timeoutSeconds: 5
{{- if .Values.env }}
          env:
{{ toYaml .Values.env | indent 12 }}
{{- end }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
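
The template above only renders a flag when its value is set, so enabling the newly wired options is a matter of passing values at install time. A sketch with placeholder release and namespace names:

```console
helm upgrade -i flagger flagger/flagger \
  --namespace flagger-system \
  --set leaderElection.enabled=true \
  --set threadiness=4 \
  --set eventWebhook=http://event-recorder.flagger-system/   # placeholder endpoint
```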

charts/flagger/templates/podmonitor.yaml (new file, 27 lines)
@@ -0,0 +1,27 @@
{{- if .Values.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    helm.sh/chart: {{ template "flagger.chart" . }}
    app.kubernetes.io/name: {{ template "flagger.name" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
{{- range $k, $v := .Values.podMonitor.additionalLabels }}
    {{ $k }}: {{ $v | quote }}
{{- end }}
  name: {{ include "flagger.fullname" . }}
  namespace: {{ .Values.podMonitor.namespace | default .Release.Namespace }}
spec:
  podMetricsEndpoints:
    - interval: {{ .Values.podMonitor.interval }}
      path: /metrics
      port: http
  namespaceSelector:
    matchNames:
      - {{ .Release.Namespace }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ template "flagger.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
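
To have the Prometheus Operator pick this monitor up, the corresponding values would be set roughly as below; the namespace and extra label are assumptions about a typical operator setup, not chart defaults:

```yaml
podMonitor:
  enabled: true
  namespace: monitoring              # assumed operator namespace
  interval: 15s
  additionalLabels:
    release: kube-prometheus-stack   # assumed label the operator selects on
```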

@@ -133,38 +133,22 @@ data:
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: kubernetes;https

      # Scrape config for nodes
      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics

      # scrape config for cAdvisor
      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
@@ -174,6 +158,14 @@ data:
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
        # exclude high cardinality metrics
        metric_relabel_configs:
        - source_labels: [__name__]
          regex: (container|machine)_(cpu|memory|network|fs)_(.+)
          action: keep
        - source_labels: [__name__]
          regex: container_memory_failures_total
          action: drop

      # scrape config for pods
      - job_name: kubernetes-pods
@@ -238,10 +230,10 @@ spec:
      serviceAccountName: {{ template "flagger.serviceAccountName" . }}-prometheus
      containers:
        - name: prometheus
          image: "docker.io/prom/prometheus:v2.10.0"
          image: {{ .Values.prometheus.image }}
          imagePullPolicy: IfNotPresent
          args:
            - '--storage.tsdb.retention=2h'
            - '--storage.tsdb.retention={{ .Values.prometheus.retention }}'
            - '--config.file=/etc/prometheus/prometheus.yml'
          ports:
            - containerPort: 9090

@@ -14,69 +14,164 @@ rules:
    resources:
      - events
      - configmaps
      - configmaps/finalizers
      - secrets
      - secrets/finalizers
      - services
    verbs: ["*"]
      - services/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - apps
    resources:
      - daemonsets
      - daemonsets/finalizers
      - deployments
    verbs: ["*"]
      - deployments/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs: ["*"]
      - horizontalpodautoscalers/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - "extensions"
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingresses/status
    verbs: ["*"]
      - ingresses/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - flagger.app
    resources:
      - canaries
      - canaries/status
    verbs: ["*"]
      - metrictemplates
      - metrictemplates/status
      - alertproviders
      - alertproviders/status
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - networking.istio.io
    resources:
      - virtualservices
      - virtualservices/status
      - virtualservices/finalizers
      - destinationrules
      - destinationrules/status
    verbs: ["*"]
      - destinationrules/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - appmesh.k8s.aws
    resources:
      - meshes
      - meshes/status
      - virtualnodes
      - virtualnodes/status
      - virtualnodes/finalizers
      - virtualrouters
      - virtualrouters/finalizers
      - virtualservices
      - virtualservices/status
    verbs: ["*"]
      - virtualservices/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - split.smi-spec.io
    resources:
      - trafficsplits
    verbs: ["*"]
      - trafficsplits/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - specs.smi-spec.io
    resources:
      - httproutegroups
      - httproutegroups/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - gloo.solo.io
    resources:
      - settings
      - upstreams
      - upstreams/finalizers
      - upstreamgroups
      - proxies
      - virtualservices
    verbs: ["*"]
      - upstreamgroups/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - gateway.solo.io
      - projectcontour.io
    resources:
      - virtualservices
      - gateways
    verbs: ["*"]
      - httpproxies
      - httpproxies/finalizers
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - nonResourceURLs:
      - /version
    verbs:
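
A quick way to verify the narrowed rules after rollout is `kubectl auth can-i`; the service account and namespace below are assumptions about a typical install:

```console
kubectl auth can-i patch canaries.flagger.app \
  --as=system:serviceaccount:flagger-system:flagger        # expected: yes
kubectl auth can-i deletecollection deployments \
  --as=system:serviceaccount:flagger-system:flagger        # expected: no, the "*" verb is gone
```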

@@ -2,18 +2,54 @@

image:
  repository: weaveworks/flagger
  tag: 0.18.2
  tag: 1.1.0
  pullPolicy: IfNotPresent
  pullSecret:

# accepted values are debug, info, warning, error (defaults to info)
logLevel: info

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"
  appmesh.k8s.aws/sidecarInjectorWebhook: disabled

# priority class name for pod priority configuration
podPriorityClassName: ""

metricsServer: "http://prometheus:9090"

# accepted values are istio, appmesh, nginx or supergloo:mesh.namespace (defaults to istio)
# accepted values are kubernetes, istio, linkerd, appmesh, nginx, gloo or supergloo:mesh.namespace (defaults to istio)
meshProvider: ""

# single namespace restriction
namespace: ""

# list of pod labels that Flagger uses to create pod selectors
# defaults to: app,name,app.kubernetes.io/name
selectorLabels: ""

# when enabled, flagger will track changes in Secrets and ConfigMaps referenced in the target deployment (enabled by default)
configTracking:
  enabled: true

# annotations prefix for NGINX ingresses
ingressAnnotationsPrefix: ""

# ingress class used for annotating HTTPProxy objects
ingressClass: ""

# when enabled, it will add a security context for the flagger pod. You may
# need to disable this if you are running flagger on OpenShift
securityContext:
  enabled: true
  context:
    readOnlyRootFilesystem: true
    runAsUser: 10001

# when specified, flagger will publish events to the provided webhook
eventWebhook: ""

slack:
  user: flagger
  channel:
@@ -24,6 +60,30 @@ msteams:
  # MS Teams incoming webhook URL
  url:

podMonitor:
  enabled: false
  namespace:
  interval: 15s
  additionalLabels: {}

#env:
#- name: SLACK_URL
#  valueFrom:
#    secretKeyRef:
#      name: slack
#      key: url
#- name: MSTEAMS_URL
#  valueFrom:
#    secretKeyRef:
#      name: msteams
#      key: url
#- name: EVENT_WEBHOOK_URL
#  valueFrom:
#    secretKeyRef:
#      name: eventwebhook
#      key: url
env: []

leaderElection:
  enabled: false
  replicaCount: 1
@@ -33,6 +93,8 @@ serviceAccount:
  create: true
  # serviceAccount.name: The name of the service account to create or use
  name: ""
  # serviceAccount.annotations: Annotations for service account
  annotations: {}

rbac:
  # rbac.create: `true` if rbac resources should be created
@@ -42,7 +104,7 @@ rbac:

crd:
  # crd.create: `true` if custom resource definitions should be created
  create: true
  create: false

nameOverride: ""
fullnameOverride: ""
@@ -60,5 +122,16 @@ nodeSelector: {}
tolerations: []

prometheus:
  # to be used with AppMesh or nginx ingress
  # to be used with ingress controllers
  install: false
  image: docker.io/prom/prometheus:v2.19.0
  retention: 2h

# Istio multi-cluster service mesh (shared control plane single-network)
# https://istio.io/docs/setup/install/multicluster/shared-vpn/
istio:
  kubeconfig:
    # istio.kubeconfig.secretName: The name of the secret containing the Istio control plane kubeconfig
    secretName: ""
    # istio.kubeconfig.key: The name of secret data key that contains the Istio control plane kubeconfig
    key: "kubeconfig"

@@ -1,13 +1,20 @@
apiVersion: v1
name: grafana
version: 1.3.0
appVersion: 6.2.5
version: 1.4.0
appVersion: 6.5.1
description: Grafana dashboards for monitoring Flagger canary deployments
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
home: https://flagger.app
sources:
  - https://github.com/weaveworks/flagger
maintainers:
  - name: stefanprodan
    url: https://github.com/stefanprodan
    email: stefanprodan@users.noreply.github.com
keywords:
  - flagger
  - grafana
  - canary
  - istio
  - appmesh

@@ -1,13 +1,12 @@
# Flagger Grafana

Grafana dashboards for monitoring progressive deployments powered by Istio, Prometheus and Flagger.
Grafana dashboards for monitoring progressive deployments powered by Flagger and Prometheus.

![]()

## Prerequisites

* Kubernetes >= 1.11
* Istio >= 1.0
* Prometheus >= 2.6

## Installing the Chart
@@ -18,14 +17,20 @@ Add Flagger Helm repository:
helm repo add flagger https://flagger.app
```

To install the chart with the release name `flagger-grafana`:
To install the chart for Istio run:

```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus:9090 \
--set user=admin \
--set password=admin
--set url=http://prometheus:9090
```

To install the chart for AWS App Mesh run:

```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=appmesh-system \
--set url=http://appmesh-prometheus:9090
```

The command deploys Grafana on the Kubernetes cluster in the default namespace.
@@ -56,10 +61,7 @@ Parameter | Description | Default
`affinity` | node/pod affinities | `node`
`nodeSelector` | node labels for pod assignment | `{}`
`service.type` | type of service | `ClusterIP`
`url` | Prometheus URL, used when Weave Cloud token is empty | `http://prometheus:9090`
`token` | Weave Cloud token | `none`
`user` | Grafana admin username | `admin`
`password` | Grafana admin password | `admin`
`url` | Prometheus URL | `http://prometheus:9090`

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

@@ -602,11 +602,11 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
          "expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod=~\"$primary.*\", container!~\"POD|istio-proxy\"}[1m])) by (pod)",
          "format": "time_series",
          "hide": false,
          "intervalFactor": 1,
          "legendFormat": "{{ pod_name }}",
          "legendFormat": "{{ pod }}",
          "refId": "B"
        }
      ],
@@ -692,11 +692,11 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
          "expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod=~\"$canary.*\", pod!~\"$primary.*\", container!~\"POD|istio-proxy\"}[1m])) by (pod)",
          "format": "time_series",
          "hide": false,
          "intervalFactor": 1,
          "legendFormat": "{{ pod_name }}",
          "legendFormat": "{{ pod }}",
          "refId": "B"
        }
      ],
@@ -782,12 +782,12 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
          "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod=~\"$primary.*\", container!~\"POD|istio-proxy\"}) by (pod)",
          "format": "time_series",
          "hide": false,
          "interval": "",
          "intervalFactor": 1,
          "legendFormat": "{{ pod_name }}",
          "legendFormat": "{{ pod }}",
          "refId": "B"
        }
      ],
@@ -874,12 +874,12 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
          "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod=~\"$canary.*\", pod!~\"$primary.*\", container!~\"POD|istio-proxy\"}) by (pod)",
          "format": "time_series",
          "hide": false,
          "interval": "",
          "intervalFactor": 1,
          "legendFormat": "{{ pod_name }}",
          "legendFormat": "{{ pod }}",
          "refId": "B"
        }
      ],
@@ -975,14 +975,14 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m])) ",
          "expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod=~\"$primary.*\"}[1m])) ",
          "format": "time_series",
          "intervalFactor": 1,
          "legendFormat": "received",
          "refId": "A"
        },
        {
          "expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m]))",
          "expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod=~\"$primary.*\"}[1m]))",
          "format": "time_series",
          "intervalFactor": 1,
          "legendFormat": "transmited",
@@ -1081,14 +1081,14 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m])) ",
          "expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod=~\"$canary.*\",pod!~\"$primary.*\"}[1m])) ",
          "format": "time_series",
          "intervalFactor": 1,
          "legendFormat": "received",
          "refId": "A"
        },
        {
          "expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m]))",
          "expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod=~\"$canary.*\",pod!~\"$primary.*\"}[1m]))",
          "format": "time_series",
          "intervalFactor": 1,
          "legendFormat": "transmited",

charts/grafana/dashboards/envoy.json (new file, 1226 lines; diff suppressed because it is too large)

@@ -403,7 +403,7 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "format": "time_series",
          "interval": "",
          "intervalFactor": 1,
@@ -411,7 +411,7 @@
          "refId": "A"
        },
        {
          "expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "format": "time_series",
          "hide": false,
          "intervalFactor": 1,
@@ -419,7 +419,7 @@
          "refId": "B"
        },
        {
          "expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "format": "time_series",
          "hide": false,
          "intervalFactor": 1,
@@ -509,7 +509,7 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "format": "time_series",
          "interval": "",
          "intervalFactor": 1,
@@ -517,7 +517,7 @@
          "refId": "A"
        },
        {
          "expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "format": "time_series",
          "hide": false,
          "intervalFactor": 1,
@@ -525,7 +525,7 @@
          "refId": "B"
        },
        {
          "expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_seconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
          "format": "time_series",
          "hide": false,
          "intervalFactor": 1,
@@ -630,11 +630,11 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
          "expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod=~\"$primary.*\", container!~\"POD|istio-proxy\"}[1m])) by (pod)",
          "format": "time_series",
          "hide": false,
          "intervalFactor": 1,
          "legendFormat": "{{ pod_name }}",
          "legendFormat": "{{ pod }}",
          "refId": "B"
        }
      ],
@@ -720,11 +720,11 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}[1m])) by (pod_name)",
          "expr": "sum(rate(container_cpu_usage_seconds_total{cpu=\"total\",namespace=\"$namespace\",pod=~\"$canary.*\", pod!~\"$primary.*\", container!~\"POD|istio-proxy\"}[1m])) by (pod)",
          "format": "time_series",
          "hide": false,
          "intervalFactor": 1,
          "legendFormat": "{{ pod_name }}",
          "legendFormat": "{{ pod }}",
          "refId": "B"
        }
      ],
@@ -810,12 +810,12 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
          "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod=~\"$primary.*\", container!~\"POD|istio-proxy\"}) by (pod)",
          "format": "time_series",
          "hide": false,
          "interval": "",
          "intervalFactor": 1,
          "legendFormat": "{{ pod_name }}",
          "legendFormat": "{{ pod }}",
          "refId": "B"
        }
      ],
@@ -902,12 +902,12 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod_name=~\"$canary.*\", pod_name!~\"$primary.*\", container_name!~\"POD|istio-proxy\"}) by (pod_name)",
          "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\",pod=~\"$canary.*\", pod!~\"$primary.*\", container!~\"POD|istio-proxy\"}) by (pod)",
          "format": "time_series",
          "hide": false,
          "interval": "",
          "intervalFactor": 1,
          "legendFormat": "{{ pod_name }}",
          "legendFormat": "{{ pod }}",
          "refId": "B"
        }
      ],
@@ -1003,14 +1003,14 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m])) ",
          "expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod=~\"$primary.*\"}[1m])) ",
          "format": "time_series",
          "intervalFactor": 1,
          "legendFormat": "received",
          "refId": "A"
        },
        {
          "expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$primary.*\"}[1m]))",
          "expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod=~\"$primary.*\"}[1m]))",
          "format": "time_series",
          "intervalFactor": 1,
          "legendFormat": "transmited",
@@ -1109,14 +1109,14 @@
      "steppedLine": false,
      "targets": [
        {
          "expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m])) ",
          "expr": "sum(rate (container_network_receive_bytes_total{namespace=\"$namespace\",pod=~\"$canary.*\",pod!~\"$primary.*\"}[1m])) ",
          "format": "time_series",
          "intervalFactor": 1,
          "legendFormat": "received",
          "refId": "A"
        },
        {
          "expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod_name=~\"$canary.*\",pod_name!~\"$primary.*\"}[1m]))",
          "expr": "-sum (rate (container_network_transmit_bytes_total{namespace=\"$namespace\",pod=~\"$canary.*\",pod!~\"$primary.*\"}[1m]))",
          "format": "time_series",
          "intervalFactor": 1,
          "legendFormat": "transmited",

@@ -1,4 +1,4 @@
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "grafana.fullname" . }}
@@ -20,6 +20,9 @@ spec:
        release: {{ .Release.Name }}
      annotations:
        prometheus.io/scrape: 'false'
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
    spec:
      containers:
        - name: {{ .Chart.Name }}

@@ -6,9 +6,11 @@ replicaCount: 1

image:
  repository: grafana/grafana
  tag: 6.2.5
  tag: 6.5.1
  pullPolicy: IfNotPresent

podAnnotations: {}

service:
  type: ClusterIP
  port: 80

@@ -1,12 +1,12 @@
apiVersion: v1
name: loadtester
version: 0.6.0
appVersion: 0.6.1
version: 0.18.0
appVersion: 0.18.0
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger's load testing services based on rakyll/hey and bojand/ghz that generates traffic during canary analysis when configured as a webhook.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/flagger-icon.png
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
  - https://github.com/weaveworks/flagger
maintainers:
@@ -14,8 +14,10 @@ maintainers:
    url: https://github.com/stefanprodan
    email: stefanprodan@users.noreply.github.com
keywords:
  - canary
  - flagger
  - istio
  - appmesh
  - linkerd
  - gloo
  - gitops
  - load testing

@@ -1,8 +1,9 @@
# Flagger load testing service

[Flagger's](https://github.com/weaveworks/flagger) load testing service is based on
[rakyll/hey](https://github.com/rakyll/hey)
and can be used to generates traffic during canary analysis when configured as a webhook.
[rakyll/hey](https://github.com/rakyll/hey) and
[bojand/ghz](https://github.com/bojand/ghz).
It can be used to generate HTTP and gRPC traffic during canary analysis when configured as a webhook.

## Prerequisites

@@ -22,9 +23,10 @@ To install the chart with the release name `flagger-loadtester`:
helm upgrade -i flagger-loadtester flagger/loadtester
```

The command deploys Grafana on the Kubernetes cluster in the default namespace.
The command deploys loadtester on the Kubernetes cluster in the default namespace.

> **Tip**: Note that the namespace where you deploy the load tester should have the Istio or App Mesh sidecar injection enabled
> **Tip**: Note that the namespace where you deploy the load tester should
> have the Istio, App Mesh or Linkerd sidecar injection enabled

The [configuration](#configuration) section lists the parameters that can be configured during installation.

@@ -33,7 +35,7 @@ The [configuration](#configuration) section lists the parameters that can be con
To uninstall/delete the `flagger-loadtester` deployment:

```console
helm delete --purge flagger-loadtester
helm delete flagger-loadtester
```

The command removes all the Kubernetes components associated with the chart and deletes the release.
@@ -58,13 +60,24 @@ Parameter | Description | Default
`service.port` | ClusterIP port | `80`
`cmd.timeout` | Command execution timeout | `1h`
`logLevel` | Log level can be debug, info, warning, error or panic | `info`
`meshName` | AWS App Mesh name | `none`
`backends` | AWS App Mesh virtual services | `none`
`appmesh.enabled` | Create AWS App Mesh v1beta2 virtual node | `false`
`appmesh.backends` | AWS App Mesh virtual services | `none`
`istio.enabled` | Create Istio virtual service | `false`
`istio.host` | Loadtester hostname | `flagger-loadtester.flagger`
`istio.gateway.enabled` | Create Istio gateway in namespace | `false`
`istio.tls.enabled` | Enable TLS in gateway (TLS secrets should be in namespace) | `false`
`istio.tls.httpsRedirect` | Redirect traffic to TLS port | `false`
`podPriorityClassName` | PriorityClass name for pod priority configuration | ""
`securityContext.enabled` | Add securityContext to container | ""
`securityContext.context` | securityContext to add | ""

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,

```console
helm install flagger/loadtester --name flagger-loadtester
helm upgrade -i flagger-loadtester flagger/loadtester \
--set "appmesh.enabled=true" \
--set "appmesh.backends[0]=podinfo" \
--set "appmesh.backends[1]=podinfo-canary"
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

charts/loadtester/templates/appmesh.yaml (new file, 27 lines)
@@ -0,0 +1,27 @@
{{- if .Values.appmesh.enabled }}
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: {{ include "loadtester.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "loadtester.name" . }}
    helm.sh/chart: {{ include "loadtester.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  podSelector:
    matchLabels:
      app: {{ include "loadtester.name" . }}
  logging:
    accessLog:
      file:
        path: /dev/stdout
{{- if .Values.appmesh.backends }}
  backends:
{{- range .Values.appmesh.backends }}
    - virtualService:
        virtualServiceRef:
          name: {{ . }}
{{- end }}
{{- end }}
{{- end }}

@@ -18,12 +18,24 @@ spec:
        app: {{ include "loadtester.name" . }}
      annotations:
        appmesh.k8s.aws/ports: "444"
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
    spec:
{{- if .Values.serviceAccountName }}
      serviceAccountName: {{ .Values.serviceAccountName }}
{{- else if .Values.rbac.create }}
      serviceAccountName: {{ include "loadtester.fullname" . }}
{{- end }}
{{- if .Values.podPriorityClassName }}
      priorityClassName: {{ .Values.podPriorityClassName }}
{{- end }}
      containers:
        - name: {{ .Chart.Name }}
{{- if .Values.securityContext.enabled }}
          securityContext:
{{ toYaml .Values.securityContext.context | indent 12 }}
{{- end }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:

charts/loadtester/templates/istio-gw.yaml (new file, 30 lines)
@@ -0,0 +1,30 @@
{{- if and (.Values.istio.enabled) (.Values.istio.gateway.enabled) }}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: {{ include "loadtester.fullname" . }}
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http-default
        protocol: HTTP
      hosts:
        - {{ .Values.istio.host }}
{{- if .Values.istio.tls.enabled }}
    - port:
        number: 443
        name: https-default
        protocol: HTTPS
      tls:
        httpsRedirect: {{ .Values.istio.tls.httpsRedirect }}
        mode: SIMPLE
        serverCertificate: "sds"
        privateKey: "sds"
        credentialName: {{ include "loadtester.fullname" . }}
      hosts:
        - {{ .Values.istio.host }}
{{- end }}
{{- end }}

charts/loadtester/templates/istio-vs.yaml (new file, 17 lines)
@@ -0,0 +1,17 @@
{{- if .Values.istio.enabled }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ include "loadtester.fullname" . }}
spec:
  gateways:
    - {{ include "loadtester.fullname" . }}
  hosts:
    - {{ .Values.istio.host }}
  http:
    - route:
        - destination:
            host: {{ include "loadtester.fullname" . }}
            port:
              number: {{ .Values.service.port }}
{{- end }}

charts/loadtester/templates/rbac.yaml (new file, 54 lines)
@@ -0,0 +1,54 @@
---
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
{{- if eq .Values.rbac.scope "cluster" }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
metadata:
  name: {{ template "loadtester.fullname" . }}
  labels:
    helm.sh/chart: {{ template "loadtester.chart" . }}
    app.kubernetes.io/name: {{ template "loadtester.name" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
rules:
{{ toYaml .Values.rbac.rules | indent 2 }}
---
apiVersion: rbac.authorization.k8s.io/v1
{{- if eq .Values.rbac.scope "cluster" }}
kind: ClusterRoleBinding
{{- else }}
kind: RoleBinding
{{- end }}
metadata:
  name: {{ template "loadtester.fullname" . }}
  labels:
    helm.sh/chart: {{ template "loadtester.chart" . }}
    app.kubernetes.io/name: {{ template "loadtester.name" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
{{- if eq .Values.rbac.scope "cluster" }}
  kind: ClusterRole
{{- else }}
  kind: Role
{{- end }}
  name: {{ template "loadtester.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "loadtester.fullname" . }}
    namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "loadtester.fullname" . }}
  labels:
    helm.sh/chart: {{ template "loadtester.chart" . }}
    app.kubernetes.io/name: {{ template "loadtester.name" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

@@ -2,9 +2,15 @@ replicaCount: 1

image:
  repository: weaveworks/flagger-loadtester
  tag: 0.6.1
  tag: 0.18.0
  pullPolicy: IfNotPresent

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"

podPriorityClassName: ""

logLevel: info
cmd:
  timeout: 1h
@@ -27,10 +33,49 @@ tolerations: []

affinity: {}

rbac:
  # rbac.create: `true` if rbac resources should be created
  create: false
  # rbac.scope: `cluster` to create cluster-scope rbac resources (ClusterRole/ClusterRoleBinding)
  # otherwise, namespace-scope rbac resources will be created (Role/RoleBinding)
  scope:
  # rbac.rules: array of rules to apply to the role. example:
  # rules:
  #  - apiGroups: [""]
  #    resources: ["pods"]
  #    verbs: ["list", "get"]
  rules: []

# name of an existing service account to use - if not creating rbac resources
serviceAccountName: ""

# App Mesh virtual node settings
# App Mesh virtual node settings (to be used for AppMesh v1beta1)
meshName: ""
#backends:
#  - app1.namespace
#  - app2.namespace

# App Mesh virtual node settings (to be used for AppMesh v1beta2)
appmesh:
  enabled: false
  backends:
    - podinfo
    - podinfo-canary

# Istio virtual service and gateway settings. TLS secrets should be in the namespace before enabling this (secret format: loadtester.fullname).
istio:
  enabled: false
  host: flagger-loadtester.flagger
  gateway:
    enabled: false
  tls:
    enabled: false
    httpsRedirect: false

# when enabled, it will add a security context for the loadtester pod
securityContext:
  enabled: false
  context:
    readOnlyRootFilesystem: true
    runAsUser: 100
    runAsGroup: 101

@@ -1,12 +1,14 @@
apiVersion: v1
version: 3.0.0
appVersion: 2.0.0
version: 3.1.1
appVersion: 3.1.0
name: podinfo
engine: gotpl
description: Flagger canary deployment demo chart
home: https://github.com/weaveworks/flagger
maintainers:
  - email: stefanprodan@users.noreply.github.com
    name: stefanprodan
description: Flagger canary deployment demo application
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/weaveworks/flagger/master/docs/logo/weaveworks.png
sources:
  - https://github.com/weaveworks/flagger
  - https://github.com/stefanprodan/podinfo
maintainers:
  - name: stefanprodan
    url: https://github.com/stefanprodan
    email: stefanprodan@users.noreply.github.com

@@ -1,5 +1,5 @@
{{- if .Values.canary.enabled }}
apiVersion: flagger.app/v1alpha3
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: {{ template "podinfo.fullname" . }}
@@ -13,7 +13,6 @@ spec:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "podinfo.fullname" . }}
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
@@ -29,7 +28,7 @@ spec:
    trafficPolicy:
      tls:
        mode: {{ .Values.canary.istioTLS }}
  canaryAnalysis:
  analysis:
    interval: {{ .Values.canary.analysis.interval }}
    threshold: {{ .Values.canary.analysis.threshold }}
    maxWeight: {{ .Values.canary.analysis.maxWeight }}
@@ -48,8 +47,8 @@ spec:
        url: {{ .Values.canary.helmtest.url }}
        timeout: 3m
        metadata:
          type: "helm"
          cmd: "test {{ .Release.Name }} --cleanup"
          type: "helmv3"
          cmd: "test {{ .Release.Name }} -n {{ .Release.Namespace }}"
{{- end }}
{{- if .Values.canary.loadtest.enabled }}
      - name: load-test-get
@@ -57,10 +56,5 @@ spec:
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}"
      - name: load-test-post
        url: {{ .Values.canary.loadtest.url }}
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 -m POST -d '{\"test\": true}' http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}/echo"
{{- end }}
{{- end }}

@@ -21,6 +21,9 @@ spec:
        app: {{ template "podinfo.fullname" . }}
      annotations:
        prometheus.io/scrape: 'true'
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
    spec:
      terminationGracePeriodSeconds: 30
      containers:
@@ -34,7 +37,12 @@ spec:
            - --random-delay={{ .Values.faults.delay }}
            - --random-error={{ .Values.faults.error }}
            - --config-path=/podinfo/config
{{- range .Values.backends }}
            - --backend-url={{ . }}
{{- end }}
          env:
            - name: PODINFO_UI_COLOR
              value: "#34577c"
{{- if .Values.message }}
            - name: PODINFO_UI_MESSAGE
              value: {{ .Values.message }}

@@ -10,7 +10,7 @@ metadata:
    heritage: {{ .Release.Service }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta2
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "podinfo.fullname" . }}
  minReplicas: {{ .Values.hpa.minReplicas }}
@@ -28,10 +28,4 @@ spec:
        name: memory
        targetAverageValue: {{ .Values.hpa.memory }}
{{- end }}
{{- if .Values.hpa.requests }}
    - type: Pod
      pods:
        metricName: http_requests
        targetAverageValue: {{ .Values.hpa.requests }}
{{- end }}
{{- end }}

charts/podinfo/templates/tests/jwt.yaml (new file, 29 lines)
@@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ template "podinfo.fullname" . }}-jwt-test-{{ randAlphaNum 5 | lower }}
  labels:
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "podinfo.name" . }}
  annotations:
    "helm.sh/hook": test-success
    sidecar.istio.io/inject: "false"
    linkerd.io/inject: disabled
    appmesh.k8s.aws/sidecarInjectorWebhook: disabled
spec:
  containers:
    - name: tools
      image: giantswarm/tiny-tools
      command:
        - sh
        - -c
        - |
          TOKEN=$(curl -sd 'test' ${PODINFO_SVC}/token | jq -r .token) &&
          curl -H "Authorization: Bearer ${TOKEN}" ${PODINFO_SVC}/token/validate | grep test
      env:
        - name: PODINFO_SVC
          value: {{ template "podinfo.fullname" . }}:{{ .Values.service.port }}
  restartPolicy: Never
|
||||
|
||||
@@ -1,22 +0,0 @@
{{- $url := printf "%s%s.%s:%v" (include "podinfo.fullname" .) (include "podinfo.suffix" .) .Release.Namespace .Values.service.port -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "podinfo.fullname" . }}-tests
  labels:
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "podinfo.name" . }}
data:
  run.sh: |-
    @test "HTTP POST /echo" {
      run curl --retry 3 --connect-timeout 2 -sSX POST -d 'test' {{ $url }}/echo
      [ $output = "test" ]
    }
    @test "HTTP POST /store" {
      curl --retry 3 --connect-timeout 2 -sSX POST -d 'test' {{ $url }}/store
    }
    @test "HTTP GET /" {
      curl --retry 3 --connect-timeout 2 -sS {{ $url }} | grep hostname
    }
@@ -1,43 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ template "podinfo.fullname" . }}-tests-{{ randAlphaNum 5 | lower }}
  annotations:
    "helm.sh/hook": test-success
    sidecar.istio.io/inject: "false"
  labels:
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "podinfo.name" . }}
spec:
  initContainers:
    - name: "test-framework"
      image: "dduportal/bats:0.4.0"
      command:
        - "bash"
        - "-c"
        - |
          set -ex
          # copy bats to tools dir
          cp -R /usr/local/libexec/ /tools/bats/
      volumeMounts:
        - mountPath: /tools
          name: tools
  containers:
    - name: {{ .Release.Name }}-ui-test
      image: dduportal/bats:0.4.0
      command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
      volumeMounts:
        - mountPath: /tests
          name: tests
          readOnly: true
        - mountPath: /tools
          name: tools
  volumes:
    - name: tests
      configMap:
        name: {{ template "podinfo.fullname" . }}-tests
    - name: tools
      emptyDir: {}
  restartPolicy: Never
@@ -1,22 +1,25 @@
 # Default values for podinfo.
 image:
   repository: stefanprodan/podinfo
-  tag: 2.0.0
+  tag: 3.1.0
   pullPolicy: IfNotPresent

+podAnnotations: {}
+
 service:
   enabled: false
   type: ClusterIP
   port: 9898

 hpa:
   enabled: true
   minReplicas: 2
-  maxReplicas: 2
+  maxReplicas: 4
   cpu: 80
   memory: 512Mi

 canary:
-  enabled: true
+  enabled: false
   # Istio traffic policy tls can be DISABLE or ISTIO_MUTUAL
   istioTLS: DISABLE
   istioIngress:
@@ -69,6 +72,7 @@ fullnameOverride: ""

 logLevel: info
-backend: #http://backend-podinfo:9898/echo
+backends: []
 message: #UI greetings

 faults:
@@ -9,18 +9,9 @@ import (
 	"strings"
 	"time"

-	"github.com/Masterminds/semver"
-	clientset "github.com/weaveworks/flagger/pkg/client/clientset/versioned"
-	informers "github.com/weaveworks/flagger/pkg/client/informers/externalversions"
-	"github.com/weaveworks/flagger/pkg/controller"
-	"github.com/weaveworks/flagger/pkg/logger"
-	"github.com/weaveworks/flagger/pkg/metrics"
-	"github.com/weaveworks/flagger/pkg/notifier"
-	"github.com/weaveworks/flagger/pkg/router"
-	"github.com/weaveworks/flagger/pkg/server"
-	"github.com/weaveworks/flagger/pkg/signals"
-	"github.com/weaveworks/flagger/pkg/version"
+	semver "github.com/Masterminds/semver/v3"
 	"go.uber.org/zap"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/util/uuid"
 	"k8s.io/client-go/kubernetes"
 	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
@@ -30,28 +21,45 @@ import (
 	"k8s.io/client-go/tools/leaderelection/resourcelock"
 	"k8s.io/client-go/transport"
 	_ "k8s.io/code-generator/cmd/client-gen/generators"
+
+	"github.com/weaveworks/flagger/pkg/canary"
+	clientset "github.com/weaveworks/flagger/pkg/client/clientset/versioned"
+	informers "github.com/weaveworks/flagger/pkg/client/informers/externalversions"
+	"github.com/weaveworks/flagger/pkg/controller"
+	"github.com/weaveworks/flagger/pkg/logger"
+	"github.com/weaveworks/flagger/pkg/metrics/observers"
+	"github.com/weaveworks/flagger/pkg/notifier"
+	"github.com/weaveworks/flagger/pkg/router"
+	"github.com/weaveworks/flagger/pkg/server"
+	"github.com/weaveworks/flagger/pkg/signals"
+	"github.com/weaveworks/flagger/pkg/version"
 )

 var (
-	masterURL               string
-	kubeconfig              string
-	metricsServer           string
-	controlLoopInterval     time.Duration
-	logLevel                string
-	port                    string
-	msteamsURL              string
-	slackURL                string
-	slackUser               string
-	slackChannel            string
-	threadiness             int
-	zapReplaceGlobals       bool
-	zapEncoding             string
-	namespace               string
-	meshProvider            string
-	selectorLabels          string
-	enableLeaderElection    bool
-	leaderElectionNamespace string
-	ver                     bool
+	masterURL                string
+	kubeconfig               string
+	metricsServer            string
+	controlLoopInterval      time.Duration
+	logLevel                 string
+	port                     string
+	msteamsURL               string
+	slackURL                 string
+	slackUser                string
+	slackChannel             string
+	eventWebhook             string
+	threadiness              int
+	zapReplaceGlobals        bool
+	zapEncoding              string
+	namespace                string
+	meshProvider             string
+	selectorLabels           string
+	ingressAnnotationsPrefix string
+	ingressClass             string
+	enableLeaderElection     bool
+	leaderElectionNamespace  string
+	enableConfigTracking     bool
+	ver                      bool
+	kubeconfigServiceMesh    string
 )

 func init() {
@@ -64,16 +72,21 @@ func init() {
 	flag.StringVar(&slackURL, "slack-url", "", "Slack hook URL.")
 	flag.StringVar(&slackUser, "slack-user", "flagger", "Slack user name.")
 	flag.StringVar(&slackChannel, "slack-channel", "", "Slack channel.")
+	flag.StringVar(&eventWebhook, "event-webhook", "", "Webhook for publishing flagger events")
 	flag.StringVar(&msteamsURL, "msteams-url", "", "MS Teams incoming webhook URL.")
 	flag.IntVar(&threadiness, "threadiness", 2, "Worker concurrency.")
 	flag.BoolVar(&zapReplaceGlobals, "zap-replace-globals", false, "Whether to change the logging level of the global zap logger.")
 	flag.StringVar(&zapEncoding, "zap-encoding", "json", "Zap logger encoding.")
 	flag.StringVar(&namespace, "namespace", "", "Namespace that flagger would watch canary object.")
-	flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, linkerd, appmesh, supergloo, nginx or smi.")
+	flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, linkerd, appmesh, contour, gloo, nginx or skipper.")
 	flag.StringVar(&selectorLabels, "selector-labels", "app,name,app.kubernetes.io/name", "List of pod labels that Flagger uses to create pod selectors.")
+	flag.StringVar(&ingressAnnotationsPrefix, "ingress-annotations-prefix", "nginx.ingress.kubernetes.io", "Annotations prefix for NGINX ingresses.")
+	flag.StringVar(&ingressClass, "ingress-class", "", "Ingress class used for annotating HTTPProxy objects.")
 	flag.BoolVar(&enableLeaderElection, "enable-leader-election", false, "Enable leader election.")
 	flag.StringVar(&leaderElectionNamespace, "leader-election-namespace", "kube-system", "Namespace used to create the leader election config map.")
+	flag.BoolVar(&enableConfigTracking, "enable-config-tracking", true, "Enable secrets and configmaps tracking.")
 	flag.BoolVar(&ver, "version", false, "Print version")
+	flag.StringVar(&kubeconfigServiceMesh, "kubeconfig-service-mesh", "", "Path to a kubeconfig for the service mesh control plane cluster.")
 }

 func main() {
@@ -96,6 +109,8 @@ func main() {

 	stopCh := signals.SetupSignalHandler()

+	logger.Infof("Starting flagger version %s revision %s mesh provider %s", version.VERSION, version.REVISION, meshProvider)
+
 	cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
 	if err != nil {
 		logger.Fatalf("Error building kubeconfig: %v", err)
@@ -106,58 +121,39 @@ func main() {
 		logger.Fatalf("Error building kubernetes clientset: %v", err)
 	}

-	meshClient, err := clientset.NewForConfig(cfg)
-	if err != nil {
-		logger.Fatalf("Error building mesh clientset: %v", err)
-	}
-
 	flaggerClient, err := clientset.NewForConfig(cfg)
 	if err != nil {
 		logger.Fatalf("Error building flagger clientset: %s", err.Error())
 	}

-	flaggerInformerFactory := informers.NewSharedInformerFactoryWithOptions(flaggerClient, time.Second*30, informers.WithNamespace(namespace))
-
-	canaryInformer := flaggerInformerFactory.Flagger().V1alpha3().Canaries()
-
-	logger.Infof("Starting flagger version %s revision %s mesh provider %s", version.VERSION, version.REVISION, meshProvider)
-
-	ver, err := kubeClient.Discovery().ServerVersion()
+	// use a remote cluster for routing if a service mesh kubeconfig is specified
+	if kubeconfigServiceMesh == "" {
+		kubeconfigServiceMesh = kubeconfig
+	}
+	cfgHost, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfigServiceMesh)
 	if err != nil {
-		logger.Fatalf("Error calling Kubernetes API: %v", err)
+		logger.Fatalf("Error building host kubeconfig: %v", err)
 	}

-	k8sVersionConstraint := "^1.11.0"
-
-	// We append -alpha.1 to the end of our version constraint so that prebuilds of later versions
-	// are considered valid for our purposes, as well as some managed solutions like EKS where they provide
-	// a version like `v1.12.6-eks-d69f1b`. It doesn't matter what the prelease value is here, just that it
-	// exists in our constraint.
-	semverConstraint, err := semver.NewConstraint(k8sVersionConstraint + "-alpha.1")
+	meshClient, err := clientset.NewForConfig(cfgHost)
 	if err != nil {
-		logger.Fatalf("Error parsing kubernetes version constraint: %v", err)
+		logger.Fatalf("Error building mesh clientset: %v", err)
 	}

-	k8sSemver, err := semver.NewVersion(ver.GitVersion)
-	if err != nil {
-		logger.Fatalf("Error parsing kubernetes version as a semantic version: %v", err)
-	}
-
-	if !semverConstraint.Check(k8sSemver) {
-		logger.Fatalf("Unsupported version of kubernetes detected. Expected %s, got %v", k8sVersionConstraint, ver)
-	}
+	verifyCRDs(flaggerClient, logger)
+	verifyKubernetesVersion(kubeClient, logger)
+	infos := startInformers(flaggerClient, logger, stopCh)

 	labels := strings.Split(selectorLabels, ",")
 	if len(labels) < 1 {
 		logger.Fatalf("At least one selector label is required")
 	}

-	logger.Infof("Connected to Kubernetes API %s", ver)
 	if namespace != "" {
 		logger.Infof("Watching namespace %s", namespace)
 	}

-	observerFactory, err := metrics.NewFactory(metricsServer, meshProvider, 5*time.Second)
+	observerFactory, err := observers.NewFactory(metricsServer)
 	if err != nil {
 		logger.Fatalf("Error building prometheus client: %s", err.Error())
 	}
@@ -175,34 +171,36 @@ func main() {
 	// start HTTP server
 	go server.ListenAndServe(port, 3*time.Second, logger, stopCh)

-	routerFactory := router.NewFactory(cfg, kubeClient, flaggerClient, logger, meshClient)
+	routerFactory := router.NewFactory(cfg, kubeClient, flaggerClient, ingressAnnotationsPrefix, ingressClass, logger, meshClient)

+	var configTracker canary.Tracker
+	if enableConfigTracking {
+		configTracker = &canary.ConfigTracker{
+			Logger:        logger,
+			KubeClient:    kubeClient,
+			FlaggerClient: flaggerClient,
+		}
+	} else {
+		configTracker = &canary.NopTracker{}
+	}
+
 	canaryFactory := canary.NewFactory(kubeClient, flaggerClient, configTracker, labels, logger)

 	c := controller.NewController(
 		kubeClient,
 		meshClient,
 		flaggerClient,
-		canaryInformer,
+		infos,
 		controlLoopInterval,
 		logger,
 		notifierClient,
 		canaryFactory,
 		routerFactory,
 		observerFactory,
 		meshProvider,
 		version.VERSION,
 		labels,
+		fromEnv("EVENT_WEBHOOK_URL", eventWebhook),
 	)

-	flaggerInformerFactory.Start(stopCh)
-
-	logger.Info("Waiting for informer caches to sync")
-	for _, synced := range []cache.InformerSynced{
-		canaryInformer.Informer().HasSynced,
-	} {
-		if ok := cache.WaitForCacheSync(stopCh, synced); !ok {
-			logger.Fatalf("Failed to wait for cache sync")
-		}
-	}
-
 	// leader election context
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()
@@ -235,6 +233,37 @@ func main() {
 	}
 }

+func startInformers(flaggerClient clientset.Interface, logger *zap.SugaredLogger, stopCh <-chan struct{}) controller.Informers {
+	flaggerInformerFactory := informers.NewSharedInformerFactoryWithOptions(flaggerClient, time.Second*30, informers.WithNamespace(namespace))
+
+	logger.Info("Waiting for canary informer cache to sync")
+	canaryInformer := flaggerInformerFactory.Flagger().V1beta1().Canaries()
+	go canaryInformer.Informer().Run(stopCh)
+	if ok := cache.WaitForNamedCacheSync("flagger", stopCh, canaryInformer.Informer().HasSynced); !ok {
+		logger.Fatalf("failed to wait for cache to sync")
+	}
+
+	logger.Info("Waiting for metric template informer cache to sync")
+	metricInformer := flaggerInformerFactory.Flagger().V1beta1().MetricTemplates()
+	go metricInformer.Informer().Run(stopCh)
+	if ok := cache.WaitForNamedCacheSync("flagger", stopCh, metricInformer.Informer().HasSynced); !ok {
+		logger.Fatalf("failed to wait for cache to sync")
+	}
+
+	logger.Info("Waiting for alert provider informer cache to sync")
+	alertInformer := flaggerInformerFactory.Flagger().V1beta1().AlertProviders()
+	go alertInformer.Informer().Run(stopCh)
+	if ok := cache.WaitForNamedCacheSync("flagger", stopCh, alertInformer.Informer().HasSynced); !ok {
+		logger.Fatalf("failed to wait for cache to sync")
+	}
+
+	return controller.Informers{
+		CanaryInformer: canaryInformer,
+		MetricInformer: metricInformer,
+		AlertInformer:  alertInformer,
+	}
+}
+
 func startLeaderElection(ctx context.Context, run func(), ns string, kubeClient kubernetes.Interface, logger *zap.SugaredLogger) {
 	configMapName := "flagger-leader-election"
 	id, err := os.Hostname()
@@ -284,21 +313,72 @@ func startLeaderElection(ctx context.Context, run func(), ns string, kubeClient

 func initNotifier(logger *zap.SugaredLogger) (client notifier.Interface) {
 	provider := "slack"
-	notifierURL := slackURL
-	if msteamsURL != "" {
+	notifierURL := fromEnv("SLACK_URL", slackURL)
+	if msteamsURL != "" || os.Getenv("MSTEAMS_URL") != "" {
 		provider = "msteams"
-		notifierURL = msteamsURL
+		notifierURL = fromEnv("MSTEAMS_URL", msteamsURL)
 	}
 	notifierFactory := notifier.NewFactory(notifierURL, slackUser, slackChannel)

-	if notifierURL != "" {
-		var err error
-		client, err = notifierFactory.Notifier(provider)
-		if err != nil {
-			logger.Errorf("Notifier %v", err)
-		} else {
-			logger.Infof("Notifications enabled for %s", notifierURL[0:30])
-		}
+	var err error
+	client, err = notifierFactory.Notifier(provider)
+	if err != nil {
+		logger.Errorf("Notifier %v", err)
+	} else if len(notifierURL) > 30 {
+		logger.Infof("Notifications enabled for %s", notifierURL[0:30])
 	}
 	return
 }

+func fromEnv(envVar string, defaultVal string) string {
+	if v := os.Getenv(envVar); v != "" {
+		return v
+	}
+	return defaultVal
+}
+
+func verifyCRDs(flaggerClient clientset.Interface, logger *zap.SugaredLogger) {
+	_, err := flaggerClient.FlaggerV1beta1().Canaries(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
+	if err != nil {
+		logger.Fatalf("Canary CRD is not registered %v", err)
+	}
+
+	_, err = flaggerClient.FlaggerV1beta1().MetricTemplates(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
+	if err != nil {
+		logger.Fatalf("MetricTemplate CRD is not registered %v", err)
+	}
+
+	_, err = flaggerClient.FlaggerV1beta1().AlertProviders(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
+	if err != nil {
+		logger.Fatalf("AlertProvider CRD is not registered %v", err)
+	}
+}
+
+func verifyKubernetesVersion(kubeClient kubernetes.Interface, logger *zap.SugaredLogger) {
+	ver, err := kubeClient.Discovery().ServerVersion()
+	if err != nil {
+		logger.Fatalf("Error calling Kubernetes API: %v", err)
+	}
+
+	k8sVersionConstraint := "^1.11.0"
+
+	// We append -alpha.1 to the end of our version constraint so that prebuilds of later versions
+	// are considered valid for our purposes, as well as some managed solutions like EKS where they provide
+	// a version like `v1.12.6-eks-d69f1b`. It doesn't matter what the prelease value is here, just that it
+	// exists in our constraint.
+	semverConstraint, err := semver.NewConstraint(k8sVersionConstraint + "-alpha.1")
+	if err != nil {
+		logger.Fatalf("Error parsing kubernetes version constraint: %v", err)
+	}
+
+	k8sSemver, err := semver.NewVersion(ver.GitVersion)
+	if err != nil {
+		logger.Fatalf("Error parsing kubernetes version as a semantic version: %v", err)
+	}
+
+	if !semverConstraint.Check(k8sSemver) {
+		logger.Fatalf("Unsupported version of kubernetes detected. Expected %s, got %v", k8sVersionConstraint, ver)
+	}
+
+	logger.Infof("Connected to Kubernetes API %s", ver)
+}
@@ -2,15 +2,16 @@ package main

 import (
 	"flag"
-	"log"
-	"time"

 	"github.com/weaveworks/flagger/pkg/loadtester"
 	"github.com/weaveworks/flagger/pkg/logger"
 	"github.com/weaveworks/flagger/pkg/signals"
 	"go.uber.org/zap"
+
+	"log"
+	"time"
 )

-var VERSION = "0.6.1"
+var VERSION = "0.18.0"
 var (
 	logLevel string
 	port     string
Binary file not shown. (Before: 158 KiB, After: 30 KiB)
BIN docs/diagrams/flagger-canary-traffic-mirroring.png (new file, 39 KiB; binary not shown)
BIN docs/diagrams/flagger-contour-overview.png (new file, 40 KiB; binary not shown)
BIN docs/diagrams/flagger-gitops-contour.png (new file, 37 KiB; binary not shown)
BIN docs/diagrams/flagger-skipper-overview.png (new file, 47 KiB; binary not shown)
@@ -4,19 +4,45 @@ description: Flagger is a progressive delivery Kubernetes operator

# Introduction

-[Flagger](https://github.com/weaveworks/flagger) is a **Kubernetes** operator that automates the promotion of canary
-deployments using **Istio**, **Linkerd**, **App Mesh**, **NGINX** or **Gloo** routing for traffic shifting and **Prometheus** metrics for canary analysis.
-The canary analysis can be extended with webhooks for running system integration/acceptance tests, load tests, or any other custom validation.
+[Flagger](https://github.com/weaveworks/flagger) is a **Kubernetes** operator that automates the promotion of
+canary deployments using **Istio**, **Linkerd**, **App Mesh**, **NGINX**, **Skipper**, **Contour** or **Gloo** routing for
+traffic shifting and **Prometheus** metrics for canary analysis. The canary analysis can be extended with webhooks for
+running system integration/acceptance tests, load tests, or any other custom validation.

-Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance
-indicators like HTTP requests success rate, requests average duration and pods health.
+Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators
+like HTTP requests success rate, requests average duration and pods health.
Based on analysis of the **KPIs** a canary is promoted or aborted, and the analysis result is published to **Slack** or **MS Teams**.



-Flagger can be configured with Kubernetes custom resources and is compatible with
-any CI/CD solutions made for Kubernetes. Since Flagger is declarative and reacts to Kubernetes events,
-it can be used in **GitOps** pipelines together with Weave Flux or JenkinsX.
+Flagger can be configured with Kubernetes custom resources and is compatible with any CI/CD solutions made for Kubernetes.
+Since Flagger is declarative and reacts to Kubernetes events,
+it can be used in **GitOps** pipelines together with Flux CD or JenkinsX.

This project is sponsored by [Weaveworks](https://www.weave.works/)

+## Getting started
+
+To get started with Flagger, choose one of the supported routing providers
+and [install](install/flagger-install-on-kubernetes.md) Flagger with Helm or Kustomize.
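
As a concrete starting point, here is a minimal sketch of a Helm-based install for the Istio provider; the namespace and Prometheus address are illustrative, and the install docs cover the full set of options:

```bash
# Add the Flagger Helm repository and install the operator for Istio.
helm repo add flagger https://flagger.app
helm upgrade -i flagger flagger/flagger \
  --namespace istio-system \
  --set meshProvider=istio \
  --set metricsServer=http://prometheus:9090
```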
+
+After installing Flagger, you can follow one of the tutorials:
+
+**Service mesh tutorials**
+
+* [Istio](tutorials/istio-progressive-delivery.md)
+* [Linkerd](tutorials/linkerd-progressive-delivery.md)
+* [AWS App Mesh](tutorials/appmesh-progressive-delivery.md)
+
+**Ingress controller tutorials**
+
+* [Contour](tutorials/contour-progressive-delivery.md)
+* [Gloo](tutorials/gloo-progressive-delivery.md)
+* [NGINX Ingress](tutorials/nginx-progressive-delivery.md)
+* [Skipper Ingress](tutorials/skipper-progressive-delivery.md)
+
+**Hands-on GitOps workshops**
+
+* [Istio](https://github.com/stefanprodan/gitops-istio)
+* [Linkerd](https://helm.workshop.flagger.dev)
+* [AWS App Mesh](https://eks.handson.flagger.dev)
@@ -1,7 +1,6 @@
# Table of contents

* [Introduction](README.md)
-* [How it works](how-it-works.md)
* [FAQ](faq.md)

## Install
@@ -9,22 +8,34 @@
* [Flagger Install on Kubernetes](install/flagger-install-on-kubernetes.md)
* [Flagger Install on GKE Istio](install/flagger-install-on-google-cloud.md)
* [Flagger Install on EKS App Mesh](install/flagger-install-on-eks-appmesh.md)
-* [Flagger Install with SuperGloo](install/flagger-install-with-supergloo.md)

## Usage

-* [Istio Canary Deployments](usage/progressive-delivery.md)
-* [Istio A/B Testing](usage/ab-testing.md)
-* [Linkerd Canary Deployments](usage/linkerd-progressive-delivery.md)
-* [App Mesh Canary Deployments](usage/appmesh-progressive-delivery.md)
-* [NGINX Canary Deployments](usage/nginx-progressive-delivery.md)
-* [Gloo Canary Deployments](usage/gloo-progressive-delivery.md)
-* [Blue/Green Deployments](usage/blue-green.md)
-* [Monitoring](usage/monitoring.md)
+* [How it works](usage/how-it-works.md)
+* [Deployment Strategies](usage/deployment-strategies.md)
+* [Metrics Analysis](usage/metrics.md)
+* [Webhooks](usage/webhooks.md)
+* [Alerting](usage/alerting.md)
+* [Monitoring](usage/monitoring.md)

## Tutorials

-* [SMI Istio Canary Deployments](tutorials/flagger-smi-istio.md)
+* [Istio Canary Deployments](tutorials/istio-progressive-delivery.md)
+* [Istio A/B Testing](tutorials/istio-ab-testing.md)
+* [Linkerd Canary Deployments](tutorials/linkerd-progressive-delivery.md)
+* [App Mesh Canary Deployments](tutorials/appmesh-progressive-delivery.md)
+* [Contour Canary Deployments](tutorials/contour-progressive-delivery.md)
+* [Gloo Canary Deployments](tutorials/gloo-progressive-delivery.md)
+* [NGINX Canary Deployments](tutorials/nginx-progressive-delivery.md)
+* [Skipper Canary Deployments](tutorials/skipper-progressive-delivery.md)
+* [Blue/Green Deployments](tutorials/kubernetes-blue-green.md)
+* [Crossover Canary Deployments](tutorials/crossover-progressive-delivery.md)
+* [Canary analysis with Prometheus Operator](tutorials/prometheus-operator.md)
* [Canaries with Helm charts and GitOps](tutorials/canary-helm-gitops.md)
* [Zero downtime deployments](tutorials/zero-downtime-deployments.md)

## Dev

* [Development Guide](dev/dev-guide.md)
* [Release Guide](dev/release-guide.md)
* [Upgrade Guide](dev/upgrade-guide.md)
docs/gitbook/dev/dev-guide.md (new file, 211 lines)
@@ -0,0 +1,211 @@
# Development Guide

This document describes how to build, test and run Flagger from source.

### Setup dev environment

Flagger is written in Go and uses Go modules for dependency management.

On your dev machine install the following tools:

* go >= 1.14
* git >= 2.20
* bash >= 5.0
* make >= 3.81
* kubectl >= 1.16
* kustomize >= 3.5
* helm >= 3.0
* docker >= 19.03

You'll also need a Kubernetes cluster for testing Flagger.
You can use Minikube, Kind, Docker desktop or any remote cluster
(AKS/EKS/GKE/etc) Kubernetes version 1.14 or newer.

To start contributing to Flagger, fork the [repository](https://github.com/weaveworks/flagger) on GitHub.

Create a dir inside your `GOPATH`:

```bash
mkdir -p $GOPATH/src/github.com/weaveworks
```

Clone your fork:

```bash
cd $GOPATH/src/github.com/weaveworks
git clone https://github.com/YOUR_USERNAME/flagger
cd flagger
```

Set Flagger repository as upstream:

```bash
git remote add upstream https://github.com/weaveworks/flagger.git
```

Sync your fork regularly to keep it up-to-date with upstream:

```bash
git fetch upstream
git checkout master
git merge upstream/master
```

### Build

Download Go modules:

```bash
go mod download
```

Build Flagger binary and container image:

```bash
make build
```

Build load tester binary and container image:

```bash
make loadtester-build
```

### Code changes

Before submitting a PR, make sure your changes are covered by unit tests.

If you made changes to `go.mod` run:

```bash
go mod tidy
```

If you made changes to `pkg/apis` regenerate Kubernetes client sets with:

```bash
make codegen
```

Run code formatters:

```bash
make fmt
```

Run unit tests:

```bash
make test
```
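
To iterate on a single area while developing, you can also scope the tests to one package; the path below is illustrative:

```bash
# Run only the controller package tests with verbose output.
go test -v ./pkg/controller/...
```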

### API changes

If you made changes to `pkg/apis` regenerate the Kubernetes client sets with:

```bash
make codegen
```

Update the validation spec in `artifacts/flagger/crd.yaml` and run:

```bash
make crd
```

Note that any change to the CRDs must be accompanied by an update to the Open API schema.

### Manual testing

Install a service mesh and/or an ingress controller on your cluster and deploy Flagger
using one of the install options [listed here](https://docs.flagger.app/install/flagger-install-on-kubernetes).

If you made changes to the CRDs, apply your local copy with:

```bash
kubectl apply -f artifacts/flagger/crd.yaml
```

Shut down the Flagger instance installed on your cluster (replace the namespace with your mesh/ingress one):

```bash
kubectl -n istio-system scale deployment/flagger --replicas=0
```

Port forward to your Prometheus instance:

```bash
kubectl -n istio-system port-forward svc/prometheus 9090:9090
```

Run Flagger locally against your remote cluster by specifying a kubeconfig path:

```bash
go run cmd/flagger/ -kubeconfig=$HOME/.kube/config \
  -log-level=info \
  -mesh-provider=istio \
  -metrics-server=http://localhost:9090
```

Another option to manually test your changes is to build and push the image to your container registry:

```bash
make build
docker tag weaveworks/flagger:latest <YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG>
docker push <YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG>
```

Deploy your image on the cluster and scale up Flagger:

```bash
kubectl -n istio-system set image deployment/flagger flagger=<YOUR-DOCKERHUB-USERNAME>/flagger:<YOUR-TAG>
kubectl -n istio-system scale deployment/flagger --replicas=1
```

Now you can use one of the [tutorials](https://docs.flagger.app/) to manually test your changes.

### Integration testing

Flagger end-to-end tests can be run locally with [Kubernetes Kind](https://github.com/kubernetes-sigs/kind).

Create a Kind cluster:

```bash
kind create cluster
```

Install a service mesh and/or an ingress controller in Kind.

Linkerd example:

```bash
linkerd install | kubectl apply -f -
linkerd check
```

Build the Flagger container image and load it on the cluster:

```bash
make build
docker tag weaveworks/flagger:latest test/flagger:latest
kind load docker-image test/flagger:latest
```

Install Flagger on the cluster and set the test image:

```bash
kubectl apply -k ./kustomize/linkerd
kubectl -n linkerd set image deployment/flagger flagger=test/flagger:latest
kubectl -n linkerd rollout status deployment/flagger
```

Run the Linkerd e2e tests:

```bash
./test/e2e-linkerd-tests.sh
```

For each service mesh and ingress controller there is a dedicated e2e test suite;
choose one that matches your changes from this [list](https://github.com/weaveworks/flagger/tree/master/test).

When you open a pull request on the Flagger repo, the unit and integration tests will be run in CI.
docs/gitbook/dev/release-guide.md (new file, 34 lines)
@@ -0,0 +1,34 @@
# Release Guide

This document describes how to release Flagger.

### Release

To release a new Flagger version (e.g. `2.0.0`) follow these steps (see the consolidated sketch below):

* create a branch: `git checkout -b prep-2.0.0`
* set the version in code and manifests: `TAG=2.0.0 make version-set`
* commit the changes and merge the PR
* check out master: `git checkout master && git pull`
* tag master: `make release`
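
Taken together, and assuming version `2.0.0`, the flow looks roughly like this:

```bash
# Prepare the release on a branch.
git checkout -b prep-2.0.0
TAG=2.0.0 make version-set
git commit -am "Release 2.0.0"
# Open a PR and merge it, then tag master:
git checkout master && git pull
make release
```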

### CI

After the tag has been pushed to GitHub, the CI release pipeline does the following:

* creates a GitHub release
* pushes the Flagger binary and change log to the GitHub release
* pushes the Flagger container image to Docker Hub
* pushes the Helm chart to the github-pages branch
* GitHub Pages publishes the new chart version on the Helm repository

### Docs

The documentation [website](https://docs.flagger.app) is built from the `docs` branch.

After a Flagger release, publish the docs with:

* `git checkout master && git pull`
* `git checkout docs`
* `git rebase master`
* `git push origin docs`
docs/gitbook/dev/upgrade-guide.md (new file, 90 lines)
@@ -0,0 +1,90 @@
# Upgrade Guide

This document describes how to upgrade Flagger.

### Upgrade canaries v1alpha3 to v1beta1

Canary CRD changes in `canaries.flagger.app/v1beta1`:

* the `spec.canaryAnalysis` field has been deprecated and replaced with `spec.analysis`
* the `spec.analysis.interval` and `spec.analysis.threshold` fields are required
* the `status.lastAppliedSpec` and `status.lastPromotedSpec` hashing algorithm changed to `hash/fnv`
* the `spec.analysis.alerts` array can reference `alertproviders.flagger.app/v1beta1` resources
* the `spec.analysis.metrics[].templateRef` can reference a `metrictemplate.flagger.app/v1beta1` resource
* the `metric.threshold` field has been deprecated and replaced with `metric.thresholdRange`
* the `metric.query` field has been deprecated and replaced with `metric.templateRef`
* the `spec.ingressRef.apiVersion` accepts `networking.k8s.io/v1beta1`
* the `spec.targetRef` can reference `DaemonSet` kind
* the `spec.service.meshName` field has been deprecated and is no longer used for `provider: appmesh:v1beta2`

Upgrade procedure:

* install the `v1beta1` CRDs
* update the Flagger deployment
* replace `apiVersion: flagger.app/v1alpha3` with `apiVersion: flagger.app/v1beta1` in all canary manifests
* replace `spec.canaryAnalysis` with `spec.analysis` in all canary manifests
* update the canary manifests in cluster (see the sketch below)
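
A minimal sketch of the manifest rewrite, assuming your canary manifests live under a `./canaries` directory (the path is illustrative) and GNU sed:

```bash
# Rewrite the API version and the renamed analysis field in place.
find ./canaries -name '*.yaml' -exec sed -i \
  -e 's|flagger.app/v1alpha3|flagger.app/v1beta1|' \
  -e 's|canaryAnalysis:|analysis:|' {} +

# Apply the updated manifests to the cluster.
kubectl apply -f ./canaries/
```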

**Note** that after upgrading Flagger, all canaries will be triggered as the hash value used for tracking changes
is computed differently. You can set `spec.skipAnalysis: true` in all canary manifests before upgrading Flagger,
do the upgrade, wait for Flagger to finish the no-op promotions and finally set `skipAnalysis` to `false`.
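
For example, the toggle can be flipped per canary with a merge patch (canary name and namespace are illustrative):

```bash
# Skip the analysis for the podinfo canary in the test namespace.
kubectl -n test patch canary podinfo --type merge -p '{"spec":{"skipAnalysis":true}}'
```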

Update builtin metrics:

* replace `threshold` with `thresholdRange.min` for request-success-rate
* replace `threshold` with `thresholdRange.max` for request-duration

```yaml
  metrics:
  - name: request-success-rate
    thresholdRange:
      min: 99
    interval: 1m
  - name: request-duration
    thresholdRange:
      max: 500
    interval: 1m
```

### Istio telemetry v2

Istio 1.5 comes with a breaking change for Flagger users. In Istio telemetry v2 the metric
`istio_request_duration_seconds_bucket` has been removed and replaced with `istio_request_duration_milliseconds_bucket`,
and this breaks the `request-duration` metric check.

If you are using **Istio 1.4**, you can create a metric template using the old duration metric like this:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: latency
  namespace: istio-system
spec:
  provider:
    type: prometheus
    address: http://prometheus.istio-system:9090
  query: |
    histogram_quantile(
      0.99,
      sum(
        rate(
          istio_request_duration_seconds_bucket{
            reporter="destination",
            destination_workload_namespace="{{ namespace }}",
            destination_workload=~"{{ target }}"
          }[{{ interval }}]
        )
      ) by (le)
    )
```

In the canary manifests, replace the `request-duration` metric with `latency`:

```yaml
  metrics:
  - name: latency
    templateRef:
      name: latency
      namespace: istio-system
    thresholdRange:
      max: 0.500
    interval: 1m
```
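
As a quick sanity check after applying the template, confirm it was registered (names as above):

```bash
kubectl -n istio-system get metrictemplates latency
```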

@@ -4,104 +4,48 @@

**Which deployment strategies are supported by Flagger?**

-Flagger can run automated application analysis, promotion and rollback for the following deployment strategies:
-* Canary (progressive traffic shifting)
-  * Istio, Linkerd, App Mesh, NGINX, Gloo
-* A/B Testing (HTTP headers and cookies traffic routing)
-  * Istio, NGINX
-* Blue/Green (traffic switch)
-  * Kubernetes CNI
-
-For Canary deployments and A/B testing you'll need a Layer 7 traffic management solution like a service mesh or an ingress controller.
-For Blue/Green deployments no service mesh or ingress controller is required.
+Flagger implements the following deployment strategies:
+* [Canary Release](usage/deployment-strategies.md#canary-release)
+* [A/B Testing](usage/deployment-strategies.md#a-b-testing)
+* [Blue/Green](usage/deployment-strategies.md#blue-green-deployments)
+* [Blue/Green Mirroring](usage/deployment-strategies.md#blue-green-with-traffic-mirroring)

**When should I use A/B testing instead of progressive traffic shifting?**

For frontend applications that require session affinity you should use HTTP headers or cookies match conditions
to ensure a set of users will stay on the same version for the whole duration of the canary analysis.
A/B testing is supported by Istio and NGINX only.

-Istio example:
-
-```yaml
-  canaryAnalysis:
-    # schedule interval (default 60s)
-    interval: 1m
-    # total number of iterations
-    iterations: 10
-    # max number of failed iterations before rollback
-    threshold: 2
-    # canary match condition
-    match:
-      - headers:
-          x-canary:
-            regex: ".*insider.*"
-      - headers:
-          cookie:
-            regex: "^(.*?;)?(canary=always)(;.*)?$"
-```
-
-NGINX example:
-
-```yaml
-  canaryAnalysis:
-    interval: 1m
-    threshold: 10
-    iterations: 2
-    match:
-      - headers:
-          x-canary:
-            exact: "insider"
-      - headers:
-          cookie:
-            exact: "canary"
-```
-
-Note that the NGINX ingress controller supports only exact matching for a single header and the cookie value is set to `always`.
-
-The above configurations will route users with the x-canary header or canary cookie to the canary instance during analysis:
-
-```bash
-curl -H 'X-Canary: insider' http://app.example.com
-curl -b 'canary=always' http://app.example.com
-```
-
-**Can I use Flagger to manage applications that live outside of a service mesh?**
-
-For applications that are not deployed on a service mesh, Flagger can orchestrate Blue/Green style deployments
-with Kubernetes L4 networking.
-
-Blue/Green example:
+**When can I use traffic mirroring?**

+Traffic mirroring can be used with the Blue/Green deployment strategy or as a pre-stage in a Canary release.
+Traffic mirroring will copy each incoming request, sending one request to the primary and one to the canary service.
+Mirroring should be used for requests that are **idempotent** or capable of being processed twice (once by the primary and once by the canary).
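
A minimal sketch of turning mirroring on for an existing analysis, assuming an Istio-backed canary named `podinfo` in the `test` namespace (both names illustrative):

```bash
# Enable request mirroring during the analysis iterations.
kubectl -n test patch canary podinfo --type merge -p '{"spec":{"analysis":{"mirror":true}}}'
```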

+**How to retry a failed release?**
+
+A canary analysis is triggered by changes in any of the following objects:
+
+* Deployment/DaemonSet PodSpec (metadata, container image, command, ports, env, resources, etc)
+* ConfigMaps mounted as volumes or mapped to environment variables
+* Secrets mounted as volumes or mapped to environment variables
+
+To retry a release you can add or change an annotation on the pod template:

```yaml
-apiVersion: flagger.app/v1alpha3
-kind: Canary
+apiVersion: apps/v1
+kind: Deployment
 spec:
-  provider: kubernetes
-  canaryAnalysis:
-    interval: 30s
-    threshold: 2
-    iterations: 10
-    metrics:
-    - name: request-success-rate
-      threshold: 99
-      interval: 1m
-    - name: request-duration
-      threshold: 500
-      interval: 30s
-    webhooks:
-      - name: load-test
-        url: http://flagger-loadtester.test/
-        timeout: 5s
-        metadata:
-          type: cmd
-          cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
+  template:
+    metadata:
+      annotations:
+        timestamp: "2020-03-10T14:24:48+0000"
```

-The above configuration will run an analysis for five minutes.
-Flagger starts the load test for the canary service (green version) and checks the Prometheus metrics every 30 seconds.
-If the analysis result is positive, Flagger will promote the canary (green version) to primary (blue version).
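
From the command line, bumping such an annotation can be sketched as follows (deployment name and namespace are illustrative):

```bash
# Update the pod template annotation to re-trigger the canary analysis.
kubectl -n test patch deployment podinfo --type merge \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%S%z)\"}}}}}"
```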

### Kubernetes services

**How is an application exposed inside the cluster?**

@@ -109,7 +53,7 @@ If the analysis result is positive, Flagger will promote the canary (green versi
Assuming the app name is podinfo you can define a canary like:

```yaml
-apiVersion: flagger.app/v1alpha3
+apiVersion: flagger.app/v1beta1
 kind: Canary
 metadata:
   name: podinfo
@@ -120,25 +64,31 @@ spec:
     kind: Deployment
     name: podinfo
   service:
-    # container port (required)
+    # service name (optional)
+    name: podinfo
+    # ClusterIP port number (required)
     port: 9898
     # container port name or number
     targetPort: http
     # port name can be http or grpc (default http)
     portName: http
```

+If the `service.name` is not specified, then `targetRef.name` is used for the apex domain and canary/primary services name prefix.
+You should treat the service name as an immutable field; changing it could result in routing conflicts.
+
Based on the canary spec service, Flagger generates the following Kubernetes ClusterIP services:

-* `<targetRef.name>.<namespace>.svc.cluster.local`
+* `<service.name>.<namespace>.svc.cluster.local`
  selector `app=<name>-primary`
-* `<targetRef.name>-primary.<namespace>.svc.cluster.local`
+* `<service.name>-primary.<namespace>.svc.cluster.local`
  selector `app=<name>-primary`
-* `<targetRef.name>-canary.<namespace>.svc.cluster.local`
+* `<service.name>-canary.<namespace>.svc.cluster.local`
  selector `app=<name>`

This ensures that traffic coming from a namespace outside the mesh to `podinfo.test:9898`
will be routed to the latest stable release of your app.

```yaml
apiVersion: v1
kind: Service
@@ -192,7 +142,7 @@ canary analysis and can be used for conformance testing or load testing.

If port discovery is enabled, Flagger scans the deployment spec and extracts the containers
ports excluding the port specified in the canary service and Envoy sidecar ports.
-`These ports will be used when generating the ClusterIP services.
+These ports will be used when generating the ClusterIP services.

For a deployment that exposes two ports:

@@ -216,7 +166,7 @@ spec:
You can enable port discovery so that Prometheus will be able to reach port `9090` over mTLS:

```yaml
-apiVersion: flagger.app/v1alpha3
+apiVersion: flagger.app/v1beta1
 kind: Canary
 spec:
   service:
@@ -291,6 +241,319 @@ spec:
        topologyKey: kubernetes.io/hostname
```
### Metrics

**How does Flagger measure the request success rate and duration?**

Flagger measures the request success rate and duration using Prometheus queries.

**HTTP requests success rate percentage**

Spec:

```yaml
  analysis:
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      thresholdRange:
        min: 99
      interval: 1m
```

Istio query:

```javascript
sum(
    rate(
        istio_requests_total{
          reporter="destination",
          destination_workload_namespace=~"$namespace",
          destination_workload=~"$workload",
          response_code!~"5.*"
        }[$interval]
    )
)
/
sum(
    rate(
        istio_requests_total{
          reporter="destination",
          destination_workload_namespace=~"$namespace",
          destination_workload=~"$workload"
        }[$interval]
    )
)
```

Envoy query (App Mesh, Contour or Gloo):

```javascript
sum(
    rate(
        envoy_cluster_upstream_rq{
          kubernetes_namespace="$namespace",
          kubernetes_pod_name=~"$workload",
          envoy_response_code!~"5.*"
        }[$interval]
    )
)
/
sum(
    rate(
        envoy_cluster_upstream_rq{
          kubernetes_namespace="$namespace",
          kubernetes_pod_name=~"$workload"
        }[$interval]
    )
)
```

**HTTP requests milliseconds duration P99**

Spec:

```yaml
  analysis:
    metrics:
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      thresholdRange:
        max: 500
      interval: 1m
```

Istio query:

```javascript
histogram_quantile(0.99,
  sum(
    irate(
      istio_request_duration_seconds_bucket{
        reporter="destination",
        destination_workload=~"$workload",
        destination_workload_namespace=~"$namespace"
      }[$interval]
    )
  ) by (le)
)
```

Envoy query (App Mesh, Contour or Gloo):

```javascript
histogram_quantile(0.99,
  sum(
    irate(
      envoy_cluster_upstream_rq_time_bucket{
        kubernetes_pod_name=~"$workload",
        kubernetes_namespace=~"$namespace"
      }[$interval]
    )
  ) by (le)
)
```

> **Note** that the metric interval should be lower or equal to the control loop interval.

**Can I use custom metrics?**

The analysis can be extended with metrics provided by Prometheus, Datadog and AWS CloudWatch. For more details
on how custom metrics can be used please read the [metrics docs](usage/metrics.md).

### Istio routing

**How does Flagger interact with Istio?**

Flagger creates an Istio Virtual Service and Destination Rules based on the Canary service spec.
The service configuration lets you expose an app inside or outside the mesh.
You can also define traffic policies, HTTP match conditions, URI rewrite rules, CORS policies, timeout and retries.

The following spec exposes the `frontend` workload inside the mesh on `frontend.test.svc.cluster.local:9898`
and outside the mesh on `frontend.example.com`. You'll have to specify an Istio ingress gateway for external hosts.

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: frontend
  namespace: test
spec:
  service:
    # container port
    port: 9898
    # service port name (optional, will default to "http")
    portName: http-frontend
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    - mesh
    # Istio virtual service host names (optional)
    hosts:
    - frontend.example.com
    # Istio traffic policy
    trafficPolicy:
      tls:
        # use ISTIO_MUTUAL when mTLS is enabled
        mode: DISABLE
    # HTTP match conditions (optional)
    match:
      - uri:
          prefix: /
    # HTTP rewrite (optional)
    rewrite:
      uri: /
    # Istio retry policy (optional)
    retries:
      attempts: 3
      perTryTimeout: 1s
      retryOn: "gateway-error,connect-failure,refused-stream"
    # Add headers (optional)
    headers:
      request:
        add:
          x-some-header: "value"
    # cross-origin resource sharing policy (optional)
    corsPolicy:
      allowOrigin:
      - example.com
      allowMethods:
      - GET
      allowCredentials: false
      allowHeaders:
      - x-some-header
      maxAge: 24h
```

For the above spec Flagger will generate the following virtual service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: test
  ownerReferences:
    - apiVersion: flagger.app/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Canary
      name: podinfo
      uid: 3a4a40dd-3875-11e9-8e1d-42010a9c0fd1
spec:
  gateways:
    - public-gateway.istio-system.svc.cluster.local
    - mesh
  hosts:
    - frontend.example.com
    - frontend
  http:
  - corsPolicy:
      allowHeaders:
      - x-some-header
      allowMethods:
      - GET
      allowOrigin:
      - example.com
      maxAge: 24h
    headers:
      request:
        add:
          x-some-header: "value"
    match:
    - uri:
        prefix: /
    rewrite:
      uri: /
    route:
    - destination:
        host: podinfo-primary
      weight: 100
    - destination:
        host: podinfo-canary
      weight: 0
    retries:
      attempts: 3
      perTryTimeout: 1s
      retryOn: "gateway-error,connect-failure,refused-stream"
```

For each destination in the virtual service a rule is generated:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend-primary
  namespace: test
spec:
  host: frontend-primary
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend-canary
  namespace: test
spec:
  host: frontend-canary
  trafficPolicy:
    tls:
      mode: DISABLE
```

Flagger keeps the virtual service and destination rules in sync with the canary service spec.
Any direct modification to the virtual service spec will be overwritten.
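
If you need to check what Flagger generated, inspect the objects directly instead of editing them (names as in the example above):

```bash
kubectl -n test get virtualservice frontend -oyaml
kubectl -n test get destinationrule frontend-primary -oyaml
```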

To expose a workload inside the mesh on `http://backend.test.svc.cluster.local:9898`,
the service spec can contain only the container port and the traffic policy:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: backend
  namespace: test
spec:
  service:
    port: 9898
    trafficPolicy:
      tls:
        mode: DISABLE
```

Based on the above spec, Flagger will create several ClusterIP services like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-primary
  ownerReferences:
    - apiVersion: flagger.app/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Canary
      name: backend
      uid: 2ca1a9c7-2ef6-11e9-bd01-42010a9c0145
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9898
      protocol: TCP
      targetPort: 9898
  selector:
    app: backend-primary
```

Flagger works for user-facing apps exposed outside the cluster via an ingress gateway
and for backend HTTP APIs that are accessible only from inside the mesh.

### Istio Ingress Gateway

**How can I expose multiple canaries on the same external domain?**

@@ -299,7 +562,7 @@ Assuming you have two apps, one that servers the main website and one that serve
For each app you can define a canary object as:

```yaml
-apiVersion: flagger.app/v1alpha3
+apiVersion: flagger.app/v1beta1
 kind: Canary
 metadata:
   name: website
@@ -316,7 +579,7 @@ spec:
   rewrite:
     uri: /
---
-apiVersion: flagger.app/v1alpha3
+apiVersion: flagger.app/v1beta1
 kind: Canary
 metadata:
   name: webapi
@@ -347,7 +610,7 @@ Note that host merging only works if the canaries are bounded to a ingress gatew
When deploying Istio with global mTLS enabled, you have to set the TLS mode to `ISTIO_MUTUAL`:

```yaml
-apiVersion: flagger.app/v1alpha3
+apiVersion: flagger.app/v1beta1
 kind: Canary
 spec:
   service:
@@ -359,7 +622,7 @@ spec:
If you run Istio in permissive mode you can disable TLS:

```yaml
-apiVersion: flagger.app/v1alpha3
+apiVersion: flagger.app/v1beta1
 kind: Canary
 spec:
   service:
@@ -1,950 +0,0 @@
|
||||
# How it works
|
||||
|
||||
[Flagger](https://github.com/weaveworks/flagger) takes a Kubernetes deployment and optionally
|
||||
a horizontal pod autoscaler \(HPA\) and creates a series of objects
|
||||
\(Kubernetes deployments, ClusterIP services, virtual service, traffic split or ingress\) to drive the canary analysis and promotion.
|
||||
|
||||

|
||||
|
||||
### Canary Custom Resource
|
||||
|
||||
For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:
|
||||
|
||||
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # service mesh provider (optional)
  # can be: kubernetes, istio, linkerd, appmesh, nginx, gloo, supergloo
  # use the kubernetes provider for Blue/Green style deployments
  provider: istio
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    # container port
    port: 9898
    # service port name (optional, will default to "http")
    portName: http-podinfo
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    # Istio virtual service host names (optional)
    hosts:
    - podinfo.example.com
  # promote the canary without analysing it (default false)
  skipAnalysis: false
  # define the canary analysis timing and KPIs
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 1m
    # max number of failed metric checks before rollback
    threshold: 10
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # Prometheus checks
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s
    # external checks (optional)
    webhooks:
    - name: integration-tests
      url: http://podinfo.test:9898/echo
      timeout: 1m
      # key-value pairs (optional)
      metadata:
        test: "all"
        token: "16688eb5e9f289f1991c"
```
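To put the resource to work, save it and apply it; Flagger picks it up on its next control loop and bootstraps the primary workload. A minimal usage sketch (the file name is hypothetical):

```bash
# apply the canary definition in the test namespace
kubectl -n test apply -f podinfo-canary.yaml

# watch the canary being initialized
kubectl -n test get canary/podinfo
```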
**Note** that the target deployment must have a single label selector in the format `app: <DEPLOYMENT-NAME>`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
```

Besides `app`, Flagger supports `name` and `app.kubernetes.io/name` selectors. If you use a different
convention you can specify your label with the `-selector-labels` flag.
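As a hypothetical sketch of wiring up a custom convention, assuming Flagger runs as a deployment named `flagger` in `istio-system`, the flag can be appended to the container arguments:

```bash
# append a custom selector label to Flagger's arguments
# (JSON patch adds one entry to the existing args array; the "team" label is an example)
kubectl -n istio-system patch deployment/flagger --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-selector-labels=app,name,app.kubernetes.io/name,team"}]'
```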
The target deployment should expose a TCP port that will be used by Flagger to create the ClusterIP Service and
the Istio Virtual Service. The container port from the target deployment should match the `service.port` value.
### Canary status

Get the current status of canary deployments cluster wide:

```bash
kubectl get canaries --all-namespaces

NAMESPACE   NAME       STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo    Progressing   15       2019-06-30T14:05:07Z
prod        frontend   Succeeded     0        2019-06-30T16:15:07Z
prod        backend    Failed        0        2019-06-30T17:05:07Z
```

The status condition reflects the last known state of the canary analysis:

```bash
kubectl -n test get canary/podinfo -oyaml | awk '/status/,0'
```

A successful rollout status:
```yaml
status:
  canaryWeight: 0
  failedChecks: 0
  iterations: 0
  lastAppliedSpec: "14788816656920327485"
  lastPromotedSpec: "14788816656920327485"
  conditions:
  - lastTransitionTime: "2019-07-10T08:23:18Z"
    lastUpdateTime: "2019-07-10T08:23:18Z"
    message: Canary analysis completed successfully, promotion finished.
    reason: Succeeded
    status: "True"
    type: Promoted
```
The `Promoted` status condition can have one of the following reasons:
Initialized, Waiting, Progressing, Finalising, Succeeded or Failed.
A failed canary will have the promoted status set to `false`,
the reason set to `Failed`, and the last applied spec will differ from the last promoted one.

Wait for a successful rollout:

```bash
kubectl wait canary/podinfo --for=condition=promoted
```
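This makes it straightforward to gate a CI/CD pipeline on the analysis outcome; a minimal sketch (the timeout value is an arbitrary choice):

```bash
# block until the canary is promoted, or give up after 10 minutes
kubectl -n test wait canary/podinfo --for=condition=promoted --timeout=10m

# a non-zero exit code means the canary was not promoted in time
echo $?
```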
### Istio routing

Flagger creates an Istio Virtual Service and Destination Rules based on the Canary service spec.
The service configuration lets you expose an app inside or outside the mesh.
You can also define traffic policies, HTTP match conditions, URI rewrite rules, CORS policies, timeouts and retries.

The following spec exposes the `frontend` workload inside the mesh on `frontend.test.svc.cluster.local:9898`
and outside the mesh on `frontend.example.com`. You'll have to specify an Istio ingress gateway for external hosts.
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: frontend
  namespace: test
spec:
  service:
    # container port
    port: 9898
    # service port name (optional, will default to "http")
    portName: http-frontend
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    - mesh
    # Istio virtual service host names (optional)
    hosts:
    - frontend.example.com
    # Istio traffic policy (optional)
    trafficPolicy:
      loadBalancer:
        simple: LEAST_CONN
    # HTTP match conditions (optional)
    match:
    - uri:
        prefix: /
    # HTTP rewrite (optional)
    rewrite:
      uri: /
    # Envoy timeout and retry policy (optional)
    headers:
      request:
        add:
          x-envoy-upstream-rq-timeout-ms: "15000"
          x-envoy-max-retries: "10"
          x-envoy-retry-on: "gateway-error,connect-failure,refused-stream"
    # cross-origin resource sharing policy (optional)
    corsPolicy:
      allowOrigin:
      - example.com
      allowMethods:
      - GET
      allowCredentials: false
      allowHeaders:
      - x-some-header
      maxAge: 24h
```

For the above spec Flagger will generate the following virtual service:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: test
  ownerReferences:
  - apiVersion: flagger.app/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: Canary
    name: podinfo
    uid: 3a4a40dd-3875-11e9-8e1d-42010a9c0fd1
spec:
  gateways:
  - public-gateway.istio-system.svc.cluster.local
  - mesh
  hosts:
  - frontend.example.com
  - frontend
  http:
  - appendHeaders:
      x-envoy-max-retries: "10"
      x-envoy-retry-on: gateway-error,connect-failure,refused-stream
      x-envoy-upstream-rq-timeout-ms: "15000"
    corsPolicy:
      allowHeaders:
      - x-some-header
      allowMethods:
      - GET
      allowOrigin:
      - example.com
      maxAge: 24h
    match:
    - uri:
        prefix: /
    rewrite:
      uri: /
    route:
    - destination:
        host: podinfo-primary
      weight: 100
    - destination:
        host: podinfo-canary
      weight: 0
```

For each destination in the virtual service a rule is generated:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend-primary
  namespace: test
spec:
  host: frontend-primary
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend-canary
  namespace: test
spec:
  host: frontend-canary
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
```
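You can inspect the generated routing objects and the current traffic weights with kubectl; a quick sketch, reusing the `awk` trick from the status section:

```bash
# list the Istio objects Flagger manages for the canary
kubectl -n test get virtualservice,destinationrule

# inspect the current route weights
kubectl -n test get virtualservice/frontend -oyaml | awk '/route:/,0'
```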
Flagger keeps the virtual service and destination rules in sync with the canary service spec.
Any direct modification of the virtual service spec will be overwritten.

To expose a workload inside the mesh on `http://backend.test.svc.cluster.local:9898`,
the service spec can contain only the container port:
```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: backend
  namespace: test
spec:
  service:
    port: 9898
```

Based on the above spec, Flagger will create several ClusterIP services like:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-primary
  ownerReferences:
  - apiVersion: flagger.app/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: Canary
    name: backend
    uid: 2ca1a9c7-2ef6-11e9-bd01-42010a9c0145
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 9898
    protocol: TCP
    targetPort: 9898
  selector:
    app: backend-primary
```

Flagger works for user-facing apps exposed outside the cluster via an ingress gateway
and for backend HTTP APIs that are accessible only from inside the mesh.
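A quick way to confirm the generated services, assuming the `backend` canary above:

```bash
# the apex, primary and canary ClusterIP services should all be listed
kubectl -n test get svc | grep backend
```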
### Canary Stages

![Flagger Canary Stages](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-steps.png)

A canary deployment is triggered by changes in any of the following objects:

* Deployment PodSpec (container image, command, ports, env, resources, etc.)
* ConfigMaps mounted as volumes or mapped to environment variables
* Secrets mounted as volumes or mapped to environment variables

Gated canary promotion stages:

* scan for canary deployments
* check Istio virtual service routes are mapped to primary and canary ClusterIP services
* check primary and canary deployments status
  * halt advancement if a rolling update is underway
  * halt advancement if pods are unhealthy
* call pre-rollout webhooks and check results
  * halt advancement if any hook returned a non HTTP 2xx result
  * increment the failed checks counter
* increase canary traffic weight percentage from 0% to 5% (step weight)
* call rollout webhooks and check results
* check canary HTTP request success rate and latency
  * halt advancement if any metric is under the specified threshold
  * increment the failed checks counter
* check if the number of failed checks reached the threshold
  * route all traffic to primary
  * scale to zero the canary deployment and mark it as failed
  * call post-rollout webhooks
  * post the analysis result to Slack
  * wait for the canary deployment to be updated and start over
* increase canary traffic weight by 5% (step weight) till it reaches 50% (max weight)
* halt advancement if any webhook call fails
* halt advancement while canary request success rate is under the threshold
* halt advancement while canary request duration P99 is over the threshold
* halt advancement if the primary or canary deployment becomes unhealthy
* halt advancement while canary deployment is being scaled up/down by HPA
* promote canary to primary
  * copy ConfigMaps and Secrets from canary to primary
  * copy canary deployment spec template over primary
* wait for primary rolling update to finish
  * halt advancement if pods are unhealthy
* route all traffic to primary
* scale to zero the canary deployment
* mark rollout as finished
* call post-rollout webhooks
* post the analysis result to Slack
* wait for the canary deployment to be updated and start over
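For example, a new canary run starts when the pod spec changes, such as on an image update (container name and tag below are hypothetical):

```bash
# bump the container image; Flagger detects the pod spec change and starts the analysis
kubectl -n test set image deployment/podinfo podinfod=quay.io/stefanprodan/podinfo:3.1.1
```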
### Canary Analysis

The canary analysis runs periodically until it reaches the maximum traffic weight or the failed checks threshold.

Spec:
```yaml
canaryAnalysis:
  # schedule interval (default 60s)
  interval: 1m
  # max number of failed metric checks before rollback
  threshold: 10
  # max traffic percentage routed to canary
  # percentage (0-100)
  maxWeight: 50
  # canary increment step
  # percentage (0-100)
  stepWeight: 2
  # deploy straight to production without
  # the metrics and webhook checks
  skipAnalysis: false
```
The above analysis, if it succeeds, will run for 25 minutes while validating the HTTP metrics and webhooks every minute.
You can determine the minimum time that it takes to validate and promote a canary deployment using this formula:

```
interval * (maxWeight / stepWeight)
```

And the time it takes for a canary to be rolled back when the metrics or webhook checks are failing:

```
interval * threshold
```

In emergency cases, you may want to skip the analysis phase and ship changes directly to production.
At any time you can set `spec.skipAnalysis: true`.
When skip analysis is enabled, Flagger checks if the canary deployment is healthy and
promotes it without analysing it. If an analysis is underway, Flagger cancels it and runs the promotion.
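With the spec above the two bounds work out as follows; a trivial check in shell arithmetic (integer-only, interval expressed in minutes):

```bash
# promotion: interval * (maxWeight / stepWeight) = 1m * (50 / 2) = 25m
echo "$(( 1 * (50 / 2) ))m"

# rollback: interval * threshold = 1m * 10 = 10m
echo "$(( 1 * 10 ))m"
```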
### A/B Testing

Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions.
In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users.
This is particularly useful for frontend applications that require session affinity.

You can enable A/B testing by specifying the HTTP match conditions and the number of iterations:
```yaml
canaryAnalysis:
  # schedule interval (default 60s)
  interval: 1m
  # total number of iterations
  iterations: 10
  # max number of failed iterations before rollback
  threshold: 2
  # canary match condition
  match:
  - headers:
      user-agent:
        regex: "^(?!.*Chrome).*Safari.*"
  - headers:
      cookie:
        regex: "^(.*?;)?(user=test)(;.*)?$"
```
If Flagger finds an HTTP match condition, it will ignore the `maxWeight` and `stepWeight` settings.

The above configuration will run an analysis for ten minutes targeting the Safari users and those that have a test cookie.
You can determine the minimum time that it takes to validate and promote a canary deployment using this formula:

```
interval * iterations
```

And the time it takes for a canary to be rolled back when the metrics or webhook checks are failing:

```
interval * threshold
```

Make sure that the analysis threshold is lower than the number of iterations.
### HTTP Metrics

The canary analysis uses the following Prometheus queries:

**HTTP requests success rate percentage**

Spec:
```yaml
canaryAnalysis:
  metrics:
  - name: request-success-rate
    # minimum req success rate (non 5xx responses)
    # percentage (0-100)
    threshold: 99
    interval: 1m
```

Istio query:
```javascript
sum(
    rate(
        istio_requests_total{
            reporter="destination",
            destination_workload_namespace=~"$namespace",
            destination_workload=~"$workload",
            response_code!~"5.*"
        }[$interval]
    )
)
/
sum(
    rate(
        istio_requests_total{
            reporter="destination",
            destination_workload_namespace=~"$namespace",
            destination_workload=~"$workload"
        }[$interval]
    )
)
```

App Mesh query:
```javascript
sum(
    rate(
        envoy_cluster_upstream_rq{
            kubernetes_namespace="$namespace",
            kubernetes_pod_name=~"$workload",
            response_code!~"5.*"
        }[$interval]
    )
)
/
sum(
    rate(
        envoy_cluster_upstream_rq{
            kubernetes_namespace="$namespace",
            kubernetes_pod_name=~"$workload"
        }[$interval]
    )
)
```

**HTTP requests milliseconds duration P99**

Spec:
```yaml
canaryAnalysis:
  metrics:
  - name: request-duration
    # maximum req duration P99
    # milliseconds
    threshold: 500
    interval: 1m
```

Istio query:
```javascript
histogram_quantile(0.99,
    sum(
        irate(
            istio_request_duration_seconds_bucket{
                reporter="destination",
                destination_workload=~"$workload",
                destination_workload_namespace=~"$namespace"
            }[$interval]
        )
    ) by (le)
)
```

App Mesh query:
```javascript
histogram_quantile(0.99,
    sum(
        irate(
            envoy_cluster_upstream_rq_time_bucket{
                kubernetes_pod_name=~"$workload",
                kubernetes_namespace=~"$namespace"
            }[$interval]
        )
    ) by (le)
)
```

> **Note** that the metric interval should be lower than or equal to the control loop interval.
### Custom Metrics

The canary analysis can be extended with custom Prometheus queries.
```yaml
canaryAnalysis:
  threshold: 1
  maxWeight: 50
  stepWeight: 5
  metrics:
  - name: "404s percentage"
    threshold: 5
    query: |
      100 - sum(
          rate(
              istio_requests_total{
                  reporter="destination",
                  destination_workload_namespace="test",
                  destination_workload="podinfo",
                  response_code!="404"
              }[1m]
          )
      )
      /
      sum(
          rate(
              istio_requests_total{
                  reporter="destination",
                  destination_workload_namespace="test",
                  destination_workload="podinfo"
              }[1m]
          )
      ) * 100
```

The above configuration validates the canary by checking
if the HTTP 404 req/sec percentage is below 5 percent of the total traffic.
If the 404s rate reaches the 5% threshold, then the canary fails.
```yaml
canaryAnalysis:
  threshold: 1
  maxWeight: 50
  stepWeight: 5
  metrics:
  - name: "rpc error rate"
    threshold: 5
    query: |
      100 - sum(
          rate(
              grpc_server_handled_total{
                  grpc_service="my.TestService",
                  grpc_code!="OK"
              }[1m]
          )
      )
      /
      sum(
          rate(
              grpc_server_started_total{
                  grpc_service="my.TestService"
              }[1m]
          )
      ) * 100
```

The above configuration validates the canary by checking if the percentage of
non-OK gRPC req/sec is below 5 percent of the total requests. If the non-OK
rate reaches the 5% threshold, then the canary fails.

When specifying a query, Flagger will run the PromQL query and convert the result to float64.
Then it compares the query result value with the metric threshold value.
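Before wiring a custom query into the analysis, you can sanity-check that it returns a single value by running it against the Prometheus HTTP API; a sketch where the Prometheus URL and the expression are placeholders:

```bash
# run the expression as an instant query and extract the value
# Flagger would compare against the threshold
curl -sG "http://prometheus.istio-system:9090/api/v1/query" \
  --data-urlencode 'query=sum(rate(istio_requests_total{destination_workload="podinfo"}[1m]))' \
  | jq -r '.data.result[0].value[1]'
```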
### Webhooks

The canary analysis can be extended with webhooks. Flagger will call each webhook URL and
determine from the response status code (HTTP 2xx) if the canary is failing or not.

There are four types of hooks:

* Confirm-rollout hooks are executed before scaling up the canary deployment and can be used for manual approval.
The rollout is paused until the hook returns a successful HTTP status code.
* Pre-rollout hooks are executed before routing traffic to the canary.
The canary advancement is paused if a pre-rollout hook fails, and if the number of failures reaches the
threshold the canary will be rolled back.
* Rollout hooks are executed during the analysis on each iteration before the metric checks.
If a rollout hook call fails the canary advancement is paused and eventually rolled back.
* Post-rollout hooks are executed after the canary has been promoted or rolled back.
If a post-rollout hook fails the error is logged.

Spec:
```yaml
canaryAnalysis:
  webhooks:
  - name: "start gate"
    type: confirm-rollout
    url: http://flagger-loadtester.test/gate/approve
  - name: "smoke test"
    type: pre-rollout
    url: http://flagger-helmtester.kube-system/
    timeout: 3m
    metadata:
      type: "helm"
      cmd: "test podinfo --cleanup"
  - name: "load test"
    type: rollout
    url: http://flagger-loadtester.test/
    timeout: 15s
    metadata:
      cmd: "hey -z 1m -q 5 -c 2 http://podinfo-canary.test:9898/"
  - name: "notify"
    type: post-rollout
    url: http://telegram.bot:8080/
    timeout: 5s
    metadata:
      some: "message"
```

> **Note** that the sum of all rollout webhook timeouts should be lower than the analysis interval.
Webhook payload (HTTP POST):

```json
{
    "name": "podinfo",
    "namespace": "test",
    "phase": "Progressing",
    "metadata": {
        "test": "all",
        "token": "16688eb5e9f289f1991c"
    }
}
```

Response status codes:

* 200-202 - advance canary by increasing the traffic weight
* timeout or non-2xx - halt advancement and increment failed checks

On a non-2xx response Flagger will include the response body (if any) in the failed checks log and Kubernetes events.
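You can exercise a webhook endpoint by hand with the same payload shape to check that it answers with a 2xx code; the URL below is a placeholder:

```bash
# simulate Flagger's webhook call and print only the response status code
curl -s -o /dev/null -w "%{http_code}\n" \
  -d '{"name": "podinfo", "namespace": "test", "phase": "Progressing", "metadata": {"test": "all"}}' \
  http://my-webhook.test/
```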
### Load Testing

For workloads that are not receiving constant traffic Flagger can be configured with a webhook
that, when called, will start a load test for the target workload.
If the target workload doesn't receive any traffic during the canary analysis,
Flagger metric checks will fail with "no values found for metric request-success-rate".

Flagger comes with a load testing service based on [rakyll/hey](https://github.com/rakyll/hey)
that generates traffic during analysis when configured as a webhook.

![Flagger Load Testing Webhook](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-load-testing.png)

First you need to deploy the load test runner in a namespace with sidecar injection enabled:
```bash
export REPO=https://raw.githubusercontent.com/weaveworks/flagger/master

kubectl -n test apply -f ${REPO}/artifacts/loadtester/deployment.yaml
kubectl -n test apply -f ${REPO}/artifacts/loadtester/service.yaml
```

Or by using Helm:

```bash
helm repo add flagger https://flagger.app

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test \
--set cmd.timeout=1h
```

When deployed, the load tester API will be available at `http://flagger-loadtester.test/`.

Now you can add webhooks to the canary analysis spec:
```yaml
webhooks:
- name: load-test-get
  url: http://flagger-loadtester.test/
  timeout: 5s
  metadata:
    type: cmd
    cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
- name: load-test-post
  url: http://flagger-loadtester.test/
  timeout: 5s
  metadata:
    type: cmd
    cmd: "hey -z 1m -q 10 -c 2 -m POST -d '{test: 2}' http://podinfo-canary.test:9898/echo"
```

When the canary analysis starts, Flagger will call the webhooks and the load tester will run the `hey` commands
in the background, if they are not already running. This will ensure that during the
analysis, the `podinfo-canary.test` service will receive a steady stream of GET and POST requests.

If your workload is exposed outside the mesh you can point `hey` to the
public URL and use HTTP2.
```yaml
webhooks:
- name: load-test-get
  url: http://flagger-loadtester.test/
  timeout: 5s
  metadata:
    type: cmd
    cmd: "hey -z 1m -q 10 -c 2 -h2 https://podinfo.example.com/"
```

For gRPC services you can use [bojand/ghz](https://github.com/bojand/ghz), which is a similar tool to Hey but for gRPC:
```yaml
webhooks:
- name: grpc-load-test
  url: http://flagger-loadtester.test/
  timeout: 5s
  metadata:
    type: cmd
    cmd: "ghz -z 1m -q 10 -c 2 --insecure podinfo.test:9898"
```

The load tester can run arbitrary commands as long as the binary is present in the container image.
For example, if you want to replace `hey` with another CLI, you can create your own Docker image:
```dockerfile
FROM weaveworks/flagger-loadtester:<VER>

RUN curl -Lo /usr/local/bin/my-cli https://github.com/user/repo/releases/download/ver/my-cli \
    && chmod +x /usr/local/bin/my-cli
```
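Then build and publish the image and point the load tester chart at it; a sketch with hypothetical image names, assuming the chart exposes the usual `image.repository`/`image.tag` values:

```bash
# build and push the customized load tester image
docker build -t example.com/my-loadtester:v1 .
docker push example.com/my-loadtester:v1

# install the load tester with the custom image
helm upgrade -i flagger-loadtester flagger/loadtester \
  --namespace=test \
  --set image.repository=example.com/my-loadtester \
  --set image.tag=v1
```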
### Load Testing Delegation

The load tester can also forward testing tasks to external tools; at the moment only [nGrinder](https://github.com/naver/ngrinder)
is supported.

To use this feature, add a load test task of type 'ngrinder' to the canary analysis spec:
```yaml
webhooks:
- name: load-test-post
  url: http://flagger-loadtester.test/
  timeout: 5s
  metadata:
    # type of this load test task, cmd or ngrinder
    type: ngrinder
    # base url of your nGrinder controller server
    server: http://ngrinder-server:port
    # id of the test to clone from, the test must have been defined
    clone: 100
    # username and base64 encoded password to authenticate against the nGrinder server
    username: admin
    passwd: YWRtaW4=
    # the interval between nGrinder test status polls, defaults to 1s
    pollInterval: 5s
```

When the canary analysis starts, the load tester will initiate a [clone_and_start request](https://github.com/naver/ngrinder/wiki/REST-API-PerfTest)
to the nGrinder server and start a new performance test. The load tester will periodically poll the nGrinder server
for the status of the test, and prevent duplicate requests from being sent in subsequent analysis loops.
### Integration Testing

Flagger comes with a testing service that can run Helm tests or Bats tests when configured as a webhook.

Deploy the Helm test runner in the `kube-system` namespace using the `tiller` service account:

```bash
helm repo add flagger https://flagger.app

helm upgrade -i flagger-helmtester flagger/loadtester \
--namespace=kube-system \
--set serviceAccountName=tiller
```

When deployed, the Helm tester API will be available at `http://flagger-helmtester.kube-system/`.

Now you can add pre-rollout webhooks to the canary analysis spec:
```yaml
canaryAnalysis:
  webhooks:
  - name: "smoke test"
    type: pre-rollout
    url: http://flagger-helmtester.kube-system/
    timeout: 3m
    metadata:
      type: "helm"
      cmd: "test {{ .Release.Name }} --cleanup"
```

When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary.
If the Helm test fails, Flagger will retry until the analysis threshold is reached and the canary is rolled back.

As an alternative to Helm you can use the [Bash Automated Testing System](https://github.com/bats-core/bats-core) to run your tests.
```yaml
canaryAnalysis:
  webhooks:
  - name: "acceptance tests"
    type: pre-rollout
    url: http://flagger-batstester.default/
    timeout: 5m
    metadata:
      type: "bash"
      cmd: "bats /tests/acceptance.bats"
```

Note that you should create a ConfigMap with your Bats tests and mount it inside the tester container.
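For instance, the tests can be shipped as a ConfigMap and mounted at the path referenced by `cmd`; a sketch with hypothetical file and ConfigMap names:

```bash
# package the Bats tests as a ConfigMap in the tester's namespace
kubectl -n default create configmap bats-tests --from-file=acceptance.bats

# the tester deployment then needs a volume and volumeMount exposing this ConfigMap at /tests
```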
### Manual Gating

For manual approval of a canary deployment you can use the `confirm-rollout` webhook.
The confirmation hooks are executed before the pre-rollout hooks.
Flagger will halt the canary traffic shifting and analysis until the confirm webhook returns HTTP status 200.

Manual gating with Flagger's tester:
```yaml
canaryAnalysis:
  webhooks:
  - name: "gate"
    type: confirm-rollout
    url: http://flagger-loadtester.test/gate/halt
```

The `/gate/halt` endpoint returns HTTP 403, thus blocking the rollout.

If you have notifications enabled, Flagger will post a message to Slack or MS Teams if a canary rollout is waiting for approval.

Change the URL to `/gate/approve` to start the canary analysis:
```yaml
canaryAnalysis:
  webhooks:
  - name: "gate"
    type: confirm-rollout
    url: http://flagger-loadtester.test/gate/approve
```

Manual gating can be driven with Flagger's tester API. Set the confirmation URL to `/gate/check`:
```yaml
canaryAnalysis:
  webhooks:
  - name: "ask for confirmation"
    type: confirm-rollout
    url: http://flagger-loadtester.test/gate/check
```

By default the gate is closed; you can start or resume the canary rollout with:
```bash
kubectl -n test exec -it flagger-loadtester-xxxx-xxxx sh

curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/gate/open
```

You can pause the rollout at any time with:

```bash
curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/gate/close
```

If a canary analysis is paused the status will change to waiting:

```bash
kubectl get canary/podinfo

NAME      STATUS    WEIGHT
podinfo   Waiting   0
```