Mirror of https://github.com/stakater/Reloader.git (synced 2026-02-14 18:09:50 +00:00)

Compare commits: 693 commits
[Commit list omitted: the compare view listed 693 bare commit SHAs; the Author, Date, and message columns were empty in the capture.]
.github/actions/loadtest/action.yml (new file, 267 lines)

```yaml
name: 'Reloader Load Test'
description: 'Run Reloader load tests with A/B comparison support'

inputs:
  old-ref:
    description: 'Git ref for "old" version (optional, enables A/B comparison)'
    required: false
    default: ''
  new-ref:
    description: 'Git ref for "new" version (defaults to current checkout)'
    required: false
    default: ''
  old-image:
    description: 'Pre-built container image for "old" version (alternative to old-ref)'
    required: false
    default: ''
  new-image:
    description: 'Pre-built container image for "new" version (alternative to new-ref)'
    required: false
    default: ''
  scenarios:
    description: 'Scenarios to run: S1,S4,S6 or all'
    required: false
    default: 'S1,S4,S6'
  test-type:
    description: 'Test type label for summary: quick or full'
    required: false
    default: 'quick'
  duration:
    description: 'Test duration in seconds'
    required: false
    default: '60'
  kind-cluster:
    description: 'Name of existing Kind cluster (if empty, creates new one)'
    required: false
    default: ''
  post-comment:
    description: 'Post results as PR comment'
    required: false
    default: 'false'
  pr-number:
    description: 'PR number for commenting (required if post-comment is true)'
    required: false
    default: ''
  github-token:
    description: 'GitHub token for posting comments'
    required: false
    default: ${{ github.token }}
  comment-header:
    description: 'Optional header text for the comment'
    required: false
    default: ''

outputs:
  status:
    description: 'Overall test status: pass or fail'
    value: ${{ steps.run.outputs.status }}
  summary:
    description: 'Markdown summary of results'
    value: ${{ steps.summary.outputs.summary }}
  pass-count:
    description: 'Number of passed scenarios'
    value: ${{ steps.summary.outputs.pass_count }}
  fail-count:
    description: 'Number of failed scenarios'
    value: ${{ steps.summary.outputs.fail_count }}

runs:
  using: 'composite'
  steps:
    - name: Determine images to use
      id: images
      shell: bash
      run: |
        # Determine old image
        if [ -n "${{ inputs.old-image }}" ]; then
          echo "old=${{ inputs.old-image }}" >> $GITHUB_OUTPUT
        elif [ -n "${{ inputs.old-ref }}" ]; then
          echo "old=localhost/reloader:old" >> $GITHUB_OUTPUT
          echo "build_old=true" >> $GITHUB_OUTPUT
        else
          echo "old=" >> $GITHUB_OUTPUT
        fi

        # Determine new image
        if [ -n "${{ inputs.new-image }}" ]; then
          echo "new=${{ inputs.new-image }}" >> $GITHUB_OUTPUT
        elif [ -n "${{ inputs.new-ref }}" ]; then
          echo "new=localhost/reloader:new" >> $GITHUB_OUTPUT
          echo "build_new=true" >> $GITHUB_OUTPUT
        else
          # Default: build from current checkout
          echo "new=localhost/reloader:new" >> $GITHUB_OUTPUT
          echo "build_new_current=true" >> $GITHUB_OUTPUT
        fi

    - name: Build old image from ref
      if: steps.images.outputs.build_old == 'true'
      shell: bash
      run: |
        CURRENT_SHA=$(git rev-parse HEAD)
        git checkout ${{ inputs.old-ref }}
        docker build -t localhost/reloader:old .
        echo "Built old image from ref: ${{ inputs.old-ref }}"
        git checkout $CURRENT_SHA

    - name: Build new image from ref
      if: steps.images.outputs.build_new == 'true'
      shell: bash
      run: |
        CURRENT_SHA=$(git rev-parse HEAD)
        git checkout ${{ inputs.new-ref }}
        docker build -t localhost/reloader:new .
        echo "Built new image from ref: ${{ inputs.new-ref }}"
        git checkout $CURRENT_SHA

    - name: Build new image from current checkout
      if: steps.images.outputs.build_new_current == 'true'
      shell: bash
      run: |
        docker build -t localhost/reloader:new .
        echo "Built new image from current checkout"

    - name: Build loadtest binary
      shell: bash
      run: |
        cd ${{ github.workspace }}/test/loadtest
        go build -o loadtest ./cmd/loadtest

    - name: Determine cluster name
      id: cluster
      shell: bash
      run: |
        if [ -n "${{ inputs.kind-cluster }}" ]; then
          echo "name=${{ inputs.kind-cluster }}" >> $GITHUB_OUTPUT
          echo "skip=true" >> $GITHUB_OUTPUT
        else
          echo "name=reloader-loadtest" >> $GITHUB_OUTPUT
          echo "skip=false" >> $GITHUB_OUTPUT
        fi

    - name: Load images into Kind
      shell: bash
      run: |
        CLUSTER="${{ steps.cluster.outputs.name }}"

        if [ -n "${{ steps.images.outputs.old }}" ]; then
          echo "Loading old image: ${{ steps.images.outputs.old }}"
          kind load docker-image "${{ steps.images.outputs.old }}" --name "$CLUSTER" || true
        fi

        echo "Loading new image: ${{ steps.images.outputs.new }}"
        kind load docker-image "${{ steps.images.outputs.new }}" --name "$CLUSTER" || true

    - name: Run load tests
      id: run
      shell: bash
      run: |
        cd ${{ github.workspace }}/test/loadtest

        ARGS="--new-image=${{ steps.images.outputs.new }}"
        ARGS="$ARGS --scenario=${{ inputs.scenarios }}"
        ARGS="$ARGS --duration=${{ inputs.duration }}"
        ARGS="$ARGS --cluster-name=${{ steps.cluster.outputs.name }}"
        ARGS="$ARGS --skip-image-load"

        if [ -n "${{ steps.images.outputs.old }}" ]; then
          ARGS="$ARGS --old-image=${{ steps.images.outputs.old }}"
        fi

        if [ "${{ steps.cluster.outputs.skip }}" = "true" ]; then
          ARGS="$ARGS --skip-cluster"
        fi

        echo "Running: ./loadtest run $ARGS"
        if ./loadtest run $ARGS; then
          echo "status=pass" >> $GITHUB_OUTPUT
        else
          echo "status=fail" >> $GITHUB_OUTPUT
        fi

    - name: Generate summary
      id: summary
      shell: bash
      run: |
        cd ${{ github.workspace }}/test/loadtest

        # Generate markdown summary
        ./loadtest summary \
          --results-dir=./results \
          --test-type=${{ inputs.test-type }} \
          --format=markdown > summary.md 2>/dev/null || true

        # Output to GitHub Step Summary
        cat summary.md >> $GITHUB_STEP_SUMMARY

        # Store summary for output (using heredoc for multiline)
        {
          echo 'summary<<EOF'
          cat summary.md
          echo 'EOF'
        } >> $GITHUB_OUTPUT

        # Get pass/fail counts from JSON
        COUNTS=$(./loadtest summary --format=json 2>/dev/null | head -20 || echo '{}')
        echo "pass_count=$(echo "$COUNTS" | grep -o '"pass_count": [0-9]*' | grep -o '[0-9]*' || echo 0)" >> $GITHUB_OUTPUT
        echo "fail_count=$(echo "$COUNTS" | grep -o '"fail_count": [0-9]*' | grep -o '[0-9]*' || echo 0)" >> $GITHUB_OUTPUT

    - name: Post PR comment
      if: inputs.post-comment == 'true' && inputs.pr-number != ''
      continue-on-error: true
      uses: actions/github-script@v7
      with:
        github-token: ${{ inputs.github-token }}
        script: |
          const fs = require('fs');
          const summaryPath = '${{ github.workspace }}/test/loadtest/summary.md';
          let summary = 'No results available';
          try {
            summary = fs.readFileSync(summaryPath, 'utf8');
          } catch (e) {
            console.log('Could not read summary file:', e.message);
          }

          const header = '${{ inputs.comment-header }}';
          const status = '${{ steps.run.outputs.status }}';
          const statusEmoji = status === 'pass' ? ':white_check_mark:' : ':x:';

          const body = [
            header ? header : `## ${statusEmoji} Load Test Results (${{ inputs.test-type }})`,
            '',
            summary,
            '',
            '---',
            `**Artifacts:** [Download](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})`,
          ].join('\n');

          try {
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: ${{ inputs.pr-number }},
              body: body
            });
            console.log('Comment posted successfully');
          } catch (error) {
            if (error.status === 403) {
              console.log('Could not post comment (fork PR with restricted permissions). Use /loadtest command to run with comment posting.');
            } else {
              throw error;
            }
          }

    - name: Upload results
      uses: actions/upload-artifact@v4
      if: always()
      with:
        name: loadtest-${{ inputs.test-type }}-results
        path: |
          ${{ github.workspace }}/test/loadtest/results/
        retention-days: 30

    - name: Cleanup Kind cluster (only if we created it)
      if: always() && steps.cluster.outputs.skip == 'false'
      shell: bash
      run: |
        kind delete cluster --name ${{ steps.cluster.outputs.name }} || true
```
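For reference, a minimal sketch of how another workflow could consume the composite action above. The job name, checkout step, and image tag are illustrative assumptions, not taken from the repository; the inputs and the `status` output are the ones the action declares.

```yaml
# Hypothetical caller of .github/actions/loadtest (job name and image tag are examples).
jobs:
  quick-loadtest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run quick load test
        id: lt
        uses: ./.github/actions/loadtest
        with:
          new-image: 'localhost/reloader:dev'  # pre-built image, skips the build-from-ref steps
          scenarios: 'S1,S4,S6'
          duration: '60'
      - name: Fail the job on a load-test regression
        if: steps.lt.outputs.status == 'fail'
        run: exit 1
```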
.github/md_config.json (modified, 3 diff lines)

```diff
@@ -3,5 +3,6 @@
     {
       "pattern": "^(?!http).+"
     }
-  ]
+  ],
+  "retryOn429": true
 }
```
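Only lines 3 through 8 of the config appear in the hunk. Assuming the two unshown opening lines declare an `ignorePatterns` array (the usual markdown-link-check layout, which the `"pattern"` entry suggests), the whole file after this change would plausibly read:

```json
{
  "ignorePatterns": [
    {
      "pattern": "^(?!http).+"
    }
  ],
  "retryOn429": true
}
```

`retryOn429` tells markdown-link-check to retry links that respond with HTTP 429 (rate limiting) instead of reporting them dead.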
.github/workflows/init-branch-release.yaml (new file, 68 lines)

```yaml
name: Init Release
on:
  workflow_dispatch:
    inputs:
      TARGET_BRANCH:
        description: 'TARGET_BRANCH on which release will be based'
        required: true
        type: string

      TARGET_VERSION:
        description: 'TARGET_VERSION to build kubernetes manifests with using Kustomize'
        required: true
        type: string

permissions: {}

jobs:
  prepare-release:
    permissions:
      contents: write # for peter-evans/create-pull-request to create branch
      pull-requests: write # for peter-evans/create-pull-request to create a PR
    name: Automatically generate version and manifests on ${{ inputs.TARGET_BRANCH }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v5.0.0
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}
          ref: ${{ inputs.TARGET_BRANCH }}

      - name: Check if TARGET_VERSION is well formed
        run: |
          set -xue
          # Target version must not contain 'v' prefix
          if echo "${{ inputs.TARGET_VERSION }}" | grep -e '^v'; then
            echo "::error::Target version '${{ inputs.TARGET_VERSION }}' should not begin with a 'v' prefix, refusing to continue." >&2
            exit 1
          fi

      - name: Create VERSION information
        run: |
          set -ue
          echo "Bumping version from $(cat VERSION) to ${{ inputs.TARGET_VERSION }}"
          echo "${{ inputs.TARGET_VERSION }}" > VERSION

      - name: Replace latest tag with version from input
        run: |
          set -ue
          VERSION=${{ inputs.TARGET_VERSION }} make update-manifests-version
          git diff

      - name: Generate new set of manifests
        run: |
          set -ue
          make k8s-manifests
          git diff

      - name: Create pull request
        uses: peter-evans/create-pull-request@v7.0.8
        with:
          commit-message: "Bump version to ${{ inputs.TARGET_VERSION }}"
          title: "Bump version to ${{ inputs.TARGET_VERSION }} on ${{ inputs.TARGET_BRANCH }} branch"
          body: Updating VERSION and manifests to ${{ inputs.TARGET_VERSION }}
          branch: update-version
          branch-suffix: random
          signoff: true
          labels: release
```
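The workflow only has a `workflow_dispatch` trigger, so it must be started by hand. A sketch of an invocation with the GitHub CLI; the branch and version values are examples only:

```bash
# Hypothetical invocation (values are examples).
gh workflow run init-branch-release.yaml \
  --repo stakater/Reloader \
  --ref master \
  -f TARGET_BRANCH=release-v1 \
  -f TARGET_VERSION=1.2.3   # must not start with 'v', or the guard step exits 1
```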
.github/workflows/loadtest.yml (new file, 112 lines)

```yaml
name: Load Test (Full)

on:
  issue_comment:
    types: [created]

permissions:
  contents: read
  pull-requests: write
  issues: write

jobs:
  loadtest:
    # Only run on PR comments with /loadtest command
    if: |
      github.event.issue.pull_request &&
      contains(github.event.comment.body, '/loadtest')
    runs-on: ubuntu-latest

    steps:
      - name: Add reaction to comment
        uses: actions/github-script@v7
        with:
          script: |
            await github.rest.reactions.createForIssueComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              comment_id: context.payload.comment.id,
              content: 'rocket'
            });

      - name: Get PR details
        id: pr
        uses: actions/github-script@v7
        with:
          script: |
            const pr = await github.rest.pulls.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number
            });
            core.setOutput('head_ref', pr.data.head.ref);
            core.setOutput('head_sha', pr.data.head.sha);
            core.setOutput('base_ref', pr.data.base.ref);
            core.setOutput('base_sha', pr.data.base.sha);
            console.log(`PR #${context.issue.number}: ${pr.data.head.ref} -> ${pr.data.base.ref}`);

      - name: Checkout PR branch
        uses: actions/checkout@v4
        with:
          ref: ${{ steps.pr.outputs.head_sha }}
          fetch-depth: 0 # Full history for building from base ref

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.26'
          cache: false

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Install kind
        run: |
          curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
          chmod +x ./kind
          sudo mv ./kind /usr/local/bin/kind

      - name: Install kubectl
        run: |
          curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
          chmod +x kubectl
          sudo mv kubectl /usr/local/bin/kubectl

      - name: Run full A/B comparison load test
        id: loadtest
        uses: ./.github/actions/loadtest
        with:
          old-ref: ${{ steps.pr.outputs.base_sha }}
          new-ref: ${{ steps.pr.outputs.head_sha }}
          scenarios: 'all'
          test-type: 'full'
          post-comment: 'true'
          pr-number: ${{ github.event.issue.number }}
          comment-header: |
            ## Load Test Results (Full A/B Comparison)
            **Comparing:** `${{ steps.pr.outputs.base_ref }}` → `${{ steps.pr.outputs.head_ref }}`
            **Triggered by:** @${{ github.event.comment.user.login }}

      - name: Add success reaction
        if: steps.loadtest.outputs.status == 'pass'
        uses: actions/github-script@v7
        with:
          script: |
            await github.rest.reactions.createForIssueComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              comment_id: context.payload.comment.id,
              content: '+1'
            });

      - name: Add failure reaction
        if: steps.loadtest.outputs.status == 'fail'
        uses: actions/github-script@v7
        with:
          script: |
            await github.rest.reactions.createForIssueComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              comment_id: context.payload.comment.id,
              content: '-1'
            });
```
.github/workflows/pull_request-helm.yaml (new file, 90 lines)

```yaml
name: Pull Request Workflow for Helm Chart changes

on:
  pull_request:
    branches:
      - master
    paths:
      - 'deployments/kubernetes/chart/reloader/**'

env:
  DOCKER_FILE_PATH: Dockerfile
  DOCKER_UBI_FILE_PATH: Dockerfile.ubi
  KUBERNETES_VERSION: "1.30.0"
  KIND_VERSION: "0.23.0"
  REGISTRY: ghcr.io

jobs:

  helm-chart-validation:
    permissions:
      contents: read

    runs-on: ubuntu-latest
    name: Helm Chart Validation

    steps:

      - name: Check out code
        uses: actions/checkout@v5
        with:
          ref: ${{github.event.pull_request.head.sha}}
          fetch-depth: 0

      # Setting up helm binary
      - name: Set up Helm
        uses: azure/setup-helm@v4
        with:
          version: v3.11.3

      - name: Helm chart unit tests
        uses: d3adb5/helm-unittest-action@v2
        with:
          charts: deployments/kubernetes/chart/reloader

  helm-version-validation:
    needs: helm-chart-validation

    permissions:
      contents: read

    runs-on: ubuntu-latest
    name: Helm Version Validation
    if: ${{ contains(github.event.pull_request.labels.*.name, 'release/helm-chart') }}

    steps:

      - name: Check out code
        uses: actions/checkout@v5
        with:
          ref: ${{github.event.pull_request.head.sha}}
          fetch-depth: 0

      - name: Add Stakater Helm Repo
        run: |
          helm repo add stakater https://stakater.github.io/stakater-charts

      - name: Get version for chart from helm repo
        id: chart_eval
        run: |
          current_chart_version=$(helm search repo stakater/reloader | tail -n 1 | awk '{print $2}')
          echo "CURRENT_CHART_VERSION=$(echo ${current_chart_version})" >> $GITHUB_OUTPUT

      - name: Get Updated Chart version from Chart.yaml
        uses: mikefarah/yq@master
        id: new_chart_version
        with:
          cmd: yq e '.version' deployments/kubernetes/chart/reloader/Chart.yaml

      - name: Check Version
        uses: aleoyakas/check-semver-increased-action@v1
        id: check-version
        with:
          current-version: ${{ steps.new_chart_version.outputs.result }}
          previous-version: ${{ steps.chart_eval.outputs.CURRENT_CHART_VERSION }}

      - name: Fail if Helm Chart version isn't updated
        if: steps.check-version.outputs.is-version-increased != 'true'
        run: |
          echo "Helm Chart Version wasn't updated"
          exit 1
```
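The version gate above compares the chart version already published in the Stakater Helm repo against the one proposed in `Chart.yaml`. A sketch of reproducing that comparison locally, assuming `helm` and `yq` are installed; the variable names are illustrative:

```bash
# Hypothetical local check mirroring the helm-version-validation job.
helm repo add stakater https://stakater.github.io/stakater-charts
published=$(helm search repo stakater/reloader | tail -n 1 | awk '{print $2}')
proposed=$(yq e '.version' deployments/kubernetes/chart/reloader/Chart.yaml)
echo "published=$published proposed=$proposed"  # the PR must increase this semver
```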
.github/workflows/pull_request.yaml (modified, 175 diff lines)

```diff
@@ -1,37 +1,58 @@
-name: Pull Request
+name: Pull Request Workflow for Code changes
 
 on:
-  pull_request_target:
+  pull_request:
     branches:
       - master
       - 'v**'
+    paths:
+      - '**'
+      - '!.markdownlint.yaml'
+      - '!.vale.ini'
+      - '!Dockerfile-docs'
+      - '!docs-nginx.conf'
+      - '!docs/**'
+      - '!theme_common'
+      - '!theme_override'
+      - '!deployments/kubernetes/chart/reloader/**'
 
 env:
   DOCKER_FILE_PATH: Dockerfile
   DOCKER_UBI_FILE_PATH: Dockerfile.ubi
-  KUBERNETES_VERSION: "1.19.0"
-  KIND_VERSION: "0.17.0"
+  KUBERNETES_VERSION: "1.30.0"
+  KIND_VERSION: "0.23.0"
   REGISTRY: ghcr.io
+  RELOADER_EDITION: oss
 
 jobs:
   qa:
-    uses: stakater/.github/.github/workflows/pull_request_doc_qa.yaml@v0.0.52
+    uses: stakater/.github/.github/workflows/pull_request_doc_qa.yaml@v0.0.163
     with:
       MD_CONFIG: .github/md_config.json
-      DOC_SRC: README.md docs
+      DOC_SRC: README.md
       MD_LINT_CONFIG: .markdownlint.yaml
 
   build:
 
+    permissions:
+      contents: read
+      pull-requests: write
+      issues: write
+
     runs-on: ubuntu-latest
     name: Build
     if: "! contains(toJSON(github.event.commits.*.message), '[skip-ci]')"
     steps:
       - name: Check out code
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
        with:
           ref: ${{github.event.pull_request.head.sha}}
           fetch-depth: 0
 
       # Setting up helm binary
       - name: Set up Helm
-        uses: azure/setup-helm@v3
+        uses: azure/setup-helm@v4
         with:
           version: v3.11.3
 
       - name: Helm chart unit tests
         uses: d3adb5/helm-unittest-action@v2
@@ -39,22 +60,30 @@ jobs:
           charts: deployments/kubernetes/chart/reloader
 
       - name: Set up Go
-        uses: actions/setup-go@v4
+        uses: actions/setup-go@v6
         with:
           go-version-file: 'go.mod'
           check-latest: true
           cache: true
 
+      - name: Create timestamp
+        id: prep
+        run: echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
+
+
+      # Get highest tag and remove any suffixes with '-'
+      - name: Get Highest tag
+        id: highest_tag
+        run: |
+          highest=$(git tag -l --sort -version:refname | head -n 1)
+          echo "tag=${highest%%-*}" >> $GITHUB_OUTPUT
+
       - name: Install Dependencies
         run: |
           make install
 
       - name: Run golangci-lint
-        uses: golangci/golangci-lint-action@v3
-        with:
-          version: v1.51.1
-          only-new-issues: false
-          args: --timeout 10m
+        run: make lint
 
       - name: Helm Lint
         run: |
@@ -65,8 +94,7 @@ jobs:
         run: |
           curl -LO "https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kubectl"
           sudo install ./kubectl /usr/local/bin/ && rm kubectl
-          kubectl version --short --client
-          kubectl version --short --client | grep -q ${KUBERNETES_VERSION}
+          kubectl version --client=true
 
       - name: Install Kind
         run: |
@@ -80,9 +108,21 @@ jobs:
           kind create cluster
           kubectl cluster-info
 
+
       - name: Test
         run: make test
 
+      - name: Run quick A/B load tests
+        uses: ./.github/actions/loadtest
+        with:
+          old-ref: ${{ github.event.pull_request.base.sha }}
+          # new-ref defaults to current checkout (PR branch)
+          scenarios: 'S1,S4,S6'
+          test-type: 'quick'
+          kind-cluster: 'kind' # Use the existing cluster created above
+          post-comment: 'true'
+          pr-number: ${{ github.event.pull_request.number }}
+
       - name: Generate Tags
         id: generate_tag
         run: |
@@ -98,71 +138,26 @@ jobs:
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
 
-      - name: Login to Docker Registry
-        uses: docker/login-action@v3
-        with:
-          username: ${{ secrets.STAKATER_DOCKERHUB_USERNAME }}
-          password: ${{ secrets.STAKATER_DOCKERHUB_PASSWORD }}
-
-      - name: Generate image repository path for Docker registry
-        run: |
-          echo DOCKER_IMAGE_REPOSITORY=$(echo ${{ github.repository }} | tr '[:upper:]' '[:lower:]') >> $GITHUB_ENV
-
-      - name: Build and Push Docker Image to Docker registry
-        uses: docker/build-push-action@v5
-        with:
-          context: .
-          file: ${{ env.DOCKER_FILE_PATH }}
-          pull: true
-          push: true
-          build-args: BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
-          cache-to: type=inline
-          platforms: linux/amd64,linux/arm,linux/arm64
-          tags: |
-            ${{ env.DOCKER_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.GIT_TAG }}
-          labels: |
-            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
-            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
-            org.opencontainers.image.revision=${{ github.sha }}
-
-      - name: Build and Push Docker UBI Image to Docker registry
-        uses: docker/build-push-action@v5
-        with:
-          context: .
-          file: ${{ env.DOCKER_UBI_FILE_PATH }}
-          pull: true
-          push: true
-          build-args: |
-            BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
-            BUILDER_IMAGE=${{ env.DOCKER_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.GIT_TAG }}
-          cache-to: type=inline
-          platforms: linux/amd64,linux/arm64
-          tags: |
-            ${{ env.DOCKER_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.GIT_UBI_TAG }}
-          labels: |
-            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
-            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
-            org.opencontainers.image.revision=${{ github.sha }}
-
       - name: Login to ghcr registry
         uses: docker/login-action@v3
         with:
           registry: ${{env.REGISTRY}}
           username: ${{github.actor}}
           password: ${{secrets.GITHUB_TOKEN}}
 
       - name: Generate image repository path for ghcr registry
         run: |
           echo GHCR_IMAGE_REPOSITORY=${{env.REGISTRY}}/$(echo ${{ github.repository }} | tr '[:upper:]' '[:lower:]') >> $GITHUB_ENV
 
-      - name: Build and Push Docker Image to ghcr registry
-        uses: docker/build-push-action@v5
+      # To identify any broken changes in dockerfiles or dependencies
+
+      - name: Build Docker Image
+        uses: docker/build-push-action@v6
         with:
           context: .
           file: ${{ env.DOCKER_FILE_PATH }}
           pull: true
-          push: true
-          build-args: BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
+          push: false
+          build-args: |
+            VERSION=merge-${{ steps.generate_tag.outputs.GIT_TAG }}
+            COMMIT=${{github.event.pull_request.head.sha}}
+            BUILD_DATE=${{ steps.prep.outputs.created }}
+            EDITION=${{ env.RELOADER_EDITION }}
+            BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
+
           cache-to: type=inline
           platforms: linux/amd64,linux/arm,linux/arm64
           tags: |
@@ -172,16 +167,20 @@ jobs:
             org.opencontainers.image.created=${{ steps.prep.outputs.created }}
             org.opencontainers.image.revision=${{ github.sha }}
 
-      - name: Build and Push Docker UBI Image to ghcr registry
-        uses: docker/build-push-action@v5
+      - name: Build Docker UBI Image
+        uses: docker/build-push-action@v6
         with:
           context: .
           file: ${{ env.DOCKER_UBI_FILE_PATH }}
           pull: true
-          push: true
+          push: false
           build-args: |
+            VERSION=merge-${{ steps.generate_tag.outputs.GIT_UBI_TAG }}
+            COMMIT=${{github.event.pull_request.head.sha}}
+            BUILD_DATE=${{ steps.prep.outputs.created }}
+            EDITION=${{ env.RELOADER_EDITION }}
             BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
-            BUILDER_IMAGE=${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.GIT_TAG }}
+            BUILDER_IMAGE=${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.highest_tag.outputs.tag }}
           cache-to: type=inline
           platforms: linux/amd64,linux/arm64
           tags: |
@@ -190,23 +189,3 @@ jobs:
             org.opencontainers.image.source=${{ github.event.repository.clone_url }}
             org.opencontainers.image.created=${{ steps.prep.outputs.created }}
             org.opencontainers.image.revision=${{ github.sha }}
-
-      - name: Comment on PR
-        uses: mshick/add-pr-comment@v2
-        if: always()
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          message-success: '@${{ github.actor }} Images are available for testing. `docker pull ${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.GIT_TAG }}`\n`docker pull ${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.GIT_UBI_TAG }}`'
-          message-failure: '@${{ github.actor }} Yikes! You better fix it before anyone else finds out! [Build](https://github.com/${{ github.repository }}/commit/${{ github.event.pull_request.head.sha }}/checks) has Failed!'
-          allow-repeats: true
-
-      - name: Notify Slack
-        uses: 8398a7/action-slack@v3
-        if: always() # Pick up events even if the job fails or is canceled.
-        with:
-          status: ${{ job.status }}
-          fields: repo,author,action,eventName,ref,workflow
-        env:
-          GITHUB_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
-          SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}
```
.github/workflows/pull_request_docs.yaml (new file, 33 lines)

```yaml
name: Pull Request for Documentation Changes

on:
  pull_request:
    branches:
      - master
    paths:
      - '.markdownlint.yaml'
      - '.vale.ini'
      - 'Dockerfile-docs'
      - 'docs-nginx.conf'
      - 'docs/**'
      - 'theme_common'
      - 'theme_override'
      - 'deployments/kubernetes/chart/reloader/README.md'

jobs:
  qa:
    uses: stakater/.github/.github/workflows/pull_request_doc_qa.yaml@v0.0.163
    with:
      MD_CONFIG: .github/md_config.json
      DOC_SRC: docs
      MD_LINT_CONFIG: .markdownlint.yaml
  build:
    uses: stakater/.github/.github/workflows/pull_request_container_build.yaml@v0.0.163
    with:
      DOCKER_FILE_PATH: Dockerfile-docs
      CONTAINER_REGISTRY_URL: ghcr.io/stakater
      PUSH_IMAGE: false
    secrets:
      CONTAINER_REGISTRY_USERNAME: ${{ github.actor }}
      CONTAINER_REGISTRY_PASSWORD: ${{ secrets.GHCR_TOKEN }}
      SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}
```
.github/workflows/push-helm-chart.yaml (new file, 123 lines)

```yaml
name: Push Helm Chart

# TODO: fix: workflows have a problem where only code owners' PRs get the actions running

on:
  pull_request:
    types:
      - closed
    branches:
      - master
    paths:
      - 'deployments/kubernetes/chart/reloader/**'
      - '.github/workflows/push-helm-chart.yaml'
      - '.github/workflows/release-helm-chart.yaml'

env:
  HELM_REGISTRY_URL: "https://stakater.github.io/stakater-charts"
  REGISTRY: ghcr.io # container registry

jobs:
  verify-and-push-helm-chart:

    permissions:
      contents: read
      id-token: write # needed for signing the images with GitHub OIDC Token
      packages: write # for pushing and signing container images

    name: Verify and Push Helm Chart
    if: ${{ (github.event.pull_request.merged == true) && (contains(github.event.pull_request.labels.*.name, 'release/helm-chart')) }}
    runs-on: ubuntu-latest

    steps:
      - name: Check out code
        uses: actions/checkout@v5
        with:
          token: ${{ secrets.PUBLISH_TOKEN }}
          fetch-depth: 0 # otherwise, you will fail to push refs to dest repo
          submodules: recursive

      # Setting up helm binary
      - name: Set up Helm
        uses: azure/setup-helm@v4
        with:
          version: v3.11.3

      - name: Add Stakater Helm Repo
        run: |
          helm repo add stakater https://stakater.github.io/stakater-charts

      - name: Get version for chart from helm repo
        id: chart_eval
        run: |
          current_chart_version=$(helm search repo stakater/reloader | tail -n 1 | awk '{print $2}')
          echo "CURRENT_CHART_VERSION=$(echo ${current_chart_version})" >> $GITHUB_OUTPUT

      - name: Get Updated Chart version from Chart.yaml
        uses: mikefarah/yq@master
        id: new_chart_version
        with:
          cmd: yq e '.version' deployments/kubernetes/chart/reloader/Chart.yaml

      - name: Check Version
        uses: aleoyakas/check-semver-increased-action@v1
        id: check-version
        with:
          current-version: ${{ steps.new_chart_version.outputs.result }}
          previous-version: ${{ steps.chart_eval.outputs.CURRENT_CHART_VERSION }}

      - name: Fail if Helm Chart version isn't updated
        if: steps.check-version.outputs.is-version-increased != 'true'
        run: |
          echo "Helm Chart Version wasn't updated"
          exit 1

      - name: Install Cosign
        uses: sigstore/cosign-installer@v4.0.0

      - name: Login to GHCR Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: stakater-user
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Publish Helm chart to ghcr.io
        run: |
          helm package ./deployments/kubernetes/chart/reloader --destination ./packaged-chart
          helm push ./packaged-chart/*.tgz oci://ghcr.io/stakater/charts
          rm -rf ./packaged-chart

      - name: Sign artifacts with Cosign
        run: cosign sign --yes ghcr.io/stakater/charts/reloader:${{ steps.new_chart_version.outputs.result }}

      - name: Publish Helm chart to gh-pages
        uses: stefanprodan/helm-gh-pages@master
        with:
          branch: master
          repository: stakater-charts
          target_dir: docs
          token: ${{ secrets.GHCR_TOKEN }}
          charts_dir: deployments/kubernetes/chart/
          charts_url: ${{ env.HELM_REGISTRY_URL }}
          owner: stakater
          linting: on
          commit_username: stakater-user
          commit_email: stakater@gmail.com

      - name: Push new chart tag
        uses: anothrNick/github-tag-action@1.75.0
        env:
          GITHUB_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
          WITH_V: false
          CUSTOM_TAG: chart-v${{ steps.new_chart_version.outputs.result }}

      - name: Notify Slack
        uses: 8398a7/action-slack@v3
        if: always() # Pick up events even if the job fails or is canceled.
        with:
          status: ${{ job.status }}
          fields: repo,author,action,eventName,ref,workflow
        env:
          GITHUB_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
          SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}
```
.github/workflows/push-pr-image.yaml (new file, 91 lines)

```yaml
name: Push PR Image on Label

on:
  pull_request:
    branches:
      - master
    types: [ labeled ]
    paths:
      - '!.markdownlint.yaml'
      - '!.vale.ini'
      - '!Dockerfile-docs'
      - '!docs-nginx.conf'
      - '!docs/**'
      - '!theme_common'
      - '!theme_override'
      - '!deployments/kubernetes/chart/reloader/**'

env:
  DOCKER_FILE_PATH: Dockerfile
  REGISTRY: ghcr.io

jobs:

  build-and-push-pr-image:
    permissions:
      contents: read

    runs-on: ubuntu-latest
    name: Build and Push PR Image
    if: ${{ github.event.label.name == 'build-and-push-pr-image' }}
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        with:
          ref: ${{github.event.pull_request.head.sha}}
          fetch-depth: 0

      - name: Set up Go
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
          check-latest: true
          cache: true

      - name: Install Dependencies
        run: |
          make install

      - name: Run golangci-lint
        run: make lint

      - name: Generate Tags
        id: generate_tag
        run: |
          sha=${{ github.event.pull_request.head.sha }}
          tag="SNAPSHOT-PR-${{ github.event.pull_request.number }}-${sha:0:8}"
          echo "GIT_TAG=$(echo ${tag})" >> $GITHUB_OUTPUT

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Generate image repository path for ghcr registry
        run: |
          echo GHCR_IMAGE_REPOSITORY=${{env.REGISTRY}}/$(echo ${{ github.repository }} | tr '[:upper:]' '[:lower:]') >> $GITHUB_ENV

      - name: Login to ghcr registry
        uses: docker/login-action@v3
        with:
          registry: ${{env.REGISTRY}}
          username: stakater-user
          password: ${{secrets.GITHUB_TOKEN}}

      - name: Build Docker Image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ${{ env.DOCKER_FILE_PATH }}
          pull: true
          push: true
          build-args: BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
          cache-to: type=inline
          platforms: linux/amd64,linux/arm,linux/arm64
          tags: |
            ${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.GIT_TAG }}
          labels: |
            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}
```
.github/workflows/push.yaml (modified, 185 diff lines; the capture ends mid-file)

```diff
@@ -1,37 +1,49 @@
 name: Push
 
 on:
-  push:
+  pull_request:
+    types:
+      - closed
     branches:
       - master
       - 'v**'
 
 env:
   DOCKER_FILE_PATH: Dockerfile
   DOCKER_UBI_FILE_PATH: Dockerfile.ubi
-  KUBERNETES_VERSION: "1.19.0"
-  KIND_VERSION: "0.17.0"
+  KUBERNETES_VERSION: "1.30.0"
+  KIND_VERSION: "0.23.0"
   HELM_REGISTRY_URL: "https://stakater.github.io/stakater-charts"
   REGISTRY: ghcr.io
+  RELOADER_EDITION: oss
 
 jobs:
   build:
 
+    permissions:
+      contents: read
+      packages: write # to push artifacts to `ghcr.io`
+
     name: Build
-    if: "! contains(toJSON(github.event.commits.*.message), '[skip-ci]')"
+    if: github.event.pull_request.merged == true
     runs-on: ubuntu-latest
 
     steps:
       - name: Check out code
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
         with:
-          token: ${{ secrets.STAKATER_GITHUB_TOKEN }}
+          token: ${{ secrets.PUBLISH_TOKEN }}
           fetch-depth: 0 # otherwise, you will fail to push refs to dest repo
           submodules: recursive
 
       # Setting up helm binary
       - name: Set up Helm
-        uses: azure/setup-helm@v3
+        uses: azure/setup-helm@v4
        with:
           version: v3.11.3
 
       - name: Set up Go
-        uses: actions/setup-go@v4
+        uses: actions/setup-go@v6
         with:
           go-version-file: 'go.mod'
           check-latest: true
@@ -42,18 +54,13 @@ jobs:
           make install
 
       - name: Run golangci-lint
-        uses: golangci/golangci-lint-action@v3
-        with:
-          version: v1.51.1
-          only-new-issues: false
-          args: --timeout 10m
+        run: make lint
 
       - name: Install kubectl
         run: |
           curl -LO "https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kubectl"
           sudo install ./kubectl /usr/local/bin/ && rm kubectl
-          kubectl version --short --client
-          kubectl version --short --client | grep -q ${KUBERNETES_VERSION}
+          kubectl version --client=true
 
       - name: Install Kind
         run: |
@@ -70,15 +77,6 @@ jobs:
       - name: Test
         run: make test
 
-      - name: Generate Tag
-        id: generate_tag
-        uses: anothrNick/github-tag-action@1.67.0
-        env:
-          GITHUB_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
-          WITH_V: true
-          DEFAULT_BUMP: patch
-          DRY_RUN: true
-
       - name: Set up QEMU
         uses: docker/setup-qemu-action@v3
 
@@ -90,30 +88,38 @@ jobs:
         with:
           username: ${{ secrets.STAKATER_DOCKERHUB_USERNAME }}
           password: ${{ secrets.STAKATER_DOCKERHUB_PASSWORD }}
 
+      - name: Create timestamp
+        id: prep
+        run: echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
+
       - name: Generate image repository path for Docker registry
         run: |
           echo DOCKER_IMAGE_REPOSITORY=$(echo ${{ github.repository }} | tr '[:upper:]' '[:lower:]') >> $GITHUB_ENV
 
       - name: Build and Push Docker Image to Docker registry
-        uses: docker/build-push-action@v5
+        uses: docker/build-push-action@v6
         with:
           context: .
           file: ${{ env.DOCKER_FILE_PATH }}
           pull: true
           push: true
-          build-args: BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
+          build-args: |
+            VERSION=merge-${{ github.event.number }}
+            COMMIT=${{ github.sha }}
+            BUILD_DATE=${{ steps.prep.outputs.created }}
+            EDITION=${{ env.RELOADER_EDITION }}
+            BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
           cache-to: type=inline
           platforms: linux/amd64,linux/arm,linux/arm64
           tags: |
-            ${{ env.DOCKER_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.new_tag }}
+            ${{ env.DOCKER_IMAGE_REPOSITORY }}:merge-${{ github.event.number }}
           labels: |
             org.opencontainers.image.source=${{ github.event.repository.clone_url }}
             org.opencontainers.image.created=${{ steps.prep.outputs.created }}
             org.opencontainers.image.revision=${{ github.sha }}
 
       - name: Build and Push Docker UBI Image to Docker registry
-        uses: docker/build-push-action@v5
+        uses: docker/build-push-action@v6
         with:
           context: .
           file: ${{ env.DOCKER_UBI_FILE_PATH }}
@@ -121,14 +127,13 @@ jobs:
           push: true
           build-args: |
             BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
-            BUILDER_IMAGE=${{ env.DOCKER_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.new_tag }}
+            BUILDER_IMAGE=${{ env.DOCKER_IMAGE_REPOSITORY }}:merge-${{ github.event.number }}
           cache-to: type=inline
           platforms: linux/amd64,linux/arm64
           tags: |
-            ${{ env.DOCKER_IMAGE_REPOSITORY }}:ubi-${{ steps.generate_tag.outputs.new_tag }}
+            ${{ env.DOCKER_IMAGE_REPOSITORY }}:merge-${{ github.event.number }}-ubi
           labels: |
             org.opencontainers.image.source=${{ github.event.repository.clone_url }}
             org.opencontainers.image.created=${{ steps.prep.outputs.created }}
             org.opencontainers.image.revision=${{ github.sha }}
 
       - name: Login to ghcr registry
@@ -143,24 +148,28 @@ jobs:
           echo GHCR_IMAGE_REPOSITORY=${{env.REGISTRY}}/$(echo ${{ github.repository }} | tr '[:upper:]' '[:lower:]') >> $GITHUB_ENV
 
       - name: Build and Push Docker Image to ghcr registry
-        uses: docker/build-push-action@v5
+        uses: docker/build-push-action@v6
         with:
           context: .
           file: ${{ env.DOCKER_FILE_PATH }}
           pull: true
           push: true
-          build-args: BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
+          build-args: |
+            VERSION=merge-${{ github.event.number }}
+            COMMIT=${{ github.sha }}
+            BUILD_DATE=${{ steps.prep.outputs.created }}
+            EDITION=${{ env.RELOADER_EDITION }}
+            BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
           cache-to: type=inline
           platforms: linux/amd64,linux/arm,linux/arm64
           tags: |
-            ${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.new_tag }}
+            ${{ env.GHCR_IMAGE_REPOSITORY }}:merge-${{ github.event.number }}
           labels: |
             org.opencontainers.image.source=${{ github.event.repository.clone_url }}
             org.opencontainers.image.created=${{ steps.prep.outputs.created }}
             org.opencontainers.image.revision=${{ github.sha }}
 
       - name: Build and Push Docker UBI Image to ghcr registry
-        uses: docker/build-push-action@v5
+        uses: docker/build-push-action@v6
         with:
           context: .
           file: ${{ env.DOCKER_UBI_FILE_PATH }}
@@ -168,86 +177,52 @@ jobs:
           push: true
           build-args: |
             BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
-            BUILDER_IMAGE=${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.new_tag }}
+            BUILDER_IMAGE=${{ env.GHCR_IMAGE_REPOSITORY }}:merge-${{ github.event.number }}
           cache-to: type=inline
           platforms: linux/amd64,linux/arm64
           tags: |
-            ${{ env.GHCR_IMAGE_REPOSITORY }}:ubi-${{ steps.generate_tag.outputs.new_tag }}
+            ${{ env.GHCR_IMAGE_REPOSITORY }}:merge-${{ github.event.number }}-ubi
           labels: |
             org.opencontainers.image.source=${{ github.event.repository.clone_url }}
             org.opencontainers.image.created=${{ steps.prep.outputs.created }}
             org.opencontainers.image.revision=${{ github.sha }}
 
       ##############################
       ## Add steps to generate required artifacts for a release here(helm chart, operator manifest etc.)
       ##############################
 
-      # Generate tag for operator without "v"
-      - name: Generate Operator Tag
-        id: generate_operator_tag
-        uses: anothrNick/github-tag-action@1.67.0
-        env:
-          GITHUB_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
-          WITH_V: false
-          DEFAULT_BUMP: patch
-          DRY_RUN: true
-
-      # Update chart tag to the latest semver tag
-      - name: Update Chart Version
-        env:
-          VERSION: ${{ steps.generate_operator_tag.outputs.new_tag }}
-        run: make bump-chart
-
-      - name: Helm Template
-        run: |
-          helm template reloader deployments/kubernetes/chart/reloader/ > deployments/kubernetes/reloader.yaml
-          helm template reloader deployments/kubernetes/chart/reloader/ --output-dir deployments/kubernetes/manifests && mv deployments/kubernetes/manifests/reloader/templates/* deployments/kubernetes/manifests/ && rm -r deployments/kubernetes/manifests/reloader
-
-      # Publish helm chart
-      - name: Publish Helm chart
-        uses: stefanprodan/helm-gh-pages@master
+      - uses: dorny/paths-filter@v3
+        id: filter
         with:
-          branch: master
-          repository: stakater-charts
-          target_dir: docs
-          token: ${{ secrets.STAKATER_GITHUB_TOKEN }}
-          charts_dir: deployments/kubernetes/chart/
-          charts_url: ${{ env.HELM_REGISTRY_URL }}
-          owner: stakater
-          linting: on
-          commit_username: stakater-user
-          commit_email: stakater@gmail.com
+          filters: |
+            docs:
+              - '.markdownlint.yaml'
+              - '.vale.ini'
+              - 'Dockerfile-docs'
+              - 'docs-nginx.conf'
+              - 'docs/**'
+              - 'README.md'
+              - 'theme_common'
+              - 'theme_override'
 
       # Commit back changes
+      - name: Log info about `.git` directory permissions
+        run: |
+          # Debug logging
+          echo "Disk usage: "
+          df -H
+
+          echo ".git files not owned by current user or current group:"
+          find .git ! -user $(id -u) -o ! -group $(id -g) | xargs ls -lah
+
       - name: Commit files
         run: |
           git config --local user.email "stakater@gmail.com"
           git config --local user.name "stakater-user"
           git status
           git add .
           git commit -m "[skip-ci] Update artifacts" -a
 
       - name: Push changes
```
|
||||
uses: ad-m/github-push-action@master
|
||||
# run only if 'docs' files were changed
|
||||
- name: Build and Push Docker Image for Docs to ghcr registry
|
||||
if: steps.filter.outputs.docs == 'true'
|
||||
uses: docker/build-push-action@v6
|
||||
with:
|
||||
github_token: ${{ secrets.STAKATER_GITHUB_TOKEN }}
|
||||
branch: ${{ github.ref }}
|
||||
context: .
|
||||
file: Dockerfile-docs
|
||||
pull: true
|
||||
push: true
|
||||
build-args: BUILD_PARAMETERS=${{ env.BUILD_PARAMETERS }}
|
||||
cache-to: type=inline
|
||||
tags: |
|
||||
${{ env.GHCR_IMAGE_REPOSITORY }}/docs:merge-${{ github.event.number }}
|
||||
labels: |
|
||||
org.opencontainers.image.source=${{ github.event.repository.clone_url }}
|
||||
org.opencontainers.image.revision=${{ github.sha }}
|
||||
|
||||
- name: Push Latest Tag
|
||||
uses: anothrNick/github-tag-action@1.67.0
|
||||
uses: anothrNick/github-tag-action@1.75.0
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
|
||||
WITH_V: true
|
||||
DEFAULT_BUMP: patch
|
||||
GITHUB_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
|
||||
WITH_V: false
|
||||
CUSTOM_TAG: merge-${{ github.event.number }}
|
||||
|
||||
- name: Notify Slack
|
||||
uses: 8398a7/action-slack@v3
|
||||
@@ -256,5 +231,5 @@ jobs:
|
||||
status: ${{ job.status }}
|
||||
fields: repo,author,action,eventName,ref,workflow
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
|
||||
GITHUB_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
|
||||
SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}
|
||||
|
||||

.github/workflows/release-helm-chart.yaml (new file, 39 lines)
@@ -0,0 +1,39 @@
name: Release Helm chart

on:
  push:
    tags:
      - "chart-v*"

permissions:
  contents: write

jobs:
  release-helm-chart:
    name: Release Helm chart
    runs-on: ubuntu-latest

    steps:
      - name: Check out code
        uses: actions/checkout@v5
        with:
          fetch-depth: 0

      - name: Create release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          tag: ${{ github.ref }}
        run: |
          gh release create "$tag" \
            --repo="$GITHUB_REPOSITORY" \
            --title="Helm chart ${tag#chart-}" \
            --generate-notes

      - name: Notify Slack
        uses: 8398a7/action-slack@v3
        if: always()
        with:
          status: ${{ job.status }}
          fields: repo,author,action,eventName,ref,workflow
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}

.github/workflows/release.yaml (200 lines changed)
@@ -5,38 +5,220 @@ on:
    tags:
      - "v*"

env:
  DOCKER_FILE_PATH: Dockerfile
  DOCKER_UBI_FILE_PATH: Dockerfile.ubi
  KUBERNETES_VERSION: "1.30.0"
  KIND_VERSION: "0.23.0"
  REGISTRY: ghcr.io
  RELOADER_EDITION: oss

jobs:
  build:
  release:

    permissions:
      contents: read
      packages: write # to push artifacts to `ghcr.io`

    name: GoReleaser build
    runs-on: ubuntu-latest

    steps:
      - name: Check out code
        uses: actions/checkout@v4
        uses: actions/checkout@v5
        with:
          fetch-depth: 0 # See: https://goreleaser.com/ci/actions/
          token: ${{ secrets.PUBLISH_TOKEN }}
          fetch-depth: 0 # otherwise, you will fail to push refs to dest repo
          submodules: recursive

      # Setting up helm binary
      - name: Set up Helm
        uses: azure/setup-helm@v4
        with:
          version: v3.11.3

      - name: Set up Go
        uses: actions/setup-go@v4
        uses: actions/setup-go@v6
        with:
          go-version-file: "go.mod"
          go-version-file: 'go.mod'
          check-latest: true
          cache: true

      - name: Install Dependencies
        run: |
          make install

      - name: Run golangci-lint
        run: make lint

      - name: Install kubectl
        run: |
          curl -LO "https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kubectl"
          sudo install ./kubectl /usr/local/bin/ && rm kubectl
          kubectl version --client=true

      - name: Install Kind
        run: |
          curl -L -o kind https://github.com/kubernetes-sigs/kind/releases/download/v${KIND_VERSION}/kind-linux-amd64
          sudo install ./kind /usr/local/bin && rm kind
          kind version
          kind version | grep -q ${KIND_VERSION}

      - name: Create Kind Cluster
        run: |
          kind create cluster
          kubectl cluster-info

      - name: Test
        run: make test

      - name: Get Tag from Github Ref
        id: generate_tag
        run: echo "RELEASE_VERSION=${GITHUB_REF#refs/*/}" >> $GITHUB_OUTPUT

      - name: Create timestamp
        id: prep
        run: echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Registry
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.STAKATER_DOCKERHUB_USERNAME }}
          password: ${{ secrets.STAKATER_DOCKERHUB_PASSWORD }}

      - name: Generate image repository path for Docker registry
        run: |
          echo DOCKER_IMAGE_REPOSITORY=$(echo ${{ github.repository }} | tr '[:upper:]' '[:lower:]') >> $GITHUB_ENV

      - name: Build and Push Docker Image to Docker registry
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ${{ env.DOCKER_FILE_PATH }}
          pull: true
          push: true
          cache-to: type=inline
          platforms: linux/amd64,linux/arm,linux/arm64
          tags: |
            ${{ env.DOCKER_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.RELEASE_VERSION }}
          build-args: |
            VERSION=${{ steps.generate_tag.outputs.RELEASE_VERSION }}
            COMMIT=${{ github.sha }}
            BUILD_DATE=${{ steps.prep.outputs.created }}
            EDITION=${{ env.RELOADER_EDITION }}
          labels: |
            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}

      - name: Build and Push Docker UBI Image to Docker registry
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ${{ env.DOCKER_UBI_FILE_PATH }}
          pull: true
          push: true
          build-args: |
            BUILDER_IMAGE=${{ env.DOCKER_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.RELEASE_VERSION }}
          cache-to: type=inline
          platforms: linux/amd64,linux/arm64
          tags: |
            ${{ env.DOCKER_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.RELEASE_VERSION }}-ubi
          labels: |
            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}

      - name: Login to ghcr registry
        uses: docker/login-action@v3
        with:
          registry: ${{env.REGISTRY}}
          username: stakater-user
          password: ${{secrets.GITHUB_TOKEN}}

      - name: Generate image repository path for ghcr registry
        run: |
          echo GHCR_IMAGE_REPOSITORY=${{env.REGISTRY}}/$(echo ${{ github.repository }} | tr '[:upper:]' '[:lower:]') >> $GITHUB_ENV

      # tag this image as latest as it will be used in plain manifests
      - name: Build and Push Docker Image to ghcr registry
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ${{ env.DOCKER_FILE_PATH }}
          pull: true
          push: true
          cache-to: type=inline
          platforms: linux/amd64,linux/arm,linux/arm64
          tags: |
            ${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.RELEASE_VERSION }},${{ env.GHCR_IMAGE_REPOSITORY }}:latest
          build-args: |
            VERSION=${{ steps.generate_tag.outputs.RELEASE_VERSION }}
            COMMIT=${{ github.sha }}
            BUILD_DATE=${{ steps.prep.outputs.created }}
            EDITION=${{ env.RELOADER_EDITION }}
          labels: |
            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}

      - name: Build and Push Docker UBI Image to ghcr registry
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ${{ env.DOCKER_UBI_FILE_PATH }}
          pull: true
          push: true
          build-args: |
            BUILDER_IMAGE=${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.RELEASE_VERSION }}
          cache-to: type=inline
          platforms: linux/amd64,linux/arm64
          tags: |
            ${{ env.GHCR_IMAGE_REPOSITORY }}:${{ steps.generate_tag.outputs.RELEASE_VERSION }}-ubi
          labels: |
            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}

      - name: Build and Push Docker Image for Docs to ghcr registry
        uses: docker/build-push-action@v6
        with:
          context: .
          file: Dockerfile-docs
          pull: true
          push: true
          cache-to: type=inline
          tags: |
            ${{ env.GHCR_IMAGE_REPOSITORY }}/docs:${{ steps.generate_tag.outputs.RELEASE_VERSION }}
          labels: |
            org.opencontainers.image.source=${{ github.event.repository.clone_url }}
            org.opencontainers.image.created=${{ steps.prep.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}

      ##############################
      ## Add steps to generate required artifacts for a release here(helm chart, operator manifest etc.)
      ##############################

      - name: Run GoReleaser
        uses: goreleaser/goreleaser-action@master
        with:
          version: latest
          args: release --rm-dist
          args: release --clean
        env:
          GITHUB_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
          GITHUB_TOKEN: ${{ secrets.PUBLISH_TOKEN }}

      - name: Notify Slack
        uses: 8398a7/action-slack@v3
        if: always()
        if: always() # Pick up events even if the job fails or is canceled.
        with:
          status: ${{ job.status }}
          fields: repo,author,action,eventName,ref,workflow
        env:
          GITHUB_TOKEN: ${{ secrets.STAKATER_GITHUB_TOKEN }}
          GITHUB_TOKEN: ${{ secrets.PUBLISH_TOKEN }}
          SLACK_WEBHOOK_URL: ${{ secrets.STAKATER_DELIVERY_SLACK_WEBHOOK }}

.github/workflows/reloader-enterprise-published.yml (new file, 17 lines)
@@ -0,0 +1,17 @@
name: Dispatch event for published release

on:
  release:
    types: [published]

jobs:
  dispatch:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger target repository workflow
        run: |
          curl -X POST \
            -H "Accept: application/vnd.github.v3+json" \
            -H "Authorization: token ${{ secrets.STAKATER_AB_TOKEN_FOR_RLDR }}" \
            https://api.github.com/repos/stakater-ab/reloader-enterprise/dispatches \
            -d '{"event_type":"release-published","client_payload":{"tag":"${{ github.event.release.tag_name }}"}}'

.github/workflows/reloader-enterprise-unpublished.yml (new file, 17 lines)
@@ -0,0 +1,17 @@
name: Dispatch event for unpublished release

on:
  release:
    types: [unpublished ]

jobs:
  dispatch:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger target repository workflow
        run: |
          curl -X POST \
            -H "Accept: application/vnd.github.v3+json" \
            -H "Authorization: token ${{ secrets.STAKATER_AB_TOKEN_FOR_RLDR }}" \
            https://api.github.com/repos/stakater-ab/reloader-enterprise/dispatches \
            -d '{"event_type":"release-unpublished","client_payload":{"tag":"${{ github.event.release.tag_name }}"}}'

.gitignore (13 lines changed)
@@ -10,4 +10,15 @@ _gopath/
vendor
dist
Reloader
!**/chart/reloader
!**/chart/reloader
!**/internal/reloader
*.tgz
styles/
site/
/mkdocs.yml
yq
bin
test/loadtest/results
test/loadtest/loadtest
# Temporary NFS files
.nfs*

.gitmodules (6 lines changed)
@@ -1,3 +1,3 @@
[submodule "vocabulary"]
	path = vocabulary
	url = git@github.com:stakater/vocabulary.git
[submodule "theme_common"]
	path = theme_common
	url = https://github.com/stakater/stakater-docs-mkdocs-theme.git

@@ -10,6 +10,7 @@ builds:
      - amd64
      - arm
      - arm64
      - ppc64le
archives:
  - name_template: "{{ .ProjectName }}_v{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ if .Arm }}v{{ .Arm }}{{ end }}"
snapshot:
@@ -17,10 +18,7 @@ snapshot:
checksum:
  name_template: "{{ .ProjectName }}_{{ .Version }}_checksums.txt"
changelog:
  sort: asc
  filters:
    exclude:
      - '^docs:'
      - '^test:'
  # It will be generated manually as part of making a new GitHub release
  disable: true
env_files:
  github_token: /home/jenkins/.apitoken/hub

@@ -3,4 +3,6 @@
    "MD013": false,
    "MD024": false,
    "MD029": { "style": one },
    "MD033": false,
    "MD041": false,
}

@@ -1,7 +1,8 @@
StylesPath = "vocabulary/styles"
StylesPath = styles
MinAlertLevel = warning

Vocab = "Stakater"
Packages = https://github.com/stakater/vale-package/releases/download/v0.0.87/Stakater.zip
Vocab = Stakater

# Only check MarkDown files
[*.md]

CODE_OF_CONDUCT.md (new file, 3 lines)
@@ -0,0 +1,3 @@
# Code of Conduct

Reloader follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).

Dockerfile (13 lines changed)
@@ -2,13 +2,18 @@ ARG BUILDER_IMAGE
ARG BASE_IMAGE

# Build the manager binary
FROM --platform=${BUILDPLATFORM} ${BUILDER_IMAGE:-golang:1.21.3} as builder
FROM --platform=${BUILDPLATFORM} ${BUILDER_IMAGE:-golang:1.26} AS builder

ARG TARGETOS
ARG TARGETARCH
ARG GOPROXY
ARG GOPRIVATE

ARG COMMIT
ARG VERSION
ARG BUILD_DATE
ARG EDITION=oss

WORKDIR /workspace

# Copy the Go Modules manifests
@@ -30,7 +35,11 @@ RUN CGO_ENABLED=0 \
    GOPROXY=${GOPROXY} \
    GOPRIVATE=${GOPRIVATE} \
    GO111MODULE=on \
    go build -mod=mod -a -o manager main.go
    go build -ldflags="-s -w -X github.com/stakater/Reloader/pkg/common.Version=${VERSION} \
    -X github.com/stakater/Reloader/pkg/common.Commit=${COMMIT} \
    -X github.com/stakater/Reloader/pkg/common.BuildDate=${BUILD_DATE} \
    -X github.com/stakater/Reloader/pkg/common.Edition=${EDITION}" \
    -installsuffix 'static' -mod=mod -a -o manager ./

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details

Dockerfile-docs (new file, 35 lines)
@@ -0,0 +1,35 @@
FROM python:3.14-alpine as builder

# set workdir
RUN mkdir -p $HOME/application
WORKDIR $HOME/application

# copy the entire application
COPY --chown=1001:root . .

RUN pip3 install -r theme_common/requirements.txt

# Combine Theme Resources
RUN python theme_common/scripts/combine_theme_resources.py -s theme_common/resources -ov theme_override/resources -o dist/_theme
# Produce mkdocs file
RUN python theme_common/scripts/combine_mkdocs_config_yaml.py theme_common/mkdocs.yml theme_override/mkdocs.yml mkdocs.yml

# build the docs
RUN mkdocs build

FROM nginxinc/nginx-unprivileged:1.29-alpine as deploy
COPY --from=builder $HOME/application/site/ /usr/share/nginx/html/reloader/
COPY docs-nginx.conf /etc/nginx/conf.d/default.conf

# set non-root user
USER 1001

LABEL name="Stakater Reloader Documentation" \
      maintainer="Stakater <hello@stakater.com>" \
      vendor="Stakater" \
      release="1" \
      summary="Documentation for Stakater Reloader"

EXPOSE 8080:8080/tcp

CMD ["nginx", "-g", "daemon off;"]

Dockerfile.ubi
@@ -1,18 +1,52 @@
ARG BUILDER_IMAGE
ARG BASE_IMAGE

FROM --platform=${BUILDPLATFORM} ${BUILDER_IMAGE} as SRC
FROM --platform=${BUILDPLATFORM} ${BUILDER_IMAGE} AS SRC

FROM ${BASE_IMAGE:-registry.access.redhat.com/ubi8/ubi-minimal:latest}
FROM ${BASE_IMAGE:-registry.access.redhat.com/ubi9/ubi:9.7} AS ubi
ARG TARGETARCH


RUN dnf update -y && dnf install -y binutils
# prep target rootfs for scratch container
WORKDIR /
RUN mkdir /image && \
    ln -s usr/bin /image/bin && \
    ln -s usr/sbin /image/sbin && \
    ln -s usr/lib64 /image/lib64 && \
    ln -s usr/lib /image/lib && \
    mkdir -p /image/{usr/bin,usr/lib64,usr/lib,root,home,proc,etc,sys,var,dev}

COPY ubi-build-files-${TARGETARCH}.txt /tmp
# Copy all the required files from the base UBI image into the image directory
# As the go binary is not statically compiled this includes everything needed for CGO to work, cacerts, tzdata and RH release files
# Filter existing files and exclude temporary entitlement files that may be removed during build
RUN while IFS= read -r file; do \
      [ -z "$file" ] && continue; \
      if [ -e "$file" ] || [ -L "$file" ]; then \
        echo "$file"; \
      fi; \
    done < /tmp/ubi-build-files-${TARGETARCH}.txt > /tmp/existing-files.txt && \
    if [ -s /tmp/existing-files.txt ]; then \
      tar -chf /tmp/files.tar --exclude='etc/pki/entitlement-host*' -T /tmp/existing-files.txt 2>&1 | grep -vE "(File removed before we read it|Cannot stat)" || true; \
      if [ -f /tmp/files.tar ]; then \
        tar xf /tmp/files.tar -C /image/ 2>/dev/null || true; \
        rm -f /tmp/files.tar; \
      fi; \
    fi && \
    rm -f /tmp/existing-files.txt

# Generate a rpm database which contains all the packages that you said were needed in ubi-build-files-*.txt
RUN rpm --root /image --initdb \
    && PACKAGES=$(rpm -qf $(cat /tmp/ubi-build-files-${TARGETARCH}.txt) | grep -v "is not owned by any package" | sort -u) \
    && echo dnf install -y 'dnf-command(download)' \
    && dnf download --destdir / ${PACKAGES} \
    && rpm --root /image -ivh --justdb --nodeps `for i in ${PACKAGES}; do echo $i.rpm; done`

FROM scratch
COPY --from=ubi /image/ /
COPY --from=SRC /manager .

# Update image
RUN microdnf update

USER 65532:65532

# Port for metrics and probes
EXPOSE 9090

MAINTAINERS (new file, 5 lines)
@@ -0,0 +1,5 @@
Bharath Nallapeta <bharath.nallapeta@stakater.com> (@bnallapeta)
Karl Johan Grahn <karl.johan@stakater.com> (@karl-johan-grahn)
Muhammad Sheryar Butt <sheryar@stakater.com> (@SheryarButt)
Muneeb Aijaz <muneeb@stakater.com> (@MuneebAijaz)
Tanveer Alam <tanveer.alam@stakater.com> (@tanalam2411)

Makefile (135 lines changed)
@@ -24,6 +24,73 @@ LDFLAGS =
GOPROXY ?=
GOPRIVATE ?=

## Location to install dependencies to
LOCALBIN ?= $(shell pwd)/bin
$(LOCALBIN):
	mkdir -p $(LOCALBIN)

## Tool Binaries
KUBECTL ?= kubectl
KUSTOMIZE ?= $(LOCALBIN)/kustomize-$(KUSTOMIZE_VERSION)
CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen-$(CONTROLLER_TOOLS_VERSION)
ENVTEST ?= $(LOCALBIN)/setup-envtest-$(ENVTEST_VERSION)
GOLANGCI_LINT = $(LOCALBIN)/golangci-lint-$(GOLANGCI_LINT_VERSION)
YQ ?= $(LOCALBIN)/yq

## Tool Versions
KUSTOMIZE_VERSION ?= v5.3.0
CONTROLLER_TOOLS_VERSION ?= v0.14.0
ENVTEST_VERSION ?= release-0.17
GOLANGCI_LINT_VERSION ?= v2.6.1

YQ_VERSION ?= v4.27.5
YQ_DOWNLOAD_URL = "https://github.com/mikefarah/yq/releases/download/$(YQ_VERSION)/yq_$(OS)_$(ARCH)"

.PHONY: yq
yq: $(YQ) ## Download YQ locally if needed
$(YQ):
	@test -d $(LOCALBIN) || mkdir -p $(LOCALBIN)
	@curl --retry 3 -fsSL $(YQ_DOWNLOAD_URL) -o $(YQ) || { \
		echo "Failed to download yq from $(YQ_DOWNLOAD_URL). Please check the URL and your network connection."; \
		exit 1; \
	}
	@chmod +x $(YQ)
	@echo "yq downloaded successfully to $(YQ)."

.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
$(KUSTOMIZE): $(LOCALBIN)
	$(call go-install-tool,$(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v5,$(KUSTOMIZE_VERSION))

.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
	$(call go-install-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen,$(CONTROLLER_TOOLS_VERSION))

.PHONY: envtest
envtest: $(ENVTEST) ## Download setup-envtest locally if necessary.
$(ENVTEST): $(LOCALBIN)
	$(call go-install-tool,$(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest,$(ENVTEST_VERSION))

.PHONY: golangci-lint
golangci-lint: $(GOLANGCI_LINT) ## Download golangci-lint locally if necessary.
$(GOLANGCI_LINT): $(LOCALBIN)
	$(call go-install-tool,$(GOLANGCI_LINT),github.com/golangci/golangci-lint/v2/cmd/golangci-lint,${GOLANGCI_LINT_VERSION})

# go-install-tool will 'go install' any package with custom target and name of binary, if it doesn't exist
# $1 - target path with name of binary (ideally with version)
# $2 - package url which can be installed
# $3 - specific version of package
define go-install-tool
@[ -f $(1) ] || { \
set -e; \
package=$(2)@$(3) ;\
echo "Downloading $${package}" ;\
GOBIN=$(LOCALBIN) go install $${package} ;\
mv "$$(echo "$(1)" | sed "s/-$(3)$$//")" $(1) ;\
}
endef

default: build test

install:
@@ -35,6 +102,9 @@ run:
build:
	"$(GOCMD)" build ${GOFLAGS} ${LDFLAGS} -o "${BINARY}"

lint: golangci-lint ## Run golangci-lint on the codebase
	$(GOLANGCI_LINT) run ./...

build-image:
	docker buildx build \
		--platform ${OS}/${ARCH} \
@@ -80,9 +150,62 @@ apply:

deploy: binary-image push apply

# Bump Chart
bump-chart:
	sed -i "s/^version:.*/version: $(VERSION)/" deployments/kubernetes/chart/reloader/Chart.yaml
	sed -i "s/^appVersion:.*/appVersion: v$(VERSION)/" deployments/kubernetes/chart/reloader/Chart.yaml
	sed -i "s/tag:.*/tag: v$(VERSION)/" deployments/kubernetes/chart/reloader/values.yaml
	sed -i "s/version:.*/version: v$(VERSION)/" deployments/kubernetes/chart/reloader/values.yaml
.PHONY: k8s-manifests
k8s-manifests: $(KUSTOMIZE) ## Generate k8s manifests using Kustomize from 'manifests' folder
	$(KUSTOMIZE) build ./deployments/kubernetes/ -o ./deployments/kubernetes/reloader.yaml

.PHONY: update-manifests-version
update-manifests-version: ## Generate k8s manifests using Kustomize from 'manifests' folder
	sed -i 's/image:.*/image: \"ghcr.io\/stakater\/reloader:v$(VERSION)"/g' deployments/kubernetes/manifests/deployment.yaml

YQ_VERSION = v4.42.1
YQ_BIN = $(shell pwd)/yq
CURRENT_ARCH := $(shell uname -m | sed 's/x86_64/amd64/' | sed 's/aarch64/arm64/')

YQ_DOWNLOAD_URL = "https://github.com/mikefarah/yq/releases/download/$(YQ_VERSION)/yq_linux_$(CURRENT_ARCH)"

yq-install:
	@echo "Downloading yq $(YQ_VERSION) for linux/$(CURRENT_ARCH)"
	@curl -sL $(YQ_DOWNLOAD_URL) -o $(YQ_BIN)
	@chmod +x $(YQ_BIN)
	@echo "yq $(YQ_VERSION) installed at $(YQ_BIN)"

# =============================================================================
# Load Testing
# =============================================================================

LOADTEST_BIN = test/loadtest/loadtest
LOADTEST_OLD_IMAGE ?= localhost/reloader:old
LOADTEST_NEW_IMAGE ?= localhost/reloader:new
LOADTEST_DURATION ?= 60
LOADTEST_SCENARIOS ?= all

.PHONY: loadtest-build loadtest-quick loadtest-full loadtest loadtest-clean

loadtest-build: ## Build loadtest binary
	cd test/loadtest && $(GOCMD) build -o loadtest ./cmd/loadtest

loadtest-quick: loadtest-build ## Run quick load tests (S1, S4, S6)
	cd test/loadtest && ./loadtest run \
		--old-image=$(LOADTEST_OLD_IMAGE) \
		--new-image=$(LOADTEST_NEW_IMAGE) \
		--scenario=S1,S4,S6 \
		--duration=$(LOADTEST_DURATION)

loadtest-full: loadtest-build ## Run full load test suite
	cd test/loadtest && ./loadtest run \
		--old-image=$(LOADTEST_OLD_IMAGE) \
		--new-image=$(LOADTEST_NEW_IMAGE) \
		--scenario=all \
		--duration=$(LOADTEST_DURATION)

loadtest: loadtest-build ## Run load tests with configurable scenarios (default: all)
	cd test/loadtest && ./loadtest run \
		--old-image=$(LOADTEST_OLD_IMAGE) \
		--new-image=$(LOADTEST_NEW_IMAGE) \
		--scenario=$(LOADTEST_SCENARIOS) \
		--duration=$(LOADTEST_DURATION)

loadtest-clean: ## Clean loadtest binary and results
	rm -f $(LOADTEST_BIN)
	rm -rf test/loadtest/results

README.md (660 lines changed)
@@ -1,5 +1,8 @@
#  Reloader
|
||||
<p align="center">
|
||||
<img src="assets/web/reloader.jpg" alt="Reloader" width="40%"/>
|
||||
</p>
|
||||
|
||||
[](https://github.com/sponsors/stakater?utm_source=github&utm_medium=readme&utm_campaign=reloader)
|
||||
[](https://goreportcard.com/report/github.com/stakater/reloader)
|
||||
[](https://godoc.org/github.com/stakater/reloader)
|
||||
[](https://github.com/stakater/reloader/releases/latest)
|
||||
@@ -7,337 +10,448 @@
|
||||
[](https://hub.docker.com/r/stakater/reloader/)
|
||||
[](https://hub.docker.com/r/stakater/reloader/)
|
||||
[](LICENSE)
|
||||
[](https://stakater.com/?utm_source=Reloader&utm_medium=github)
|
||||
|
||||
## Problem
|
||||
## 🔁 What is Reloader?
|
||||
|
||||
We would like to watch if some change happens in `ConfigMap` and/or `Secret`; then perform a rolling upgrade on relevant `DeploymentConfig`, `Deployment`, `Daemonset`, `Statefulset` and `Rollout`
|
||||
Reloader is a Kubernetes controller that automatically triggers rollouts of workloads (like Deployments, StatefulSets, and more) whenever referenced `Secrets`, `ConfigMaps` or **optionally CSI-mounted secrets** are updated.
|
||||
|
||||
## Solution
|
||||
In a traditional Kubernetes setup, updating a `Secret` or `ConfigMap` does not automatically restart or redeploy your workloads. This can lead to stale configurations running in production, especially when dealing with dynamic values like credentials, feature flags, or environment configs.
|
||||
|
||||
Reloader can watch changes in `ConfigMap` and `Secret` and do rolling upgrades on Pods with their associated `DeploymentConfigs`, `Deployments`, `Daemonsets` `Statefulsets` and `Rollouts`.
|
||||
Reloader bridges that gap by ensuring your workloads stay in sync with configuration changes — automatically and safely.
|
||||
|
||||
## Enterprise Version
|
||||
## 🚀 Why Reloader?
|
||||
|
||||
Reloader is available in two different versions:
|
||||
- ✅ **Zero manual restarts**: No need to manually rollout workloads after config/secret changes.
|
||||
- 🔒 **Secure by design**: Ensure your apps always use the most up-to-date credentials or tokens.
|
||||
- 🛠️ **Flexible**: Works with all major workload types — Deployment, StatefulSet, Daemonset, ArgoRollout, and more.
|
||||
- ⚡ **Fast feedback loop**: Ideal for CI/CD pipelines where secrets/configs change frequently.
|
||||
- 🔄 **Out-of-the-box integration**: Just label your workloads and let Reloader do the rest.
|
||||
|
||||
1. Open Source Version
|
||||
1. Enterprise Version, which includes:
|
||||
- SLA (Service Level Agreement) for support and unique requests
|
||||
- Slack support
|
||||
- Certified images
|
||||
## 🔧 How It Works?
|
||||
|
||||
Contact [`sales@stakater.com`](mailto:sales@stakater.com) for info about Reloader Enterprise.
|
||||
```mermaid
|
||||
flowchart LR
|
||||
ExternalSecret -->|Creates| Secret
|
||||
SealedSecret -->|Creates| Secret
|
||||
Certificate -->|Creates| Secret
|
||||
Secret -->|Watched by| Reloader
|
||||
ConfigMap -->|Watched by| Reloader
|
||||
|
||||
## Compatibility
|
||||
Reloader -->|Triggers Rollout| Deployment
|
||||
Reloader -->|Triggers Rollout| DeploymentConfig
|
||||
Reloader -->|Triggers Rollout| Daemonset
|
||||
Reloader -->|Triggers Rollout| Statefulset
|
||||
Reloader -->|Triggers Rollout| ArgoRollout
|
||||
Reloader -->|Triggers Job| CronJob
|
||||
Reloader -->|Sends Notification| Slack,Teams,Webhook
|
||||
```
|
||||
|
||||
Reloader is compatible with Kubernetes >= 1.19
|
||||
- Sources like `ExternalSecret`, `SealedSecret`, or `Certificate` from `cert-manager` can create or manage Kubernetes `Secrets` — but they can also be created manually or delivered through GitOps workflows.
|
||||
- `Secrets` and `ConfigMaps` are watched by Reloader.
|
||||
- When changes are detected, Reloader automatically triggers a rollout of the associated workloads, ensuring your app always runs with the latest configuration.
|
||||
|
||||
## How to use Reloader
|
||||
## ⚡ Quick Start
|
||||
|
||||
For a `Deployment` called `foo` have a `ConfigMap` called `foo-configmap` or `Secret` called `foo-secret` or both. Then add your annotation (by default `reloader.stakater.com/auto`) to main metadata of your `Deployment`
|
||||
### 1. Install Reloader
|
||||
|
||||
Follow any of these [installation options](#-installation).

### 2. Annotate Your Workload

To enable automatic reload for a Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: your-image
          envFrom:
            - configMapRef:
                name: my-config
            - secretRef:
                name: my-secret
```

This will discover deploymentconfigs/deployments/daemonsets/statefulset/rollouts automatically where `foo-configmap` or `foo-secret` is being used either via environment variable or from volume mount. And it will perform rolling upgrade on related pods when `foo-configmap` or `foo-secret` are updated.
This tells Reloader to watch the `ConfigMap` and `Secret` referenced in this deployment. When either is updated, it will trigger a rollout.

You can restrict this discovery to only `ConfigMap` or `Secret` objects that
are tagged with a special annotation. To take advantage of that, annotate
your deploymentconfigs/deployments/daemonsets/statefulset/rollouts like this:
## 🏢 Enterprise Version

Stakater offers an enterprise-grade version of Reloader with:

1. SLA-backed support
1. Certified images
1. Private Slack support

Contact [`sales@stakater.com`](mailto:sales@stakater.com) for info about Reloader Enterprise.

## 🧩 Usage

Reloader supports multiple annotation-based controls to let you **customize when and how your Kubernetes workloads are reloaded** upon changes in `Secrets` or `ConfigMaps`.

Kubernetes does not trigger pod restarts when a referenced `Secret` or `ConfigMap` is updated. Reloader bridges this gap by watching for changes and automatically performing rollouts — but it gives you full control via annotations, so you can:

- Reload **all** resources by default
- Restrict reloads to only **Secrets** or only **ConfigMaps**
- Watch only **specific resources**
- Use **opt-in via tagging** (`search` + `match`)
- Exclude workloads you don’t want to reload

### 1. 🔁 Automatic Reload (Default)

Use these annotations to automatically restart the workload when referenced `Secrets` or `ConfigMaps` change.

| Annotation | Description |
|--------------------------------------------|----------------------------------------------------------------------|
| `reloader.stakater.com/auto: "true"` | Reloads workload when any referenced ConfigMap or Secret changes |
| `secret.reloader.stakater.com/auto: "true"`| Reloads only when referenced Secret(s) change |
| `configmap.reloader.stakater.com/auto: "true"`| Reloads only when referenced ConfigMap(s) change |

### 2. 📛 Named Resource Reload (Specific Resource Annotations)

These annotations allow you to manually define which ConfigMaps or Secrets should trigger a reload, regardless of whether they're used in the pod spec.

| Annotation | Description |
|-----------------------------------------------------|--------------------------------------------------------------------------------------|
| `secret.reloader.stakater.com/reload: "my-secret"` | Reloads when specific Secret(s) change, regardless of how they're used |
| `configmap.reloader.stakater.com/reload: "my-config"`| Reloads when specific ConfigMap(s) change, regardless of how they're used |

#### Use when

1. ✅ This is useful in tightly scoped scenarios where config is shared but reloads are only relevant in certain cases.
1. ✅ Use this when you know exactly which resource(s) matter and want to avoid auto-discovery or searching altogether (see the sketch below).
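For example, a minimal sketch (the workload and Secret names here are hypothetical):

```yaml
kind: Deployment
metadata:
  name: payment-api
  annotations:
    # Reload whenever either named Secret changes, even if the Secrets
    # are not referenced anywhere in the pod spec
    secret.reloader.stakater.com/reload: "payment-db-credentials,payment-tls"
```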
### 3. 🎯 Targeted Reload (Match + Search Annotations)

This pattern allows fine-grained reload control — workloads only restart if the Secret/ConfigMap is both:

1. Referenced by the workload
1. Explicitly annotated with `match: true`

| Annotation | Applies To | Description |
|-------------------------------------------|--------------|-----------------------------------------------------------------------------|
| `reloader.stakater.com/search: "true"` | Workload | Enables search mode (only reloads if matching secrets/configMaps are found) |
| `reloader.stakater.com/match: "true"` | ConfigMap/Secret | Marks the config/secret as eligible for reload in search mode |

#### How it works

1. The workload must have: `reloader.stakater.com/search: "true"`
1. The ConfigMap or Secret must have: `reloader.stakater.com/match: "true"`
1. The resource (ConfigMap or Secret) must also be referenced in the workload (via env, `volumeMount`, etc.)

#### Use when

1. ✅ You want to reload a workload only if it references a ConfigMap or Secret that has been explicitly tagged with `reloader.stakater.com/match: "true"`.
1. ✅ Use this when you want full control over which shared or system-wide resources trigger reloads. Great in multi-tenant clusters or shared configs (see the sketch below).
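A minimal sketch of the pairing (names are hypothetical): the workload opts into search mode, and only resources tagged with `match` can trigger its reload.

```yaml
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/search: "true"
---
kind: ConfigMap
metadata:
  # must also be referenced by my-app via env or volume mount
  name: shared-config
  annotations:
    reloader.stakater.com/match: "true"
```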
### ⛔ Resource-Level Ignore Annotation

When you need to prevent specific ConfigMaps or Secrets from triggering any reloads, use the ignore annotation on the resource itself:

```yaml
apiVersion: v1
kind: ConfigMap # or Secret
metadata:
  name: my-config
  annotations:
    reloader.stakater.com/ignore: "true"
```

This instructs Reloader to skip all reload logic for that resource across all workloads.

### 4. ⚙️ Workload-Specific Rollout Strategy (Argo Rollouts Only)

Note: This is only applicable when using [Argo Rollouts](https://argoproj.github.io/argo-rollouts/). It is ignored for standard Kubernetes `Deployments`, `StatefulSets`, or `DaemonSets`. To use this feature, Argo Rollouts support must be enabled in Reloader (for example via `--is-argo-rollouts=true`).

By default, Reloader triggers the Argo Rollout controller to perform a standard rollout by updating the pod template. This works well in most cases; however, because it modifies the workload spec, GitOps tools like ArgoCD will detect the change as configuration drift and mark your application as OutOfSync.

To avoid that, you can switch to the **restart** strategy, which simply restarts the pod without changing the pod template.

```yaml
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/search: "true"
    reloader.stakater.com/rollout-strategy: "restart"
```

| Value | Behavior |
|--------------------|-----------------------------------------------------------------|
| `rollout` (default) | Updates pod template metadata to trigger a rollout |
| `restart` | Deletes the pod to restart it without patching the template |

✅ Use `restart` if:

1. You're using GitOps and want to avoid drift
1. You want a quick restart without changing the workload spec
1. Your platform restricts metadata changes

This setting affects Argo Rollouts behavior, not Argo CD sync settings.

### 5. ❗ Annotation Behavior Rules & Compatibility

- `reloader.stakater.com/auto` and `reloader.stakater.com/search` **cannot be used together** — the `auto` annotation takes precedence.
- If both `auto` and its typed versions (`secret.reloader.stakater.com/auto`, `configmap.reloader.stakater.com/auto`) are used, **only one needs to be true** to trigger a reload.
- Setting `reloader.stakater.com/auto: "false"` explicitly disables reload for that workload.
- If `--auto-reload-all` is enabled on the controller:
    - All workloads are treated as if they have `auto: "true"` unless they explicitly set it to `"false"` (see the sketch below).
    - Missing or unrecognized annotation values are treated as `"false"`.
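For instance, when the controller runs with `--auto-reload-all`, a single workload can still opt out explicitly (a sketch; the workload name is hypothetical):

```yaml
kind: Deployment
metadata:
  name: legacy-app
  annotations:
    # Overrides the controller-wide default: this workload is never auto-reloaded
    reloader.stakater.com/auto: "false"
```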
### 6. 🔔 Alerting on Reload

Reloader can optionally **send alerts** whenever it triggers a rolling upgrade for a workload (e.g., `Deployment`, `StatefulSet`, etc.).

These alerts are sent to a configured **webhook endpoint**, which can be a generic receiver or services like Slack, Microsoft Teams or Google Chat.

To enable this feature, update the `reloader.env.secret` section in your `values.yaml` (when installing via Helm):

```yaml
reloader:
  deployment:
    env:
      secret:
        ALERT_ON_RELOAD: "true" # Enable alerting (default: false)
        ALERT_SINK: "slack" # Options: slack, teams, gchat or webhook (default: webhook)
        ALERT_WEBHOOK_URL: "<your-webhook-url>" # Required if ALERT_ON_RELOAD is true
        ALERT_ADDITIONAL_INFO: "Triggered by Reloader in staging environment"
```

### 7. ⏸️ Pause Deployments

This feature allows you to pause rollouts for a deployment for a specified duration, helping to prevent multiple restarts when several ConfigMaps or Secrets are updated in quick succession.

| Annotation | Applies To | Description |
|---------------------------------------------------------|--------------|-----------------------------------------------------------------------------|
| `deployment.reloader.stakater.com/pause-period: "5m"` | Deployment | Pauses reloads for the specified period (e.g., `5m`, `1h`) |

#### How it works

1. Add the `deployment.reloader.stakater.com/pause-period` annotation to your Deployment, specifying the pause duration (e.g., `"5m"` for five minutes).
1. When a watched ConfigMap or Secret changes, Reloader will still trigger a reload event, but if the deployment is paused, the rollout will have no effect until the pause period has elapsed.
1. This avoids repeated restarts if multiple resources are updated close together.

#### Use when

1. ✅ Your deployment references multiple ConfigMaps or Secrets that may be updated at the same time.
1. ✅ You want to minimize unnecessary rollouts and reduce downtime caused by back-to-back configuration changes (see the sketch below).
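A minimal sketch (the name and duration are illustrative):

```yaml
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"
    # Further reloads are held off for 5 minutes after a rollout is triggered
    deployment.reloader.stakater.com/pause-period: "5m"
```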
### 8. 🔐 CSI Secret Provider Support

Reloader supports the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/), which allows mounting secrets from external secret stores (like AWS Secrets Manager, Azure Key Vault, HashiCorp Vault) directly into pods.
Unlike Kubernetes Secret objects, CSI-mounted secrets do not always trigger native Kubernetes update events. Reloader solves this by watching CSI status resources and restarting affected workloads when mounted secret versions change.

#### How it works

When secret rotation is enabled, the Secrets Store CSI Driver updates a Kubernetes resource called: `SecretProviderClassPodStatus`

This resource reflects the currently mounted secret versions for a pod.
Reloader watches these updates and triggers a rollout when a change is detected.

#### Prerequisites

- Secrets Store CSI Driver must be installed in your cluster
- Secret rotation enabled in the CSI driver.
- Enable CSI integration in Reloader: `--enable-csi-integration=true`

#### Annotations for CSI-mounted Secrets

| Annotation | Description |
|------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| `reloader.stakater.com/auto: "true"` | Global Discovery: Automatically discovers and reloads the workload when any mounted ConfigMap or Secret is updated. |
| `secretproviderclass.reloader.stakater.com/auto: 'true'` | CSI Discovery: Specifically watches for updates to all SecretProviderClasses used by the workload (CSI driver integration). |
| `secretproviderclass.reloader.stakater.com/reload: "my-secretproviderclass"` | Targeted Reload: Only reloads the workload when the specifically named SecretProviderClass(es) are updated. |
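As a sketch, a workload that should follow all of its SecretProviderClasses (the name is hypothetical; CSI integration must be enabled as described above):

```yaml
kind: Deployment
metadata:
  name: vault-demo
  annotations:
    secretproviderclass.reloader.stakater.com/auto: "true"
```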
Reloader monitors changes at the **per-secret level** by watching the `SecretProviderClassPodStatus`. Make sure each secret you want to monitor is properly defined with a `secretKey` in your `SecretProviderClass`:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-reloader-demo
  namespace: test
spec:
  template:
    provider: vault
    parameters:
      vaultAddress: "http://vault.vault.svc:8200"
      vaultSkipTLSVerify: "true"
      roleName: "demo-role"
      objects: |
        - objectName: "password"
          secretPath: "secret/data/reloader-demo"
          secretKey: "password"
```

and Reloader will trigger the rolling upgrade upon modification of any `ConfigMap` or `Secret` annotated like this:
***Important***: Reloader tracks changes to individual secrets (identified by `secretKey`). If your SecretProviderClass doesn't specify `secretKey` for each object, Reloader may not detect updates correctly.

```yaml
kind: ConfigMap
metadata:
  annotations:
    reloader.stakater.com/match: "true"
data:
  key: value
```

#### Notes & Limitations

- Reloader reacts to CSI status changes, not direct updates to external secret stores
- Secret rotation must be enabled in the CSI driver for updates to be detected
- CSI limitations (such as `subPath` mounts) still apply and may require pod restarts
- If secrets are synced to Kubernetes Secret objects, standard Reloader behavior applies and CSI support may not be required
## 🚀 Installation

### 1. 📦 Helm

Reloader can be installed in multiple ways depending on your Kubernetes setup and preference. Below are the supported methods:

```bash
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm install reloader stakater/reloader
```

provided the secret/configmap is being used in an environment variable, or a
volume mount.
➡️ See full Helm configuration in the [chart README](./deployments/kubernetes/chart/reloader/README.md).

Please note that `reloader.stakater.com/search` and
`reloader.stakater.com/auto` do not work together. If you have the
`reloader.stakater.com/auto: "true"` annotation on your deployment, then it
will always restart upon a change in configmaps or secrets it uses, regardless
of whether they have the `reloader.stakater.com/match: "true"` annotation or
not.
### 2. 📄 Vanilla Manifests

We can also specify a specific configmap or secret which would trigger rolling upgrade only upon change in our specified configmap or secret, this way, it will not trigger rolling upgrade upon changes in all configmaps or secrets used in a `deploymentconfig`, `deployment`, `daemonset`, `statefulset` or `rollout`.
To do this either set the auto annotation to `"false"` (`reloader.stakater.com/auto: "false"`) or remove it altogether, and use annotations for [Configmap](.#Configmap) or [Secret](.#Secret).

It's also possible to enable auto reloading for all resources, by setting the `--auto-reload-all` flag.
In this case, all resources that do not have the auto annotation set to `"false"`, will be reloaded automatically when their ConfigMaps or Secrets are updated.
Notice that setting the auto annotation to an undefined value counts as false as well.

### Configmap

To perform rolling upgrade when change happens only on specific configmaps use below annotation.

For a `Deployment` called `foo` have a `ConfigMap` called `foo-configmap`. Then add this annotation to main metadata of your `Deployment`

```yaml
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
spec:
  template:
    metadata:
```

Use comma separated list to define multiple configmaps.

```yaml
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap,bar-configmap,baz-configmap"
spec:
  template:
    metadata:
```

### Secret

To perform rolling upgrade when change happens only on specific secrets use below annotation.

For a `Deployment` called `foo` have a `Secret` called `foo-secret`. Then add this annotation to main metadata of your `Deployment`

```yaml
kind: Deployment
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "foo-secret"
spec:
  template:
    metadata:
```

Use comma separated list to define multiple secrets.

```yaml
kind: Deployment
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "foo-secret,bar-secret,baz-secret"
spec:
  template:
    metadata:
```

### NOTES

- Reloader also supports [sealed-secrets](https://github.com/bitnami-labs/sealed-secrets). [Here](docs/Reloader-with-Sealed-Secrets.md) are the steps to use sealed-secrets with Reloader.
- For [`rollouts`](https://github.com/argoproj/argo-rollouts/) Reloader simply triggers a change; it is up to you how you configure the `rollout` strategy.
- `reloader.stakater.com/auto: "true"` will only reload the pod if the configmap or secret is used (as a volume mount or as an env) in `DeploymentConfigs/Deployment/Daemonsets/Statefulsets`
- `secret.reloader.stakater.com/reload` or `configmap.reloader.stakater.com/reload` annotation will reload the pod upon changes in specified configmap or secret, irrespective of the usage of configmap or secret.
- you may override the auto annotation with the `--auto-annotation` flag
- you may override the search annotation with the `--auto-search-annotation` flag
  and the match annotation with the `--search-match-annotation` flag
- you may override the configmap annotation with the `--configmap-annotation` flag
- you may override the secret annotation with the `--secret-annotation` flag
- you may want to prevent watching certain namespaces with the `--namespaces-to-ignore` flag
- you may want to watch only a set of namespaces with certain labels by using the `--namespace-selector` flag
- you may want to watch only a set of secrets/configmaps with certain labels by using the `--resource-label-selector` flag
- you may want to prevent watching certain resources with the `--resources-to-ignore` flag
- you can configure logging in JSON format with the `--log-format=json` option
- you can configure the "reload strategy" with the `--reload-strategy=<strategy-name>` option (details below)

## Reload Strategies

Reloader supports multiple "reload" strategies for performing rolling upgrades to resources. The following list describes them:

- **env-vars**: When a tracked `configMap`/`secret` is updated, this strategy attaches a Reloader specific environment variable to any containers referencing the changed `configMap` or `secret` on the owning resource (e.g., `Deployment`, `StatefulSet`, etc.). This strategy can be specified with the `--reload-strategy=env-vars` argument. Note: This is the default reload strategy.
- **annotations**: When a tracked `configMap`/`secret` is updated, this strategy attaches a `reloader.stakater.com/last-reloaded-from` pod template annotation on the owning resource (e.g., `Deployment`, `StatefulSet`, etc.). This strategy is useful when using resource syncing tools like ArgoCD, since it will not cause these tools to detect configuration drift after a resource is reloaded. Note: Since the attached pod template annotation only tracks the last reload source, this strategy will reload any tracked resource should its `configMap` or `secret` be deleted and recreated. This strategy can be specified with the `--reload-strategy=annotations` argument (see the sketch after this list).
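The strategy is a controller-level flag, not an annotation. As a sketch, assuming the vanilla Deployment manifest (the container name may differ in your setup), it is set via the container arguments:

```yaml
spec:
  template:
    spec:
      containers:
        - name: reloader
          args:
            # Use pod template annotations instead of injected env vars
            - "--reload-strategy=annotations"
```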
## Deploying to Kubernetes

You can deploy Reloader by following methods:

### Vanilla Manifests

You can apply vanilla manifests by changing `RELEASE-NAME` placeholder provided in manifest with a proper value and apply it by running the command given below:
Apply raw Kubernetes manifests directly:

```bash
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
```

By default, Reloader gets deployed in the `default` namespace and watches for changes to `secrets` and `configmaps` in all namespaces.
### 3. 🧱 Vanilla Kustomize

Reloader can be configured to ignore the resources `secrets` and `configmaps` by passing the following arguments (`spec.template.spec.containers.args`) to its container:

| Argument | Description |
| -------------------------------- | -------------------- |
| --resources-to-ignore=configMaps | To ignore configMaps |
| --resources-to-ignore=secrets | To ignore secrets |

**Note:** Only one of these resources can be ignored at a time; trying to ignore both will cause an error in Reloader. A workaround for ignoring both resources is to scale the Reloader pods down to `0`.
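For example, a sketch of ignoring `configMaps` through the container arguments mentioned in the note above:

```yaml
spec:
  template:
    spec:
      containers:
        - name: reloader
          args:
            - "--resources-to-ignore=configMaps"
```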
Reloader can be configured to only watch secrets/configmaps with one or more labels using the `--resource-label-selector` parameter. Supported operators are `!, in, notin, ==, =, !=`, if no operator is found the 'exists' operator is inferred (i.e. key only). Additional examples of these selectors can be found in the [Kubernetes Docs](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors).
|
||||
|
||||
**Note:** The old `:` delimited key value mappings are deprecated and if provided will be translated to `key=value`. Likewise, if a wildcard value is provided (e.g. `key:*`) it will be translated to the standalone `key` which checks for key existence.
|
||||
|
||||
These selectors can be combined together, for example with:
|
||||
|
||||
```yaml
|
||||
--resource-label-selector=reloader=enabled,key-exists,another-label in (value1,value2,value3)
|
||||
```
Only configmaps or secrets labeled like the following will be watched:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  ...
  labels:
    reloader: enabled
    key-exists: "yes"
    another-label: value1
  ...
```
Reloader can be configured to only watch namespaces labeled with one or more labels using the `--namespace-selector` parameter. Supported operators are `!, in, notin, ==, =, !=`; if no operator is found, the 'exists' operator is inferred (i.e. key only). Additional examples of these selectors can be found in the [Kubernetes Docs](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors).

**Note:** The old `:` delimited key-value mappings are deprecated; if provided, they will be translated to `key=value`. Likewise, if a wildcard value is provided (e.g. `key:*`), it will be translated to the standalone `key`, which checks for key existence.

These selectors can be combined, for example:

```yaml
--namespace-selector=reloader=enabled,test=true
```
Only namespaces labeled as below would be watched and eligible for reloads:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  ...
  labels:
    reloader: enabled
    test: "true"
  ...
```
### Vanilla Kustomize

You can also apply the vanilla manifests using Reloader's built-in Kustomize support:

```bash
kubectl apply -k https://github.com/stakater/Reloader/deployments/kubernetes
```

As with the vanilla manifests, Reloader gets deployed in the `default` namespace and watches changes to `secrets` and `configmaps` in all namespaces.
### Custom Kustomize Setup

You can write your own `kustomization.yaml` using Reloader's as a base and add patches to tweak the configuration:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://github.com/stakater/Reloader/deployments/kubernetes

namespace: reloader
```
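If you need to tweak the configuration, patches can be layered on top of the base. A minimal sketch, assuming the base `Deployment` is named `reloader-reloader` and already defines `args` (verify both with `kubectl kustomize` before relying on this):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://github.com/stakater/Reloader/deployments/kubernetes

namespace: reloader

patches:
  - target:
      kind: Deployment
      name: reloader-reloader   # hypothetical name; check the rendered manifests
    patch: |-
      # Append a flag to the existing container args
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --log-format=json
```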
### Helm Charts

Alternatively, if you have configured Helm on your cluster, you can add Reloader from our public chart repository and deploy it via Helm using the commands below. Follow [this](docs/Helm2-to-Helm3.md) guide in case you have trouble migrating Reloader from Helm2 to Helm3.

```bash
helm repo add stakater https://stakater.github.io/stakater-charts

helm repo update

helm install stakater/reloader # For helm3 add --generate-name flag or set the release name
```

**Note:** By default, Reloader watches in all namespaces. To watch a single namespace, run the following command. It will install Reloader in the `test` namespace, where it will only watch `Deployments`, `DaemonSets`, `StatefulSets` and `Rollouts` in the `test` namespace:

```bash
helm install stakater/reloader --set reloader.watchGlobally=false --namespace test # For helm3 add --generate-name flag or set the release name
```

### Default Resource Requests and Limits

By default, Reloader is deployed with the following resource requests and limits:

```yaml
resources:
  limits:
    cpu: 150m
    memory: 512Mi
  requests:
    cpu: 10m
    memory: 128Mi
```
### Optional Runtime Configurations

These flags let you customize Reloader's behavior globally, at the Reloader controller level.

#### 1. 🔁 Reload Behavior
| Flag | Description |
|------|-------------|
| `--reload-on-create=true` | Reload workloads when a watched ConfigMap or Secret is created |
| `--reload-on-delete=true` | Reload workloads when a watched ConfigMap or Secret is deleted |
| `--auto-reload-all=true` | Automatically reload all workloads unless opted out (`auto: "false"`) |
| `--reload-strategy=env-vars` | Strategy to use for triggering reload (`env-vars` or `annotations`) |
| `--log-format=json` | Enable JSON-formatted logs for better machine readability |
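These flags can be combined on the Reloader container; a brief sketch in which only the `args` fragment of the Deployment is shown:

```yaml
args:
  - "--reload-on-create=true"
  - "--reload-on-delete=true"
  - "--auto-reload-all=true"
  - "--log-format=json"
```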
##### Reload Strategies

Reloader supports multiple strategies for triggering rolling updates when a watched `ConfigMap` or `Secret` changes. You can configure the strategy using the `--reload-strategy` flag.

| Strategy | Description |
|--------------|-------------|
| `env-vars` (default) | Adds a dummy environment variable to any container referencing the changed resource (e.g., `Deployment`, `StatefulSet`, etc.). This forces Kubernetes to perform a rolling update. |
| `annotations` | Adds a `reloader.stakater.com/last-reloaded-from` annotation to the pod template metadata. Ideal for GitOps tools like ArgoCD, as it avoids triggering unwanted sync diffs. |

- The `env-vars` strategy is the default and works in most setups.
- The `annotations` strategy is preferred in **GitOps environments** to prevent config drift in tools like ArgoCD or Flux.
- In `annotations` mode, a `ConfigMap` or `Secret` that is deleted and re-created will still trigger a reload (since previous state is not tracked).
#### 2. 🚫 Resource Filtering

| Flag | Description |
|------|-------------|
| `--resources-to-ignore=configmaps` | Ignore ConfigMaps (only one type can be ignored at a time) |
| `--resources-to-ignore=secrets` | Ignore Secrets (cannot be combined with ignoring ConfigMaps) |
| `--ignored-workload-types=jobs,cronjobs` | Ignore specific workload types from reload monitoring |
| `--resource-label-selector=key=value` | Only watch ConfigMaps/Secrets with matching labels |
> **⚠️ Note:**
>
> Only **one** resource type can be ignored at a time.
> Trying to ignore **both `configmaps` and `secrets`** will cause an error in Reloader.
> ✅ **Workaround:** Scale the Reloader deployment to `0` replicas if you want to disable it completely.

**💡 Workload Type Examples:**
```bash
# Ignore only Jobs
--ignored-workload-types=jobs

# Ignore only CronJobs
--ignored-workload-types=cronjobs

# Ignore both (comma-separated)
--ignored-workload-types=jobs,cronjobs
```

> **🔧 Use Case:** Ignoring workload types is useful when you don't want certain types of workloads to be automatically reloaded.
#### 3. 🧩 Namespace Filtering

| Flag | Description |
|------|-------------|
| `--namespace-selector='key=value'` <br /> <br />`--namespace-selector='key1=value1,key2=value2'` <br /> <br />`--namespace-selector='key in (value1,value2)'`| Watch only namespaces with matching labels. See [LIST and WATCH filtering](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering) for more details on label selectors |
| `--namespaces-to-ignore=ns1,ns2` | Skip specific namespaces from being watched |
#### 4. 📝 Annotation Key Overrides

These flags allow you to redefine the annotation keys used in your workloads or resources:

| Flag | Overrides |
|------|-----------|
| `--auto-annotation` | Overrides `reloader.stakater.com/auto` |
| `--secret-auto-annotation` | Overrides `secret.reloader.stakater.com/auto` |
| `--configmap-auto-annotation` | Overrides `configmap.reloader.stakater.com/auto` |
| `--auto-search-annotation` | Overrides `reloader.stakater.com/search` |
| `--search-match-annotation` | Overrides `reloader.stakater.com/match` |
| `--secret-annotation` | Overrides `secret.reloader.stakater.com/reload` |
| `--configmap-annotation` | Overrides `configmap.reloader.stakater.com/reload` |
| `--pause-deployment-annotation` | Overrides `deployment.reloader.stakater.com/pause-period` |
| `--pause-deployment-time-annotation` | Overrides `deployment.reloader.stakater.com/paused-at` |
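As a sketch, overriding the auto annotation key and then opting a workload in with the custom key (the `mycompany.com/...` key is made up):

```yaml
# Reloader container args:
args:
  - "--auto-annotation=mycompany.com/reload-auto"   # hypothetical custom key
```

```yaml
# A workload then opts in via the overridden key:
metadata:
  annotations:
    mycompany.com/reload-auto: "true"
```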
#### 5. 🕷️ Debugging

| Flag | Description |
|------|-------------|
| `--enable-pprof` | Enables `pprof` for profiling |
| `--pprof-addr` | Address to start the `pprof` server on. Default is `:6060` |
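A quick sketch of profiling a running instance, assuming Reloader exposes Go's standard `net/http/pprof` endpoints on the configured address (the deployment name is illustrative):

```bash
# Forward the pprof port locally
kubectl port-forward deploy/reloader 6060:6060

# Capture a 30-second CPU profile
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
```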
## Compatibility

Reloader is compatible with Kubernetes >= 1.19.
## Help

### Documentation

The Reloader documentation can be viewed on [the doc site](https://docs.stakater.com/reloader/). The doc source is in the [docs](./docs/) folder.

### Have a question?
File a GitHub [issue](https://github.com/stakater/Reloader/issues).

### Talk to us on Slack

Join and talk to us on Slack for discussing Reloader:

[Join Slack](https://stakater-community.slack.com/messages/CC5S05S12)
Please use the [issue tracker](https://github.com/stakater/Reloader/issues) to report any issues.

### Developing

1. Deploy Reloader
1. Run `okteto up` to activate your development container
1. `make build`
1. `./Reloader`
PRs are welcome. In general, we follow the "fork-and-pull" Git workflow:

1. **Fork** the repo on GitHub
1. **Clone** the project to your own machine

**NOTE:** Be sure to merge the latest from "upstream" before making a pull request!
## Release Processes

*Repository GitHub releases*: As requested by the community in [issue 685](https://github.com/stakater/Reloader/issues/685), Reloader now follows a manual release process. Releases are no longer made on every PR merged to the main branch, but manually on request.

To make a GitHub release:

1. Code owners create a release branch `release-vX.Y.Z` from `master`
1. Code owners run the [Init Release](https://github.com/stakater/Reloader/actions/workflows/init-branch-release.yaml) workflow to automatically generate the version and manifests on the release branch
    - Set the `TARGET_BRANCH` parameter to the release branch, i.e. `release-vX.Y.Z`
    - Set the `TARGET_VERSION` to the release version without the 'v', i.e. `X.Y.Z`
1. A PR is created to bump the image version on the release branch, example: [PR-798](https://github.com/stakater/Reloader/pull/798)
1. Code owners create a GitHub release with tag `vX.Y.Z` and target branch `release-vX.Y.Z`, which triggers the creation of images
1. Code owners create another branch from `master` and bump the Helm chart version as well as the Reloader image version
    - Code owners create a PR with the `release/helm-chart` label, example: [PR-846](https://github.com/stakater/Reloader/pull/846)

*Repository git tagging*: A push to the main branch will create a merge-image and merge-tag named `merge-${{ github.event.number }}`, for example `merge-800` when pull request number 800 is merged.
## Changelog

View the [releases page](https://github.com/stakater/Reloader/releases) to see what has changed in each release.

## License

Apache2 © [Stakater][website]
## About Stakater

`Reloader` is maintained by [Stakater][website]. Like it? Please let us know at [hello@stakater.com](mailto:hello@stakater.com)

See [our other projects](https://github.com/stakater) or contact us for professional services and queries at [hello@stakater.com](mailto:hello@stakater.com)

[website]: https://stakater.com
## Acknowledgements

- [ConfigmapController](https://github.com/fabric8io/configmapcontroller); we documented [here](docs/Reloader-vs-ConfigmapController.md) why we re-created Reloader
assets/web/reloader.jpg (binary image changed: 12 KiB → 117 KiB; not shown)
@@ -1,10 +1,8 @@
# Generated from deployments/kubernetes/templates/chart/Chart.yaml.tmpl

apiVersion: v1
name: reloader
description: Reloader chart that runs on kubernetes
version: 2.2.8
appVersion: v1.4.13
keywords:
- Reloader
- kubernetes
@@ -18,4 +16,4 @@ maintainers:
- name: rasheedamir
  email: rasheed@stakater.com
- name: faizanahmad055
  email: faizan@stakater.com
deployments/kubernetes/chart/reloader/README.md (new file):
# Reloader Helm Chart

If you have configured Helm on your cluster, you can add Reloader from our public chart repository and deploy it via Helm using the commands below. Follow the [Helm2 to Helm3 guide](../../../../docs/Helm2-to-Helm3.md) in case you have trouble migrating Reloader from Helm2 to Helm3.

## Installation
```bash
# Add the stakater helm repository
helm repo add stakater https://stakater.github.io/stakater-charts

helm repo update

helm install stakater/reloader # For helm3 add --generate-name flag or set the release name

helm install {{RELEASE_NAME}} stakater/reloader -n {{NAMESPACE}} --set reloader.watchGlobally=false # By default, Reloader watches in all namespaces. To watch in a single namespace, set watchGlobally=false

helm install stakater/reloader --set reloader.watchGlobally=false --namespace test --generate-name # Install Reloader in the test namespace, which will only watch Deployments, DaemonSets, StatefulSets and Rollouts in the test namespace

helm install stakater/reloader --set reloader.ignoreJobs=true --set reloader.ignoreCronJobs=true --generate-name # Install Reloader ignoring Jobs and CronJobs from reload monitoring
```

## Uninstalling

```bash
helm uninstall {{RELEASE_NAME}} -n {{NAMESPACE}}
```
## Parameters

### Global Parameters

| Parameter | Description | Type | Default |
| ------------------------- | --------------------------------------------------------------- | ----- | ------- |
| `global.imagePullSecrets` | Reference to one or more secrets to be used when pulling images | array | `[]` |

### Common Parameters

| Parameter | Description | Type | Default |
| ------------------ | ---------------------------------------- | ------ | ----------------- |
| `nameOverride` | Replace the name of the chart | string | `""` |
| `fullnameOverride` | Replace the generated name | string | `""` |
| `image` | Set container image name, tag and policy | map | `see values.yaml` |
### Core Reloader Parameters

| Parameter | Description | Type | Default |
| ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- | --------- |
| `reloader.autoReloadAll` | Automatically reload all workloads unless opted out (`auto: "false"`) | boolean | `false` |
| `reloader.isArgoRollouts` | Enable Argo `Rollouts`. Valid values are either `true` or `false` | boolean | `false` |
| `reloader.isOpenshift` | Enable OpenShift DeploymentConfigs. Valid values are either `true` or `false` | boolean | `false` |
| `reloader.ignoreSecrets` | To ignore secrets. Valid values are either `true` or `false`. Either `ignoreSecrets` or `ignoreConfigMaps` can be set, not both at the same time | boolean | `false` |
| `reloader.ignoreConfigMaps` | To ignore configmaps. Valid values are either `true` or `false` | boolean | `false` |
| `reloader.ignoreJobs` | To ignore Jobs from reload monitoring. Valid values are either `true` or `false`. Translates to `--ignored-workload-types=jobs` | boolean | `false` |
| `reloader.ignoreCronJobs` | To ignore CronJobs from reload monitoring. Valid values are either `true` or `false`. Translates to `--ignored-workload-types=cronjobs` | boolean | `false` |
| `reloader.reloadOnCreate` | Enable reload on create events. Valid values are either `true` or `false` | boolean | `false` |
| `reloader.reloadOnDelete` | Enable reload on delete events. Valid values are either `true` or `false` | boolean | `false` |
| `reloader.syncAfterRestart` | Enable sync after Reloader restarts for **Add** events, works only when reloadOnCreate is `true`. Valid values are either `true` or `false` | boolean | `false` |
| `reloader.reloadStrategy` | Strategy to trigger resource restart, set to either `default`, `env-vars` or `annotations` | enumeration | `default` |
| `reloader.ignoreNamespaces` | List of comma separated namespaces to ignore; if multiple are provided, they are combined with the AND operator | string | `""` |
| `reloader.namespaceSelector` | List of comma separated k8s label selectors for namespace selection. Only used when `reloader.watchGlobally` is `true`. See [LIST and WATCH filtering](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering) for more details on label selectors | string | `""` |
| `reloader.resourceLabelSelector` | List of comma separated label selectors; if multiple are provided, they are combined with the AND operator | string | `""` |
| `reloader.logFormat` | Set the log format. Value can be either `json` or `""` | string | `""` |
| `reloader.watchGlobally` | Allow Reloader to watch in all namespaces (`true`) or just in a single namespace (`false`) | boolean | `true` |
| `reloader.enableHA` | Enable leadership election allowing you to run multiple replicas | boolean | `false` |
| `reloader.enablePProf` | Enables pprof for profiling | boolean | `false` |
| `reloader.pprofAddr` | Address to start the pprof server on | string | `:6060` |
| `reloader.readOnlyRootFileSystem` | Enforce readOnlyRootFilesystem | boolean | `false` |
| `reloader.legacy.rbac` | | boolean | `false` |
| `reloader.matchLabels` | Pod labels to match | map | `{}` |
| `reloader.enableMetricsByNamespace` | Expose an additional Prometheus counter of reloads by namespace (this metric may have high cardinality in clusters with many namespaces) | boolean | `false` |
### Deployment Reloader Parameters

| Parameter | Description | Type | Default |
| ----------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ----------------- |
| `reloader.deployment.replicas` | Number of replicas; if you wish to run multiple replicas set `reloader.enableHA = true`. The replicas will be limited to 1 when `reloader.enableHA = false` | int | 1 |
| `reloader.deployment.revisionHistoryLimit` | Limit the number of revisions retained in the revision history | int | 2 |
| `reloader.deployment.nodeSelector` | Schedule the pod to a specific node based on set labels | map | `{}` |
| `reloader.deployment.affinity` | Set affinity rules on pod | map | `{}` |
| `reloader.deployment.securityContext` | Set pod security context | map | `{}` |
| `reloader.deployment.containerSecurityContext` | Set container security context | map | `{}` |
| `reloader.deployment.tolerations` | A list of `tolerations` to be applied to the deployment | array | `[]` |
| `reloader.deployment.topologySpreadConstraints` | Topology spread constraints for pod assignment | array | `[]` |
| `reloader.deployment.annotations` | Set deployment annotations | map | `{}` |
| `reloader.deployment.labels` | Set deployment labels, default to Stakater settings | array | `see values.yaml` |
| `reloader.deployment.env` | Support for extra environment variables | array | `[]` |
| `reloader.deployment.livenessProbe` | Set liveness probe timeout values | map | `{}` |
| `reloader.deployment.readinessProbe` | Set readiness probe timeout values | map | `{}` |
| `reloader.deployment.resources` | Set container requests and limits (e.g. CPU or memory) | map | `{}` |
| `reloader.deployment.pod.annotations` | Set annotations for pod | map | `{}` |
| `reloader.deployment.priorityClassName` | Set priority class for pod in cluster | string | `""` |
| `reloader.deployment.volumeMounts` | Mount volume | array | `[]` |
| `reloader.deployment.volumes` | Add volume to a pod | array | `[]` |
| `reloader.deployment.dnsConfig` | DNS configuration for pods | map | `{}` |
### Other Reloader Parameters

| Parameter | Description | Type | Default |
| -------------------------------------- | --------------------------------------------------------------- | ------- | ------- |
| `reloader.service` | | map | `{}` |
| `reloader.rbac.enabled` | Specifies whether role based access control should be created | boolean | `true` |
| `reloader.serviceAccount.create` | Specifies whether a ServiceAccount should be created | boolean | `true` |
| `reloader.custom_annotations` | Add custom annotations | map | `{}` |
| `reloader.serviceMonitor.enabled` | Enable to scrape Reloader's Prometheus metrics (legacy) | boolean | `false` |
| `reloader.podMonitor.enabled` | Enable to scrape Reloader's Prometheus metrics | boolean | `false` |
| `reloader.podDisruptionBudget.enabled` | Limit the number of pods of a replicated application | boolean | `false` |
| `reloader.netpol.enabled` | | boolean | `false` |
| `reloader.volumeMounts` | Mount volume | array | `[]` |
| `reloader.volumes` | Add volume to a pod | array | `[]` |
| `reloader.webhookUrl` | Add webhook to Reloader | string | `""` |
## ⚙️ Helm Chart Configuration Notes

### Selector Behavior

- Both `namespaceSelector` & `resourceLabelSelector` can be used together
- **Both conditions must be met** for a ConfigMap/Secret to trigger reloads
- Example: If a ConfigMap matches `resourceLabelSelector` but not `namespaceSelector`, it will be ignored
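A minimal `values.yaml` sketch combining the two selectors (label keys and values are illustrative):

```yaml
reloader:
  watchGlobally: true
  namespaceSelector: "reloader=enabled"       # namespaces must carry this label
  resourceLabelSelector: "reloader=enabled"   # and so must the ConfigMaps/Secrets
```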
### Important Limitations

- Only one of these resources can be ignored at a time:
  - `ignoreConfigMaps` **or** `ignoreSecrets`
  - Trying to ignore both will cause Helm template compilation errors
- The `ignoreJobs` and `ignoreCronJobs` flags can be used together or individually
  - When both are enabled, this translates to `--ignored-workload-types=jobs,cronjobs`
  - When used individually, this translates to `--ignored-workload-types=jobs` or `--ignored-workload-types=cronjobs`
- These flags prevent Reloader from monitoring and reloading the specified workload types
### Special Integrations

- OpenShift (`DeploymentConfig`) and Argo Rollouts support must be **explicitly enabled**
- This is required due to potential permission restrictions on clusters

### OpenShift Considerations

- Recent OpenShift versions (tested on 4.13.3) require:
  - Users to be in a dynamically assigned UID range
  - **Solution**: Unset `runAsUser` via `reloader.deployment.securityContext.runAsUser=null`
  - Let OpenShift assign the UID automatically during installation
### Core Functionality Flags

#### 🔄 `reloadOnCreate` Behavior

**When true:**

✅ New ConfigMaps/Secrets trigger rolling updates
✅ New deployments referencing existing resources reload
✅ In HA mode, the new leader reloads all tracked workloads

**When false:**

❌ Updates during leader downtime are missed
⏳ Potential 15s delay window (default `LeaseDuration`)

#### 🗑️ `reloadOnDelete` Behavior

**When true:**

✅ Deleted resources trigger rolling updates of referencing workloads

**When false:**

❌ Deletions have no effect on referencing pods

#### Default Settings

⚠️ All of these flags default to `false` (they must be enabled explicitly):

- `reloadOnCreate`
- `reloadOnDelete`
- `syncAfterRestart`
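A quick sketch of enabling all three at install time (release name and namespace are illustrative):

```bash
helm install reloader stakater/reloader -n reloader \
  --set reloader.reloadOnCreate=true \
  --set reloader.reloadOnDelete=true \
  --set reloader.syncAfterRestart=true
```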
### Deprecation Notice

- `serviceMonitor` will be removed in future releases in favor of `PodMonitor`
## Release Process

_Helm chart versioning_: The Reloader Helm chart is maintained in this repository and has its own semantic versioning. Helm charts and code releases are separate artifacts and are versioned separately. The manifest-making strategy relies on Kustomize. The Reloader Helm chart manages the two artifacts with these two fields:

- [`appVersion`](Chart.yaml) points to the released Reloader application image version listed on the [releases page](https://github.com/stakater/Reloader/releases)
- [`version`](Chart.yaml) sets the Reloader Helm chart version

The Helm chart is released to the chart registry whenever files in `deployments/kubernetes/chart/reloader/**` change on the main branch.

### To release the Helm chart

1. Create a new branch and update the Helm chart `appVersion` and `version`, example pull request: [PR-846](https://github.com/stakater/Reloader/pull/846)
1. Label the PR with `release/helm-chart`
1. After approval and just before squash, make sure the squash commit message represents all changes, because it will be used to autogenerate the changelog message
@@ -20,12 +20,27 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- end -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "reloader-chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "reloader-match-labels.chart" -}}
app: {{ template "reloader-fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name | quote }}
{{- end -}}

{{- define "reloader-labels.chart" -}}
{{ include "reloader-match-labels.chart" . }}
app.kubernetes.io/name: {{ template "reloader-name" . }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
heritage: {{ .Release.Service | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end -}}

{{/*
@@ -38,10 +53,10 @@ podAntiAffinity:
    podAffinityTerm:
      labelSelector:
        matchExpressions:
          - key: app.kubernetes.io/instance
            operator: In
            values:
              - {{ .Release.Name | quote }}
      topologyKey: "kubernetes.io/hostname"
{{- end -}}
@@ -63,3 +78,28 @@ Create the annotations to support helm3
meta.helm.sh/release-namespace: {{ .Release.Namespace | quote }}
meta.helm.sh/release-name: {{ .Release.Name | quote }}
{{- end -}}

{{/*
Create the namespace selector; it only takes effect when watching globally
*/}}
{{- define "reloader-namespaceSelector" -}}
{{- if and .Values.reloader.watchGlobally .Values.reloader.namespaceSelector -}}
{{ .Values.reloader.namespaceSelector }}
{{- end -}}
{{- end -}}

{{/*
Normalizes global.imagePullSecrets to a list of objects with name fields.
Supports both of these in values.yaml:
# - name: my-pull-secret
# - my-pull-secret
*/}}
{{- define "reloader-imagePullSecrets" -}}
{{- range $s := .Values.global.imagePullSecrets }}
{{- if kindIs "map" $s }}
- {{ toYaml $s | nindent 2 | trim }}
{{- else }}
- name: {{ $s }}
{{- end }}
{{- end }}
{{- end -}}
@@ -11,10 +11,10 @@ metadata:
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.rbac.labels }}
{{ tpl (toYaml .Values.reloader.rbac.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}-role
rules:
@@ -31,7 +31,7 @@ rules:
      - list
      - get
      - watch
{{- if (include "reloader-namespaceSelector" .) }}
  - apiGroups:
      - ""
    resources:
@@ -76,16 +76,7 @@ rules:
      - get
      - update
      - patch
{{- if .Values.reloader.ignoreCronJobs }}{{- else }}
  - apiGroups:
      - "batch"
    resources:
@@ -93,12 +84,18 @@ rules:
    verbs:
      - list
      - get
{{- end }}
{{- if .Values.reloader.ignoreJobs }}{{- else }}
  - apiGroups:
      - "batch"
    resources:
      - jobs
    verbs:
      - create
      - delete
      - list
      - get
{{- end}}
{{- if .Values.reloader.enableHA }}
  - apiGroups:
      - "coordination.k8s.io"
@@ -108,6 +105,17 @@ rules:
      - create
      - get
      - update
{{- end}}
{{- if .Values.reloader.enableCSIIntegration }}
  - apiGroups:
      - "secrets-store.csi.x-k8s.io"
    resources:
      - secretproviderclasspodstatuses
      - secretproviderclasses
    verbs:
      - list
      - get
      - watch
{{- end}}
  - apiGroups:
      - ""
@@ -11,10 +11,10 @@ metadata:
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.rbac.labels }}
{{ tpl (toYaml .Values.reloader.rbac.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}-role-binding
roleRef:
@@ -4,50 +4,49 @@ metadata:
  annotations:
{{ include "reloader-helm3.annotations" . | indent 4 }}
{{- if .Values.reloader.deployment.annotations }}
{{ tpl (toYaml .Values.reloader.deployment.annotations) . | indent 4 }}
{{- end }}
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.deployment.labels }}
{{ tpl (toYaml .Values.reloader.deployment.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}
  namespace: {{ .Values.namespace | default .Release.Namespace }}
spec:
{{- if not (.Values.reloader.enableHA) }}
  replicas: {{ min .Values.reloader.deployment.replicas 1 }}
{{- else }}
  replicas: {{ .Values.reloader.deployment.replicas }}
{{- end}}
  revisionHistoryLimit: {{ .Values.reloader.deployment.revisionHistoryLimit }}
  selector:
    matchLabels:
{{ include "reloader-match-labels.chart" . | indent 6 }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 6 }}
{{- end }}
  template:
    metadata:
{{- if .Values.reloader.deployment.pod.annotations }}
      annotations:
{{ tpl (toYaml .Values.reloader.deployment.pod.annotations) . | indent 8 }}
{{- end }}
      labels:
{{ include "reloader-labels.chart" . | indent 8 }}
{{- if .Values.reloader.deployment.labels }}
{{ tpl (toYaml .Values.reloader.deployment.labels) . | indent 8 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 8 }}
{{- end }}
    spec:
{{- if .Values.global.imagePullSecrets }}
      imagePullSecrets:
{{ include "reloader-imagePullSecrets" . | indent 8 }}
{{- end }}
{{- if .Values.reloader.deployment.nodeSelector }}
      nodeSelector:
@@ -57,8 +56,9 @@ spec:
      affinity:
{{- if .Values.reloader.deployment.affinity }}
{{ toYaml .Values.reloader.deployment.affinity | indent 8 }}
{{- else }}
{{ include "reloader-podAntiAffinity" . | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.reloader.deployment.tolerations }}
      tolerations:
@@ -71,12 +71,41 @@ spec:
{{- if .Values.reloader.deployment.priorityClassName }}
      priorityClassName: {{ .Values.reloader.deployment.priorityClassName }}
{{- end }}
{{- with .Values.reloader.deployment.dnsConfig }}
      dnsConfig:
{{- toYaml . | nindent 8 }}
{{- end }}
      containers:
{{- if .Values.global.imageRegistry }}
      - image: "{{ .Values.global.imageRegistry }}/{{ .Values.image.name }}:{{ .Values.image.tag }}"
{{- else }}
{{- if .Values.image.digest }}
      - image: "{{ .Values.image.repository }}@{{ .Values.image.digest }}"
{{- else }}
      - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
{{- end }}
{{- end }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        name: {{ template "reloader-fullname" . }}
{{- if or (.Values.reloader.deployment.env.open) (.Values.reloader.deployment.env.secret) (.Values.reloader.deployment.env.field) (.Values.reloader.deployment.env.existing) (eq .Values.reloader.watchGlobally false) (.Values.reloader.enableHA)}}
        env:
        - name: GOMAXPROCS
{{- if .Values.reloader.deployment.gomaxprocsOverride }}
          value: {{ .Values.reloader.deployment.gomaxprocsOverride | quote }}
{{- else }}
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
              divisor: '1'
{{- end }}
        - name: GOMEMLIMIT
{{- if .Values.reloader.deployment.gomemlimitOverride }}
          value: {{ .Values.reloader.deployment.gomemlimitOverride | quote }}
{{- else }}
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
              divisor: '1'
{{- end }}
{{- range $name, $value := .Values.reloader.deployment.env.open }}
{{- if not (empty $value) }}
        - name: {{ $name | quote }}
@@ -118,6 +147,15 @@ spec:
          fieldRef:
            fieldPath: metadata.namespace
{{- end }}

        - name: RELOADER_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

        - name: RELOADER_DEPLOYMENT_NAME
          value: {{ template "reloader-fullname" . }}

{{- if .Values.reloader.enableHA }}
        - name: POD_NAME
          valueFrom:
@@ -128,8 +166,10 @@ spec:
          fieldRef:
            fieldPath: metadata.namespace
{{- end }}
{{- if .Values.reloader.enableMetricsByNamespace }}
        - name: METRICS_COUNT_BY_NAMESPACE
          value: enabled
{{- end }}

        ports:
        - name: http
          containerPort: 9090
@@ -160,31 +200,55 @@ spec:
        securityContext:
{{- toYaml $containerSecurityContext | nindent 10 }}

{{- if (or (.Values.reloader.deployment.volumeMounts) (eq .Values.reloader.readOnlyRootFileSystem true)) }}
        volumeMounts:
{{- if eq .Values.reloader.readOnlyRootFileSystem true }}
        - mountPath: /tmp/
          name: tmp-volume
{{- end }}
{{- with .Values.reloader.deployment.volumeMounts }}
{{- . | toYaml | nindent 10 }}
{{- end }}
{{- end }}
{{- if or (.Values.reloader.logFormat) (.Values.reloader.logLevel) (.Values.reloader.ignoreSecrets) (.Values.reloader.ignoreNamespaces) (include "reloader-namespaceSelector" .) (.Values.reloader.resourceLabelSelector) (.Values.reloader.ignoreConfigMaps) (.Values.reloader.custom_annotations) (eq .Values.reloader.isArgoRollouts true) (eq .Values.reloader.reloadOnCreate true) (eq .Values.reloader.reloadOnDelete true) (ne .Values.reloader.reloadStrategy "default") (.Values.reloader.enableHA) (.Values.reloader.autoReloadAll) (.Values.reloader.ignoreJobs) (.Values.reloader.ignoreCronJobs) (.Values.reloader.enableCSIIntegration)}}
        args:
{{- if .Values.reloader.logFormat }}
          - "--log-format={{ .Values.reloader.logFormat }}"
{{- end }}
{{- if .Values.reloader.logLevel }}
          - "--log-level={{ .Values.reloader.logLevel }}"
{{- end }}
{{- if .Values.reloader.ignoreSecrets }}
          - "--resources-to-ignore=secrets"
{{- end }}
{{- if .Values.reloader.ignoreConfigMaps }}
          - "--resources-to-ignore=configMaps"
{{- end }}
{{- if and (.Values.reloader.ignoreJobs) (.Values.reloader.ignoreCronJobs) }}
          - "--ignored-workload-types=jobs,cronjobs"
{{- else if .Values.reloader.ignoreJobs }}
          - "--ignored-workload-types=jobs"
{{- else if .Values.reloader.ignoreCronJobs }}
          - "--ignored-workload-types=cronjobs"
{{- end }}
{{- if .Values.reloader.ignoreNamespaces }}
          - "--namespaces-to-ignore={{ .Values.reloader.ignoreNamespaces }}"
{{- end }}
{{- if (include "reloader-namespaceSelector" .) }}
          - "--namespace-selector=\"{{ include "reloader-namespaceSelector" . }}\""
{{- end }}
{{- if .Values.reloader.resourceLabelSelector }}
          - "--resource-label-selector={{ .Values.reloader.resourceLabelSelector }}"
{{- end }}
{{- end }}
{{- if .Values.reloader.enablePProf }}
          - "--enable-pprof"
{{- if and .Values.reloader.pprofAddr }}
          - "--pprof-addr={{ .Values.reloader.pprofAddr }}"
{{- end }}
{{- end }}
{{- if .Values.reloader.enableCSIIntegration }}
          - "--enable-csi-integration=true"
{{- end }}
{{- if .Values.reloader.custom_annotations }}
{{- if .Values.reloader.custom_annotations.configmap }}
          - "--configmap-annotation"
@@ -197,6 +261,14 @@ spec:
{{- if .Values.reloader.custom_annotations.auto }}
          - "--auto-annotation"
          - "{{ .Values.reloader.custom_annotations.auto }}"
{{- end }}
{{- if .Values.reloader.custom_annotations.secret_auto }}
          - "--secret-auto-annotation"
          - "{{ .Values.reloader.custom_annotations.secret_auto }}"
{{- end }}
{{- if .Values.reloader.custom_annotations.configmap_auto }}
          - "--configmap-auto-annotation"
          - "{{ .Values.reloader.custom_annotations.configmap_auto }}"
{{- end }}
{{- if .Values.reloader.custom_annotations.search }}
          - "--auto-search-annotation"
@@ -205,6 +277,14 @@ spec:
{{- if .Values.reloader.custom_annotations.match }}
          - "--search-match-annotation"
          - "{{ .Values.reloader.custom_annotations.match }}"
{{- end }}
{{- if .Values.reloader.custom_annotations.pausePeriod }}
          - "--pause-deployment-annotation"
          - "{{ .Values.reloader.custom_annotations.pausePeriod }}"
{{- end }}
{{- if .Values.reloader.custom_annotations.pauseTime }}
          - "--pause-deployment-time-annotation"
          - "{{ .Values.reloader.custom_annotations.pauseTime }}"
{{- end }}
{{- if .Values.reloader.webhookUrl }}
          - "--webhook-url"
@@ -217,6 +297,9 @@ spec:
{{- if eq .Values.reloader.reloadOnCreate true }}
          - "--reload-on-create={{ .Values.reloader.reloadOnCreate }}"
{{- end }}
{{- if eq .Values.reloader.reloadOnDelete true }}
          - "--reload-on-delete={{ .Values.reloader.reloadOnDelete }}"
{{- end }}
{{- if eq .Values.reloader.syncAfterRestart true }}
          - "--sync-after-restart={{ .Values.reloader.syncAfterRestart }}"
{{- end }}
@@ -241,8 +324,13 @@ spec:
{{- if hasKey .Values.reloader.deployment "automountServiceAccountToken" }}
      automountServiceAccountToken: {{ .Values.reloader.deployment.automountServiceAccountToken }}
{{- end }}
{{- if (or (.Values.reloader.deployment.volumes) (eq .Values.reloader.readOnlyRootFileSystem true)) }}
      volumes:
{{- if eq .Values.reloader.readOnlyRootFileSystem true }}
      - emptyDir: {}
        name: tmp-volume
{{- end }}
{{- with .Values.reloader.deployment.volumes }}
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- end }}
@@ -7,16 +7,16 @@ metadata:
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}
  namespace: {{ .Values.namespace | default .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
{{ include "reloader-match-labels.chart" . | indent 6 }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 6 }}
{{- end }}
  policyTypes:
    - Ingress
@@ -3,9 +3,15 @@ apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ template "reloader-fullname" . }}
  namespace: {{ .Values.namespace | default .Release.Namespace }}
spec:
{{- if .Values.reloader.podDisruptionBudget.maxUnavailable }}
  maxUnavailable: {{ .Values.reloader.podDisruptionBudget.maxUnavailable }}
{{- end }}
{{- if and .Values.reloader.podDisruptionBudget.minAvailable (not .Values.reloader.podDisruptionBudget.maxUnavailable)}}
  minAvailable: {{ .Values.reloader.podDisruptionBudget.minAvailable }}
{{- end }}
  selector:
    matchLabels:
{{ include "reloader-match-labels.chart" . | nindent 6 }}
{{- end }}
@@ -14,6 +14,8 @@ metadata:
  name: {{ template "reloader-fullname" . }}
{{- if .Values.reloader.podMonitor.namespace }}
  namespace: {{ tpl .Values.reloader.podMonitor.namespace . }}
{{- else }}
  namespace: {{ .Values.namespace | default .Release.Namespace }}
{{- end }}
spec:
  podMetricsEndpoints:
@@ -54,5 +56,5 @@ spec:
      - {{ .Release.Namespace }}
  selector:
    matchLabels:
{{ include "reloader-match-labels.chart" . | nindent 6 }}
{{- end }}
@@ -11,10 +11,10 @@ metadata:
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.rbac.labels }}
{{ tpl (toYaml .Values.reloader.rbac.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}-role
  namespace: {{ .Values.namespace | default .Release.Namespace }}
@@ -67,16 +67,6 @@ rules:
      - get
      - update
      - patch
  - apiGroups:
      - "batch"
    resources:
@@ -90,6 +80,9 @@ rules:
      - jobs
    verbs:
      - create
      - delete
      - list
      - get
{{- if .Values.reloader.enableHA }}
  - apiGroups:
      - "coordination.k8s.io"
@@ -99,6 +92,17 @@ rules:
      - create
      - get
      - update
{{- end}}
{{- if .Values.reloader.enableCSIIntegration }}
  - apiGroups:
      - "secrets-store.csi.x-k8s.io"
    resources:
      - secretproviderclasspodstatuses
      - secretproviderclasses
    verbs:
      - list
      - get
      - watch
{{- end}}
  - apiGroups:
      - ""
@@ -108,3 +112,34 @@ rules:
      - create
      - patch
{{- end }}

---

{{- if .Values.reloader.rbac.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
{{ include "reloader-helm3.annotations" . | indent 4 }}
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.rbac.labels }}
{{ tpl (toYaml .Values.reloader.rbac.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}-metadata-role
  namespace: {{ .Values.namespace | default .Release.Namespace }}
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - list
      - get
      - watch
      - create
      - update
{{- end }}
@@ -11,10 +11,10 @@ metadata:
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.rbac.labels }}
{{ tpl (toYaml .Values.reloader.rbac.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}-role-binding
  namespace: {{ .Values.namespace | default .Release.Namespace }}
@@ -27,3 +27,30 @@ subjects:
    name: {{ template "reloader-serviceAccountName" . }}
    namespace: {{ .Values.namespace | default .Release.Namespace }}
{{- end }}

---
{{- if .Values.reloader.rbac.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
{{ include "reloader-helm3.annotations" . | indent 4 }}
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.rbac.labels }}
{{ tpl (toYaml .Values.reloader.rbac.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}-metadata-role-binding
  namespace: {{ .Values.namespace | default .Release.Namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ template "reloader-fullname" . }}-metadata-role
subjects:
  - kind: ServiceAccount
    name: {{ template "reloader-serviceAccountName" . }}
    namespace: {{ .Values.namespace | default .Release.Namespace }}
{{- end }}
@@ -5,22 +5,22 @@ metadata:
  annotations:
{{ include "reloader-helm3.annotations" . | indent 4 }}
{{- if .Values.reloader.service.annotations }}
{{ tpl (toYaml .Values.reloader.service.annotations) . | indent 4 }}
{{- end }}
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.service.labels }}
{{ tpl (toYaml .Values.reloader.service.labels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-fullname" . }}
  namespace: {{ .Values.namespace | default .Release.Namespace }}
spec:
  selector:
{{- if .Values.reloader.deployment.labels }}
{{ tpl (toYaml .Values.reloader.deployment.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  ports:
    - port: {{ .Values.reloader.service.port }}
@@ -2,7 +2,8 @@
apiVersion: v1
kind: ServiceAccount
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets: {{ toYaml .Values.global.imagePullSecrets | nindent 2 }}
imagePullSecrets:
{{ include "reloader-imagePullSecrets" . | indent 2 }}
{{- end }}
{{- if hasKey .Values.reloader.serviceAccount "automountServiceAccountToken" }}
automountServiceAccountToken: {{ .Values.reloader.serviceAccount.automountServiceAccountToken }}
@@ -11,15 +12,15 @@ metadata:
  annotations:
{{ include "reloader-helm3.annotations" . | indent 4 }}
{{- if .Values.reloader.serviceAccount.annotations }}
{{ toYaml .Values.reloader.serviceAccount.annotations | indent 4 }}
{{ tpl (toYaml .Values.reloader.serviceAccount.annotations) . | indent 4 }}
{{- end }}
  labels:
{{ include "reloader-labels.chart" . | indent 4 }}
{{- if .Values.reloader.serviceAccount.labels }}
{{ toYaml .Values.reloader.serviceAccount.labels | indent 4 }}
{{ tpl (toYaml .Values.reloader.serviceAccount.labels) . | indent 4 }}
{{- end }}
{{- if .Values.reloader.matchLabels }}
{{ toYaml .Values.reloader.matchLabels | indent 4 }}
{{ tpl (toYaml .Values.reloader.matchLabels) . | indent 4 }}
{{- end }}
  name: {{ template "reloader-serviceAccountName" . }}
  namespace: {{ .Values.namespace | default .Release.Namespace }}
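The switch from an inline `toYaml` to the `reloader-imagePullSecrets` helper presumably exists so the chart can accept both value shapes that the `values.yaml` examples further down show, i.e. either of the following (a sketch; the secret name is a placeholder):

```yaml
global:
  imagePullSecrets:
    - name: my-pull-secret
# or the shorthand form:
# global:
#   imagePullSecrets:
#     - my-pull-secret
```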
@@ -14,6 +14,8 @@ metadata:
  name: {{ template "reloader-fullname" . }}
{{- if .Values.reloader.serviceMonitor.namespace }}
  namespace: {{ tpl .Values.reloader.serviceMonitor.namespace . }}
{{- else }}
  namespace: {{ .Values.namespace | default .Release.Namespace }}
{{- end }}
spec:
  endpoints:
@@ -54,5 +56,5 @@ spec:
        - {{ .Release.Namespace }}
  selector:
    matchLabels:
{{ include "reloader-labels.chart" . | nindent 6 }}
{{ include "reloader-match-labels.chart" . | nindent 6 }}
{{- end }}
@@ -0,0 +1,40 @@
{{- if and (.Capabilities.APIVersions.Has "autoscaling.k8s.io/v1") (.Values.reloader.verticalPodAutoscaler.enabled) }}
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: {{ template "reloader-fullname" . }}
  namespace: {{ .Values.namespace | default .Release.Namespace }}
  labels:
    {{- include "reloader-labels.chart" . | nindent 4 }}
spec:
  {{- with .Values.reloader.verticalPodAutoscaler.recommenders }}
  recommenders:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  resourcePolicy:
    containerPolicies:
      - containerName: {{ template "reloader-fullname" . }}
        {{- with .Values.reloader.verticalPodAutoscaler.controlledResources }}
        controlledResources:
          {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- if .Values.reloader.verticalPodAutoscaler.controlledValues }}
        controlledValues: {{ .Values.reloader.verticalPodAutoscaler.controlledValues }}
        {{- end }}
        {{- if .Values.reloader.verticalPodAutoscaler.maxAllowed }}
        maxAllowed:
          {{ toYaml .Values.reloader.verticalPodAutoscaler.maxAllowed | nindent 8 }}
        {{- end }}
        {{- if .Values.reloader.verticalPodAutoscaler.minAllowed }}
        minAllowed:
          {{ toYaml .Values.reloader.verticalPodAutoscaler.minAllowed | nindent 8 }}
        {{- end }}
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "reloader-fullname" . }}
  {{- with .Values.reloader.verticalPodAutoscaler.updatePolicy }}
  updatePolicy:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
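Rendered against this template, a values override along these lines (a minimal sketch; the resource figures are illustrative, the keys come from the chart's `values.yaml` shown further down) would produce a VPA that manages only the Reloader container:

```yaml
reloader:
  verticalPodAutoscaler:
    enabled: true
    controlledResources:
      - cpu
      - memory
    maxAllowed:
      cpu: 200m
      memory: 256Mi
    updatePolicy:
      updateMode: Auto
```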
@@ -48,3 +48,57 @@ tests:
    asserts:
      - isEmpty:
          path: spec.template.spec.containers[0].securityContext

  - it: template still sets POD_NAME and POD_NAMESPACE environment variables when enableHA is true
    set:
      reloader:
        enableHA: true
    asserts:
      - contains:
          path: spec.template.spec.containers[0].env
          content:
            name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name

  - it: sets ignored-workload-types argument when ignoreJobs is true
    set:
      reloader:
        ignoreJobs: true
    asserts:
      - contains:
          path: spec.template.spec.containers[0].args
          content: "--ignored-workload-types=jobs"

  - it: sets ignored-workload-types argument when ignoreCronJobs is true
    set:
      reloader:
        ignoreCronJobs: true
    asserts:
      - contains:
          path: spec.template.spec.containers[0].args
          content: "--ignored-workload-types=cronjobs"

  - it: sets ignored-workload-types argument when both ignoreJobs and ignoreCronJobs are true
    set:
      reloader:
        ignoreJobs: true
        ignoreCronJobs: true
    asserts:
      - contains:
          path: spec.template.spec.containers[0].args
          content: "--ignored-workload-types=jobs,cronjobs"

  - it: does not set ignored-workload-types argument when both ignoreJobs and ignoreCronJobs are false
    set:
      reloader:
        ignoreJobs: false
        ignoreCronJobs: false
    asserts:
      - notContains:
          path: spec.template.spec.containers[0].args
          content: "--ignored-workload-types=jobs"
      - notContains:
          path: spec.template.spec.containers[0].args
          content: "--ignored-workload-types=cronjobs"
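For orientation, the rendered container spec these helm-unittest assertions inspect would carry the flag roughly as follows when both values are true (a sketch of the expected render, not verbatim chart output):

```yaml
spec:
  template:
    spec:
      containers:
        - name: reloader-reloader  # actual name depends on the release
          args:
            - "--ignored-workload-types=jobs,cronjobs"
```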
@@ -3,7 +3,12 @@ global:
  ## Reference to one or more secrets to be used when pulling images
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  imageRegistry: ""
  imagePullSecrets: []
  #imagePullSecrets:
  # - name: my-pull-secret
  #imagePullSecrets:
  # - my-pull-secret

kubernetes:
  host: https://kubernetes.default
@@ -11,30 +16,68 @@ kubernetes:
nameOverride: ""
fullnameOverride: ""

image:
  name: stakater/reloader
  repository: ghcr.io/stakater/reloader
  tag: v1.4.13
  # digest: sha256:1234567
  pullPolicy: IfNotPresent

reloader:
  autoReloadAll: false
  isArgoRollouts: false
  isOpenshift: false
  ignoreSecrets: false
  ignoreConfigMaps: false
  # Set to true to exclude Job workloads from automatic reload monitoring
  # Useful when you don't want Jobs to be restarted when their referenced ConfigMaps/Secrets change
  ignoreJobs: false
  # Set to true to exclude CronJob workloads from automatic reload monitoring
  # Useful when you don't want CronJobs to be restarted when their referenced ConfigMaps/Secrets change
  ignoreCronJobs: false
  reloadOnCreate: false
  reloadOnDelete: false
  syncAfterRestart: false
  reloadStrategy: default # Set to default, env-vars or annotations
  ignoreNamespaces: "" # Comma separated list of namespaces to ignore
  namespaceSelector: "" # Comma separated list of k8s label selectors for namespaces selection
  resourceLabelSelector: "" # Comma separated list of k8s label selectors for configmap/secret selection
  logFormat: "" #json
  logFormat: "" # json
  logLevel: info # Log level to use (trace, debug, info, warning, error, fatal and panic)
  watchGlobally: true
  # Set to true to enable leadership election allowing you to run multiple replicas
  enableHA: false
  # Set to true to enable pprof for profiling
  enablePProf: false
  enableCSIIntegration: false
  # Address to start pprof server on. Default is ":6060"
  pprofAddr: ":6060"
  # Set to true if you have a pod security policy that enforces readOnlyRootFilesystem
  readOnlyRootFileSystem: false
  legacy:
    rbac: false
  matchLabels: {}
  # Set to true to expose a prometheus counter of reloads by namespace (this metric may have high cardinality in clusters with many namespaces)
  enableMetricsByNamespace: false
  deployment:
    # Specifies the deployment DNS configuration.
    dnsConfig: {}
    # nameservers:
    #   - 1.2.3.4
    # searches:
    #   - ns1.svc.cluster-domain.example
    #   - my.dns.search.suffix
    # options:
    #   - name: ndots
    #     value: "1"
    #   - name: attempts
    #     value: "3"

    # If you wish to run multiple replicas set reloader.enableHA = true
    replicas: 1

    revisionHistoryLimit: 2

    nodeSelector:
    # cloud.google.com/gke-nodepool: default-pool

@@ -49,11 +92,17 @@ reloader:
    # operator: "Exists"
    affinity: {}

    volumeMounts: []
    volumes: []

    securityContext:
      runAsNonRoot: true
      runAsUser: 65534
      seccompProfile:
        type: RuntimeDefault

    containerSecurityContext: {}
    containerSecurityContext:
      {}
      # capabilities:
      #   drop:
      #     - ALL
@@ -77,18 +126,14 @@ reloader:
    #     whenUnsatisfiable: DoNotSchedule
    #     labelSelector:
    #       matchLabels:
    #         app: my-app
    #         app.kubernetes.io/instance: my-app
    topologySpreadConstraints: []

    annotations: {}
    labels:
      provider: stakater
      group: com.stakater.platform
      version: v1.0.50
    image:
      name: ghcr.io/stakater/reloader
      tag: v1.0.50
      pullPolicy: IfNotPresent
      version: v1.4.13
    # Support for extra environment variables.
    env:
      # `open` supports key-value pairs as environment variables.
@@ -139,7 +184,14 @@ reloader:
    # imagePullSecrets:
    # - name: myregistrykey

  service: {}
  # Put "0" in either to have go runtime ignore the set value.
  # Otherwise, see https://pkg.go.dev/runtime#hdr-Environment_Variables for GOMAXPROCS and GOMEMLIMIT
  gomaxprocsOverride: ""
  gomemlimitOverride: ""

  service:
    {}
    # labels: {}
    # annotations: {}
    # port: 9090
@@ -269,6 +321,9 @@ reloader:
    enabled: false
    # Set the minimum available replicas
    # minAvailable: 1
    # OR Set the maximum unavailable replicas
    # maxUnavailable: 1
    # If both defined only maxUnavailable will be used

  netpol:
    enabled: false
@@ -277,5 +332,36 @@ reloader:
    #   matchLabels:
    #     app.kubernetes.io/name: prometheus
    to: []

  # Enable vertical pod autoscaler
  verticalPodAutoscaler:
    enabled: false

    # Recommender responsible for generating recommendation for the object.
    # List should be empty (then the default recommender will generate the recommendation)
    # or contain exactly one recommender.
    # recommenders:
    #   - name: custom-recommender-performance

    # List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory
    controlledResources: []
    # Specifies which resource values should be controlled: RequestsOnly or RequestsAndLimits.
    # controlledValues: RequestsAndLimits

    # Define the max allowed resources for the pod
    maxAllowed: {}
    # cpu: 200m
    # memory: 100Mi
    # Define the min allowed resources for the pod
    minAllowed: {}
    # cpu: 200m
    # memory: 100Mi

    updatePolicy:
      # Specifies minimal number of replicas which need to be alive for VPA Updater to attempt pod eviction
      # minReplicas: 1
      # Specifies whether recommended updates are applied when a Pod is started and whether recommended updates
      # are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto".
      updateMode: Auto

  webhookUrl: ""
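As a usage sketch for the knobs above, a minimal override file that opts into HA with two replicas and excludes Jobs from reloads might look like this (all keys come from the defaults above):

```yaml
reloader:
  enableHA: true
  ignoreJobs: true
  deployment:
    replicas: 2
```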
@@ -6,3 +6,4 @@ resources:
- manifests/clusterrolebinding.yaml
- manifests/serviceaccount.yaml
- manifests/deployment.yaml
- manifests/role.yaml
@@ -1,18 +1,8 @@
---
# Source: reloader/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-namespace: "default"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.0.50"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader-role
rules:
  - apiGroups:
@@ -58,6 +48,9 @@ rules:
      - jobs
    verbs:
      - create
      - delete
      - list
      - get
  - apiGroups:
      - ""
    resources:
@@ -1,18 +1,8 @@
---
# Source: reloader/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-namespace: "default"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.0.50"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
@@ -3,18 +3,6 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    meta.helm.sh/release-namespace: "default"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.0.50"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
    group: com.stakater.platform
    provider: stakater
    version: v1.0.50
  name: reloader-reloader
  namespace: default
spec:
@@ -23,49 +11,65 @@ spec:
  selector:
    matchLabels:
      app: reloader-reloader
      release: "reloader"
  template:
    metadata:
      labels:
        app: reloader-reloader
        chart: "reloader-1.0.50"
        release: "reloader"
        heritage: "Helm"
        app.kubernetes.io/managed-by: "Helm"
        group: com.stakater.platform
        provider: stakater
        version: v1.0.50
    spec:
      containers:
        - image: "ghcr.io/stakater/reloader:v1.0.50"
          imagePullPolicy: IfNotPresent
          name: reloader-reloader
        - image: "ghcr.io/stakater/reloader:v1.4.13"
          imagePullPolicy: IfNotPresent
          name: reloader-reloader
          env:
            - name: GOMAXPROCS
              valueFrom:
                resourceFieldRef:
                  resource: limits.cpu
                  divisor: '1'
            - name: GOMEMLIMIT
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: '1'
            - name: RELOADER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace

          ports:
            - name: http
              containerPort: 9090
          livenessProbe:
            httpGet:
              path: /live
              port: http
            timeoutSeconds: 5
            failureThreshold: 5
            periodSeconds: 10
            successThreshold: 1
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /metrics
              port: http
            timeoutSeconds: 5
            failureThreshold: 5
            periodSeconds: 10
            successThreshold: 1
            initialDelaySeconds: 10

          securityContext:
            {}
          securityContext:
            - name: RELOADER_DEPLOYMENT_NAME
              value: reloader-reloader
          ports:
            - name: http
              containerPort: 9090
          livenessProbe:
            httpGet:
              path: /live
              port: http
            timeoutSeconds: 5
            failureThreshold: 5
            periodSeconds: 10
            successThreshold: 1
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /metrics
              port: http
            timeoutSeconds: 5
            failureThreshold: 5
            periodSeconds: 10
            successThreshold: 1
            initialDelaySeconds: 10
          securityContext: {}
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
            requests:
              cpu: 10m
              memory: 512Mi
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: reloader-reloader
deployments/kubernetes/manifests/role.yaml (new file, 32 lines)
@@ -0,0 +1,32 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: reloader-reloader-metadata-role
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - list
      - get
      - watch
      - create
      - update

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: reloader-reloader-metadata-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: reloader-reloader
    namespace: default
roleRef:
  kind: Role
  name: reloader-reloader-metadata-role
  apiGroup: rbac.authorization.k8s.io
@@ -3,14 +3,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-namespace: "default"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.0.50"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader
  namespace: default
@@ -1,127 +1,115 @@
---
# Source: reloader/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-namespace: "default"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.0.50"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader
  namespace: default
---
# Source: reloader/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1

kind: Role
metadata:
  name: reloader-reloader-metadata-role
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - list
      - get
      - watch
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-namespace: "default"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.0.50"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader-role
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - deployments
      - daemonsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - "batch"
    resources:
      - cronjobs
    verbs:
      - list
      - get
  - apiGroups:
      - "batch"
    resources:
      - jobs
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - apps
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - extensions
    resources:
      - deployments
      - daemonsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - batch
    resources:
      - cronjobs
    verbs:
      - list
      - get
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - create
      - delete
      - list
      - get
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: reloader-reloader-metadata-rolebinding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: reloader-reloader-metadata-role
subjects:
  - kind: ServiceAccount
    name: reloader-reloader
    namespace: default
---
# Source: reloader/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-namespace: "default"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.0.50"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
  name: reloader-reloader-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reloader-reloader-role
subjects:
  - kind: ServiceAccount
    name: reloader-reloader
    namespace: default
  - kind: ServiceAccount
    name: reloader-reloader
    namespace: default
---
# Source: reloader/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    meta.helm.sh/release-namespace: "default"
    meta.helm.sh/release-name: "reloader"
  labels:
    app: reloader-reloader
    chart: "reloader-1.0.50"
    release: "reloader"
    heritage: "Helm"
    app.kubernetes.io/managed-by: "Helm"
    group: com.stakater.platform
    provider: stakater
    version: v1.0.50
  name: reloader-reloader
  namespace: default
spec:
@@ -130,49 +118,64 @@ spec:
  selector:
    matchLabels:
      app: reloader-reloader
      release: "reloader"
  template:
    metadata:
      labels:
        app: reloader-reloader
        chart: "reloader-1.0.50"
        release: "reloader"
        heritage: "Helm"
        app.kubernetes.io/managed-by: "Helm"
        group: com.stakater.platform
        provider: stakater
        version: v1.0.50
    spec:
      containers:
        - image: "ghcr.io/stakater/reloader:v1.0.50"
        - env:
            - name: GOMAXPROCS
              valueFrom:
                resourceFieldRef:
                  divisor: "1"
                  resource: limits.cpu
            - name: GOMEMLIMIT
              valueFrom:
                resourceFieldRef:
                  divisor: "1"
                  resource: limits.memory
            - name: RELOADER_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: RELOADER_DEPLOYMENT_NAME
              value: reloader-reloader
          image: ghcr.io/stakater/reloader:v1.4.13
          imagePullPolicy: IfNotPresent
          name: reloader-reloader

          ports:
            - name: http
              containerPort: 9090
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /live
              port: http
            timeoutSeconds: 5
            failureThreshold: 5
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            initialDelaySeconds: 10
            timeoutSeconds: 5
          name: reloader-reloader
          ports:
            - containerPort: 9090
              name: http
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /metrics
              port: http
            timeoutSeconds: 5
            failureThreshold: 5
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            initialDelaySeconds: 10

          securityContext:
            {}
          securityContext:
            timeoutSeconds: 5
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
            requests:
              cpu: 10m
              memory: 512Mi
          securityContext: {}
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: reloader-reloader
@@ -23,6 +23,8 @@ reloader:
  legacy:
    rbac: false
  matchLabels: {}
  # Set to true to expose a prometheus counter of reloads by namespace (this metric may have high cardinality in clusters with many namespaces)
  enableMetricsByNamespace: false
  deployment:
    replicas: 1
    nodeSelector:
docs-nginx.conf (new file, 11 lines)
@@ -0,0 +1,11 @@
server {
    listen 8080;
    root /usr/share/nginx/html/;
    index index.html;
    error_page 403 404 /404.html;
    location = /404.html {
        internal;
    }
    # redirects issued by nginx will be relative
    absolute_redirect off;
}
@@ -2,17 +2,17 @@

Reloader can alert when it triggers a rolling upgrade on Deployments or StatefulSets. A webhook notification alert will be sent to the configured webhook server with all the required information.

## Enabling the feature
## Enabling

In-order to enable this feature, you need to update the `reloader.env.secret` section of values.yaml providing the information needed for alert.
In order to enable this feature, update the `reloader.env.secret` section of `values.yaml`, providing the information needed for the alert:

```yaml
ALERT_ON_RELOAD: [ true/false ] Default: false
ALERT_SINK: [ slack/webhook ] Default: webhook
ALERT_ON_RELOAD: [ true/false ] Default: false
ALERT_SINK: [ slack/teams/gchat/webhook ] Default: webhook
ALERT_WEBHOOK_URL: Required if ALERT_ON_RELOAD is true
ALERT_ADDITIONAL_INFO: Any additional information to be added to alert
```

## Slack incoming-webhook creation docs
## Slack Incoming-Webhook Creation Docs

[Sending messages using Incoming Webhooks](https://api.slack.com/messaging/webhooks)
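Concretely, the `reloader.env.secret` section might be populated like this (a sketch; the webhook URL and extra info are placeholders):

```yaml
reloader:
  env:
    secret:
      ALERT_ON_RELOAD: "true"
      ALERT_SINK: "slack"
      ALERT_WEBHOOK_URL: "https://hooks.slack.com/services/T000/B000/XXXX"
      ALERT_ADDITIONAL_INFO: "cluster: production"
```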
@@ -8,7 +8,7 @@ There are 3 steps involved in migrating the Reloader from Helm2 to Helm3.

### Step 1

Install the helm-2to3 plugin
Install the `helm-2to3` plugin

```bash
helm3 plugin install https://github.com/helm/helm-2to3
@@ -1,21 +1,32 @@
# How it works?
# How Does Reloader Work?

Reloader watches for `ConfigMap` and `Secret` and detects if there are changes in data of these objects. After change detection Reloader performs rolling upgrade on relevant Pods via associated `Deployment`, `Daemonset` and `Statefulset`.
Reloader watches for `ConfigMap` and `Secret` and detects if there are changes in the data of these objects. After change detection, Reloader performs a rolling upgrade on the relevant Pods via the associated `Deployment`, `Daemonset` and `Statefulset`:

## How change detection works
```mermaid
flowchart LR
    subgraph Reloader
        controller("Controller watches in a loop") -- "Detects a change" --> upgrade_handler("Upgrade handler checks if the change is a valid data change by comparing the change hash")
        upgrade_handler -- "Update resource" --> update_resource("Updates the resource with computed hash of change")
    end
    Reloader -- "Watches" --> secret_configmaps("Secrets/ConfigMaps")
    Reloader -- "Updates resources with Reloader environment variable" --> resources("Deployments/DaemonSets/StatefulSets resources with Reloader annotation")
    resources -- "Restart pods based on StrategyType" --> Pods
```

Reloader watches changes in `configmaps` and `secrets` data. As soon as it detects a change in these. It forwards these objects to an update handler which decides if and how to perform the rolling upgrade.
## How Does Change Detection Work?

## Requirements for rolling upgrade
Reloader watches changes in `ConfigMaps` and `Secrets` data. As soon as it detects a change, it forwards these objects to an update handler which decides if and how to perform the rolling upgrade.

## Requirements for Rolling Upgrade

To perform a rolling upgrade, a `deployment`, `daemonset` or `statefulset` must have

- support for rolling upgrade strategy
- specific annotation for `configmaps` or `secrets`
- specific annotation for `ConfigMaps` or `Secrets`

The annotation value is comma separated list of `configmaps` or `secrets`. If a change is detected in data of these `configmaps` or `secrets`, Reloader will perform rolling upgrades on their associated `deployments`, `daemonsets` or `statefulsets`.
The annotation value is a comma separated list of `ConfigMaps` or `Secrets`. If a change is detected in the data of these `ConfigMaps` or `Secrets`, Reloader will perform rolling upgrades on their associated `deployments`, `daemonsets` or `statefulsets`.

### Annotation for Configmap
### Annotation for ConfigMap

For a `Deployment` called `foo` that has a `ConfigMap` called `foo`, add this annotation* to your `Deployment`, where the default annotation can be changed with the `--configmap-annotation` flag:
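The referenced annotation looks roughly like this on the `Deployment` (a sketch using Reloader's default annotation key):

```yaml
kind: Deployment
metadata:
  name: foo
  annotations:
    configmap.reloader.stakater.com/reload: "foo"
```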
@@ -37,21 +48,21 @@ metadata:

The above mentioned annotations also work for `Daemonsets`, `Statefulsets` and `Rollouts`

## How Rolling upgrade works?
## How Does Rolling Upgrade Work?

When Reloader detects changes in configmap. It gets two objects of configmap. First object is an old configmap object which has a state before the latest change. Second object is new configmap object which contains latest changes. Reloader compares both objects and see whether any change in data occurred or not. If Reloader finds any change in new configmap object, only then, it moves forward with rolling upgrade.
When Reloader detects changes in a `ConfigMap`, it gets two objects of that `ConfigMap`: an old `ConfigMap` object which has the state before the latest change, and a new `ConfigMap` object which contains the latest changes. Reloader compares both objects to see whether any change in data occurred. Only if Reloader finds a change in the new `ConfigMap` object does it move forward with the rolling upgrade.

After that, Reloader gets the list of all `deployments`, `daemonsets` and `statefulset` and looks for above mentioned annotation for configmap. If the annotation value contains the configmap name, it then looks for an environment variable which can contain the configmap or secret data change hash.
After that, Reloader gets the list of all `deployments`, `daemonsets` and `statefulsets` and looks for the above mentioned annotation for the `ConfigMap`. If the annotation value contains the `ConfigMap` name, it then looks for an environment variable which can contain the `ConfigMap` or `Secret` data change hash.

### Environment variable for Configmap
### Environment Variable for ConfigMap

If the `ConfigMap` name is `foo`, then

```yaml
STAKATER_FOO_CONFIGMAP
```

### Environment variable for Secret
### Environment Variable for Secret

If the Secret name is foo then

@@ -59,11 +70,11 @@ If Secret name is foo then
STAKATER_FOO_SECRET
```

If the environment variable is found then it gets its value and compares it with new configmap hash value. If old value in environment variable is different from new hash value then Reloader updates the environment variable. If the environment variable does not exist then it creates a new environment variable with latest hash value from configmap and updates the relevant `deployment`, `daemonset` or `statefulset`
If the environment variable is found, Reloader gets its value and compares it with the new `ConfigMap` hash value. If the old value in the environment variable differs from the new hash value, Reloader updates the environment variable. If the environment variable does not exist, it creates a new environment variable with the latest hash value from the `ConfigMap` and updates the relevant `deployment`, `daemonset` or `statefulset`.

Note: Rolling upgrade also works in the same way for secrets.
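Putting that together, after a reload the patched container spec ends up carrying something like the following (a sketch; the value shown is just an illustrative SHA1 hash):

```yaml
containers:
  - name: foo
    env:
      - name: STAKATER_FOO_CONFIGMAP
        value: "da39a3ee5e6b4b0d3255bfef95601890afd80709"  # illustrative SHA1
```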
### Hash value Computation
### Hash Value Computation

Reloader uses SHA1 to compute the hash value. SHA1 is used because it is efficient and less prone to collision.

@@ -77,6 +88,6 @@ helm --namespace {replace this with namespace name} template . > reloader.yaml

The output file can then be used to deploy Reloader in a specific namespace.

## Compatibility with helm install and upgrade
## Compatibility With Helm Install and Upgrade

Reloader has no impact on helm deployment cycle. Reloader only injects an environment variable in `deployment`, `daemonset` or `statefulset`. The environment variable contains the SHA1 value of configmap's or secret's data. So if a deployment is created using Helm and Reloader updates the deployment, then next time you upgrade the helm release, Reloader will do nothing except changing that environment variable value in `deployment` , `daemonset` or `statefulset`.
Reloader has no impact on the Helm deployment cycle. Reloader only injects an environment variable into the `deployment`, `daemonset` or `statefulset`. The environment variable contains the SHA1 value of the `ConfigMap`'s or `Secret`'s data. So if a deployment is created using Helm and Reloader updates the deployment, then the next time you upgrade the Helm release, Reloader will do nothing except change that environment variable value in the `deployment`, `daemonset` or `statefulset`.
@@ -1,12 +1,11 @@

# Reloader vs ConfigmapController

Reloader is inspired from [Configmapcontroller](https://github.com/fabric8io/configmapcontroller) but there are many ways in which it differs from configmapController. Below is the small comparison between these two controllers.
Reloader is inspired by [`configmapcontroller`](https://github.com/fabric8io/configmapcontroller) but there are many ways in which it differs from `configmapcontroller`. Below is a small comparison between these two controllers.

| Reloader | Configmap |
|---|---|
| Reloader can watch both `secrets` and `configmaps`. | ConfigmapController can only watch changes in `configmaps`. It cannot detect changes in other resources like `secrets`. |
| Reloader can perform rolling upgrades on `deployments` as well as on `statefulsets` and `daemonsets` | ConfigmapController can only perform rolling upgrades on `deployments`. It currently does not support rolling upgrades on `statefulsets` and `daemonsets` |
| Reloader provides both unit test cases and end to end integration test cases for future updates. So one can make sure that new changes do not break any old functionality. | Currently there are not any unit test cases or end to end integration test cases in configmap controller. It adds difficulties for any additional updates in configmap controller and one cannot know for sure whether new changes break any old functionality or not. |
| Reloader uses SHA1 to encode the change in configmap or secret. It then saves the SHA1 value in `STAKATER_FOO_CONFIGMAP` or `STAKATER_FOO_SECRET` environment variable depending upon where the change has happened. The use of SHA1 provides a concise 40 characters encoded value that is very less prone to collision. | Configmap controller uses `FABRICB_FOO_REVISION` environment variable to store any change in configmap controller. It does not encode it or convert it in suitable hash value to avoid data pollution in deployment. |
| Reloader allows you to customize your own annotation (for both Secrets and Configmaps) using command line flags | Configmap controller restricts you to only their provided annotation |
| Reloader | ConfigMap |
|---|---|
| Reloader can watch both `Secrets` and `ConfigMaps`. | `configmapcontroller` can only watch changes in `ConfigMaps`. It cannot detect changes in other resources like `Secrets`. |
| Reloader can perform rolling upgrades on `deployments` as well as on `statefulsets` and `daemonsets` | `configmapcontroller` can only perform rolling upgrades on `deployments`. It currently does not support rolling upgrades on `statefulsets` and `daemonsets` |
| Reloader provides both unit test cases and end to end integration test cases for future updates. So one can make sure that new changes do not break any old functionality. | Currently there are not any unit test cases or end to end integration test cases in `configmap-controller`. It adds difficulties for any additional updates in `configmap-controller` and one cannot know for sure whether new changes break any old functionality or not. |
| Reloader uses SHA1 to encode the change in a `ConfigMap` or `Secret`. It then saves the SHA1 value in the `STAKATER_FOO_CONFIGMAP` or `STAKATER_FOO_SECRET` environment variable depending upon where the change has happened. The use of SHA1 provides a concise 40-character encoded value that is very unlikely to collide. | `configmap-controller` uses the `FABRICB_FOO_REVISION` environment variable to store any change. It does not encode it or convert it into a suitable hash value to avoid data pollution in the deployment. |
| Reloader allows you to customize your own annotation (for both `Secrets` and `ConfigMaps`) using command line flags | `configmap-controller` restricts you to only their provided annotation |
@@ -4,9 +4,9 @@ Reloader and k8s-trigger-controller are both built for same purpose. So there ar

## Similarities

- Both controllers support change detection in configmap and secrets
- Both controllers support change detection in `ConfigMaps` and `Secrets`
- Both controllers support deployment `rollout`
- Both controllers use SHA1 for hashing
- The Reloader controller uses SHA1 for hashing
- Both controllers have end to end as well as unit test cases.

## Differences
@@ -21,7 +21,7 @@ Reloader and k8s-trigger-controller are both built for same purpose. So there ar

Reloader supports deployment `rollout` as well as `daemonsets` and `statefulsets` `rollout`.

### Hashing usage
### Hashing Usage

#### `k8s-trigger-controller`

@@ -3,12 +3,12 @@
Below are the steps to use Reloader with Sealed Secrets:

1. Download and install the kubeseal client from [here](https://github.com/bitnami-labs/sealed-secrets)
1. Install the controller for sealed secrets
1. Install the controller for Sealed Secrets
1. Fetch the encryption certificate
1. Encrypt the secret
1. Apply the secret
1. Install the tool which uses that sealed secret
1. Install the tool which uses that Sealed Secret
1. Install Reloader
1. Once everything is set up, update the original secret at the client and encrypt it with kubeseal to see Reloader working
1. Apply the updated sealed secret
1. Apply the updated Sealed Secret (see the sketch after this list)
1. Reloader will restart the pod to use that updated secret
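For orientation, the object applied in the "Apply the updated Sealed Secret" step is a `SealedSecret` resource along these lines (a sketch; the name and ciphertext are placeholders):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
  namespace: default
spec:
  encryptedData:
    # kubeseal produces this ciphertext from the plain Secret
    password: AgBy3i4OJSWK...
```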
@@ -2,7 +2,7 @@

Reloader's working can be verified in three ways.

## Verify from logs
## Verify From Logs

Check the logs of Reloader and verify that you can see logs like the ones below; if you are able to find these logs, then Reloader is working.

@@ -14,11 +14,11 @@ Updated test-resource of type Deployment in namespace: test-reloader

Below are the details that explain these logs:

### test-object
### `test-object`

`test-object` is the name of a `secret` or a `deployment` in which change has been detected.
`test-object` is the name of a `secret` or a `configmap` in which a change has been detected.

### SECRET
### `SECRET`

`SECRET` is the type of `test-object`. It can either be `SECRET` or `CONFIGMAP`

@@ -30,11 +30,11 @@ Below are the details that explain these logs:

`test-resource` is the name of the resource which is going to be updated

### Deployment
### `Deployment`

`Deployment` is the type of `test-resource`. It can either be a `Deployment`, `Daemonset` or `Statefulset`

## Verify by checking the age of Pod
## Verify by Checking the Age of Pod

A pod's age can tell whether Reloader is working correctly or not. If you know that a change in a `secret` or `configmap` has occurred, then check the relevant Pod's age immediately. It should have been newly created a few moments ago.

@@ -42,7 +42,7 @@ A pod's age can tell whether Reloader is working correctly or not. If you know t

`kubernetes dashboard` can be used to verify the working of Reloader. After a change in a `secret` or `configmap`, check the relevant Pod's age from the dashboard. It should have been newly created a few moments ago.

### Verify from command line
### Verify from Command Line

After a change in a `secret` or `configmap`, run the below-mentioned command and verify that the pod is newly created.

@@ -50,7 +50,7 @@ After a change in `secret` or `configmap`. Run the below-mentioned command and v
kubectl get pods <pod name> -n <namespace name>
```

## Verify from metrics
## Verify From Metrics

Some metrics are exported to the Prometheus endpoint `/metrics` on port `9090`.

@@ -60,3 +60,16 @@ When Reloader is unable to reload, `reloader_reload_executed_total{success="fals
reloader_reload_executed_total{success="false"} 15
reloader_reload_executed_total{success="true"} 12
```

### Reloads by Namespace

Reloader can also export a metric to show the number of reloads by namespace. This feature is disabled by default, as it can lead to high cardinality in clusters with many namespaces.

The metric will have both `success` and `namespace` as attributes:

```text
reloader_reload_executed_total{success="false", namespace="some-namespace"} 2
reloader_reload_executed_total{success="true", namespace="some-namespace"} 1
```

To opt in, set the environment variable `METRICS_COUNT_BY_NAMESPACE` to `enabled` or set the Helm value `reloader.enableMetricsByNamespace` to `true`.
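Via the chart, that opt-in is a one-line values change; the raw environment variable route is sketched in the comment (the `env.open` path follows the chart's extra-environment-variables convention and is an assumption here):

```yaml
reloader:
  enableMetricsByNamespace: true
  # or, equivalently, as a raw container environment variable:
  # env:
  #   open:
  #     METRICS_COUNT_BY_NAMESPACE: enabled
```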
@@ -1,8 +0,0 @@
# Features

These are the key features of Reloader:

1. Restart pod in a `deployment` on change in linked/related configmap's or secret's
1. Restart pod in a `daemonset` on change in linked/related configmap's or secret's
1. Restart pod in a `statefulset` on change in linked/related configmap's or secret's
1. Restart pod in a `rollout` on change in linked/related configmap's or secret's
docs/index.md (new file, 26 lines)
@@ -0,0 +1,26 @@
# Introduction

Reloader can watch changes in `ConfigMap` and `Secret` and do rolling upgrades on Pods with their associated `DeploymentConfigs`, `Deployments`, `Daemonsets`, `Statefulsets` and `Rollouts`.

These are the key features of Reloader:

1. Restart pod in a `deployment` on change in linked/related `ConfigMaps` or `Secrets`
1. Restart pod in a `daemonset` on change in linked/related `ConfigMaps` or `Secrets`
1. Restart pod in a `statefulset` on change in linked/related `ConfigMaps` or `Secrets`
1. Restart pod in a `rollout` on change in linked/related `ConfigMaps` or `Secrets`

This site contains more details on how Reloader works. For an overview, please see the repository's [README file](https://github.com/stakater/Reloader/blob/master/README.md).

---

<div align="center">

[](https://github.com/sponsors/stakater?utm_source=docs&utm_medium=footer&utm_campaign=reloader)

<p>
Your support funds maintenance, security updates, and new features for Reloader, plus continued investment in other open source tools.
</p>

</div>

---
go.mod (110 lines changed)
@@ -1,80 +1,82 @@
module github.com/stakater/Reloader

go 1.21
go 1.26

require (
    github.com/argoproj/argo-rollouts v1.6.0
    github.com/openshift/api v3.9.0+incompatible
    github.com/openshift/client-go v0.0.0-20231024221206-506d798bc61c
    github.com/parnurzeal/gorequest v0.2.16
    github.com/prometheus/client_golang v1.17.0
    github.com/argoproj/argo-rollouts v1.8.3
    github.com/openshift/api v0.0.0-20260102143802-d2ec16864f86
    github.com/openshift/client-go v0.0.0-20251223102348-558b0eef16bc
    github.com/parnurzeal/gorequest v0.3.0
    github.com/prometheus/client_golang v1.23.2
    github.com/sirupsen/logrus v1.9.3
    github.com/spf13/cobra v1.7.0
    k8s.io/api v0.28.3
    k8s.io/apimachinery v0.28.3
    k8s.io/client-go v0.28.3
    k8s.io/kubectl v0.28.3
    k8s.io/utils v0.0.0-20230726121419-3b25d923346b
    github.com/spf13/cobra v1.10.2
    github.com/stretchr/testify v1.11.1
    k8s.io/api v0.35.0
    k8s.io/apimachinery v0.35.0
    k8s.io/client-go v0.35.0
    k8s.io/kubectl v0.35.0
    sigs.k8s.io/secrets-store-csi-driver v1.5.5
)

require (
    github.com/beorn7/perks v1.0.1 // indirect
    github.com/cespare/xxhash/v2 v2.2.0 // indirect
    github.com/davecgh/go-spew v1.1.1 // indirect
    github.com/elazarl/goproxy v0.0.0-20221015165544-a0805db90819 // indirect
    github.com/emicklei/go-restful/v3 v3.10.1 // indirect
    github.com/evanphx/json-patch v5.6.0+incompatible // indirect
    github.com/go-logr/logr v1.2.4 // indirect
    github.com/go-openapi/jsonpointer v0.19.6 // indirect
    github.com/go-openapi/jsonreference v0.20.2 // indirect
    github.com/go-openapi/swag v0.22.3 // indirect
    github.com/cespare/xxhash/v2 v2.3.0 // indirect
    github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
    github.com/elazarl/goproxy v0.0.0-20240726154733-8b0c20506380 // indirect
    github.com/emicklei/go-restful/v3 v3.12.2 // indirect
    github.com/fxamacker/cbor/v2 v2.9.0 // indirect
    github.com/go-logr/logr v1.4.3 // indirect
    github.com/go-openapi/jsonpointer v0.21.1 // indirect
    github.com/go-openapi/jsonreference v0.21.0 // indirect
    github.com/go-openapi/swag v0.23.1 // indirect
    github.com/gogo/protobuf v1.3.2 // indirect
    github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
    github.com/golang/protobuf v1.5.3 // indirect
    github.com/google/gnostic-models v0.6.8 // indirect
    github.com/google/go-cmp v0.5.9 // indirect
    github.com/google/gofuzz v1.2.0 // indirect
    github.com/google/uuid v1.3.0 // indirect
    github.com/imdario/mergo v0.3.13 // indirect
    github.com/google/gnostic-models v0.7.0 // indirect
    github.com/google/go-cmp v0.7.0 // indirect
    github.com/google/uuid v1.6.0 // indirect
    github.com/inconshreveable/mousetrap v1.1.0 // indirect
    github.com/josharian/intern v1.0.0 // indirect
    github.com/json-iterator/go v1.1.12 // indirect
    github.com/mailru/easyjson v0.7.7 // indirect
    github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
    github.com/kylelemons/godebug v1.1.0 // indirect
    github.com/mailru/easyjson v0.9.0 // indirect
    github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
    github.com/modern-go/reflect2 v1.0.2 // indirect
    github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
    github.com/moul/http2curl v1.0.0 // indirect
    github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
    github.com/pkg/errors v0.9.1 // indirect
    github.com/prometheus/client_model v0.5.0 // indirect
    github.com/prometheus/common v0.45.0 // indirect
    github.com/prometheus/procfs v0.11.1 // indirect
    github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
    github.com/prometheus/client_model v0.6.2 // indirect
    github.com/prometheus/common v0.66.1 // indirect
    github.com/prometheus/procfs v0.16.1 // indirect
    github.com/smartystreets/goconvey v1.7.2 // indirect
    github.com/spf13/pflag v1.0.5 // indirect
    golang.org/x/net v0.17.0 // indirect
    golang.org/x/oauth2 v0.12.0 // indirect
    golang.org/x/sys v0.13.0 // indirect
    golang.org/x/term v0.13.0 // indirect
    golang.org/x/text v0.13.0 // indirect
    golang.org/x/time v0.3.0 // indirect
    google.golang.org/appengine v1.6.7 // indirect
    google.golang.org/protobuf v1.31.0 // indirect
    github.com/spf13/pflag v1.0.9 // indirect
    github.com/x448/float16 v0.8.4 // indirect
    go.yaml.in/yaml/v2 v2.4.3 // indirect
    go.yaml.in/yaml/v3 v3.0.4 // indirect
    golang.org/x/net v0.47.0 // indirect
    golang.org/x/oauth2 v0.30.0 // indirect
    golang.org/x/sys v0.39.0 // indirect
    golang.org/x/term v0.38.0 // indirect
    golang.org/x/text v0.32.0 // indirect
    golang.org/x/time v0.11.0 // indirect
    google.golang.org/protobuf v1.36.8 // indirect
    gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect
    gopkg.in/inf.v0 v0.9.1 // indirect
    gopkg.in/yaml.v2 v2.4.0 // indirect
    gopkg.in/yaml.v3 v3.0.1 // indirect
    k8s.io/klog/v2 v2.100.1 // indirect
    k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
    moul.io/http2curl v1.0.0 // indirect
    sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
    sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
    sigs.k8s.io/yaml v1.3.0 // indirect
    k8s.io/klog/v2 v2.130.1 // indirect
    k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 // indirect
    k8s.io/utils v0.0.0-20251222233032-718f0e51e6d2 // indirect
    sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
    sigs.k8s.io/randfill v1.0.0 // indirect
    sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
    sigs.k8s.io/yaml v1.6.0 // indirect
)

// Replacements for argo-rollouts
replace (
    github.com/go-check/check => github.com/go-check/check v0.0.0-20201130134442-10cb98267c6c
    k8s.io/api v0.0.0 => k8s.io/api v0.28.3
    k8s.io/apimachinery v0.0.0 => k8s.io/apimachinery v0.28.3
    k8s.io/client-go v0.0.0 => k8s.io/client-go v0.27.4
    k8s.io/api v0.0.0 => k8s.io/api v0.35.0
    k8s.io/apimachinery v0.0.0 => k8s.io/apimachinery v0.35.0
    k8s.io/client-go v0.0.0 => k8s.io/client-go v0.35.0
    k8s.io/cloud-provider v0.0.0 => k8s.io/cloud-provider v0.24.2
    k8s.io/controller-manager v0.0.0 => k8s.io/controller-manager v0.24.2
    k8s.io/cri-api v0.0.0 => k8s.io/cri-api v0.20.5-rc.0
@@ -83,7 +85,7 @@ replace (
    k8s.io/kube-controller-manager v0.0.0 => k8s.io/kube-controller-manager v0.24.2
    k8s.io/kube-proxy v0.0.0 => k8s.io/kube-proxy v0.24.2
    k8s.io/kube-scheduler v0.0.0 => k8s.io/kube-scheduler v0.24.2
    k8s.io/kubectl v0.0.0 => k8s.io/kubectl v0.27.1
    k8s.io/kubectl v0.0.0 => k8s.io/kubectl v0.35.0
    k8s.io/kubelet v0.0.0 => k8s.io/kubelet v0.24.2
    k8s.io/legacy-cloud-providers v0.0.0 => k8s.io/legacy-cloud-providers v0.24.2
    k8s.io/mount-utils v0.0.0 => k8s.io/mount-utils v0.20.5-rc.0
261
go.sum
261
go.sum
@@ -1,56 +1,45 @@
github.com/argoproj/argo-rollouts v1.6.0 h1:u6DfVqAdi4UaDLezd8Yz0fJUlby9tTw20MWu2VCP/So=
github.com/argoproj/argo-rollouts v1.6.0/go.mod h1:0lpA02iNoyDB/N/QLrmBRaM5AMAzFp2qoYIvwhLozNY=
github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/argoproj/argo-rollouts v1.8.3 h1:blbtQva4IK9r6gFh+dWkCrLnFdPOWiv9ubQYu36qeaA=
github.com/argoproj/argo-rollouts v1.8.3/go.mod h1:kCAUvIfMGfOyVf3lvQbBt0nqQn4Pd+zB5/YwKv+UBa8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/elazarl/goproxy v0.0.0-20221015165544-a0805db90819 h1:RIB4cRk+lBqKK3Oy0r2gRX4ui7tuhiZq2SuTtTCi0/0=
github.com/elazarl/goproxy v0.0.0-20221015165544-a0805db90819/go.mod h1:Ro8st/ElPeALwNFlcTpWmkr6IoMFfkjXAvTHpevnDsM=
github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2/go.mod h1:gNh8nYJoAm43RfaxurUnxr+N1PwuFV3ZMl/efxlIlY8=
github.com/emicklei/go-restful/v3 v3.10.1 h1:rc42Y5YTp7Am7CS630D7JmhRjq4UlEUuEKfrDac4bSQ=
github.com/emicklei/go-restful/v3 v3.10.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3 h1:yMBqmnQ0gyZvEb/+KzuWZOXgllrXT4SADYbvDaXHv/g=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/elazarl/goproxy v0.0.0-20240726154733-8b0c20506380 h1:1NyRx2f4W4WBRyg0Kys0ZbaNmDDzZ2R/C7DTi+bbsJ0=
github.com/elazarl/goproxy v0.0.0-20240726154733-8b0c20506380/go.mod h1:thX175TtLTzLj3p7N/Q9IiKZ7NF+p72cvL91emV0hzo=
github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU=
github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-openapi/jsonpointer v0.21.1 h1:whnzv/pNXtK2FbX/W9yJfRmE2gsmkfahjMKB0fZvcic=
github.com/go-openapi/jsonpointer v0.21.1/go.mod h1:50I1STOfbY1ycR8jGz8DaMeLCdXiI6aDteEdRNNzpdk=
github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ=
github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4=
github.com/go-openapi/swag v0.23.1 h1:lpsStH0n2ittzTnbaSloVZLuB5+fvSY/+hnagBjSNZU=
github.com/go-openapi/swag v0.23.1/go.mod h1:STZs8TbRvEQQKUA+JZNAm3EWlgaOBGpyFDqQnDHMef0=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 h1:K6RDEckDVWvDI9JAJYCmNdQXq6neHJOYx3V6jnqNEec=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk=
github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
@@ -61,49 +50,52 @@ github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 h1:jWpvCLoY8Z/e3VKvlsiIGKtc+UG6U5vzxaoagmhXfyg=
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0/go.mod h1:QUyp042oQthUoa9bqDv0ER0wrtXnBruoNd7aNjkbP+k=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4=
github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/moul/http2curl v1.0.0 h1:dRMWoAtb+ePxMlLkrCbAqh4TlPHXvoGUSQ323/9Zahs=
github.com/moul/http2curl v1.0.0/go.mod h1:8UbvGypXm98wA/IqH45anm5Y2Z6ep6O31QGOAZ3H0fQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/onsi/ginkgo/v2 v2.9.4 h1:xR7vG4IXt5RWx6FfIjyAtsoMAtnc3C/rFXBBd2AjZwE=
github.com/onsi/ginkgo/v2 v2.9.4/go.mod h1:gCQYp2Q+kSoIj7ykSVb9nskRSsR6PUj4AiLywzIhbKM=
github.com/onsi/gomega v1.27.6 h1:ENqfyGeS5AX/rlXDd/ETokDz93u0YufY1Pgxuy/PvWE=
github.com/onsi/gomega v1.27.6/go.mod h1:PIQNjfQwkP3aQAH7lf7j87O/5FiNr+ZR8+ipb+qQlhg=
github.com/openshift/api v3.9.0+incompatible h1:fJ/KsefYuZAjmrr3+5U9yZIZbTOpVkDDLDLFresAeYs=
github.com/openshift/api v3.9.0+incompatible/go.mod h1:dh9o4Fs58gpFXGSYfnVxGR9PnV53I8TW84pQaJDdGiY=
github.com/openshift/client-go v0.0.0-20231024221206-506d798bc61c h1:xfag+wccUqc9EdrWsnprD6x5KG2WE+iKGFfFELCwwRA=
github.com/openshift/client-go v0.0.0-20231024221206-506d798bc61c/go.mod h1:3BkYp+FtKD2TypMD0nTPkVsxUaY4fJPLEMFMlOLtrJM=
github.com/parnurzeal/gorequest v0.2.16 h1:T/5x+/4BT+nj+3eSknXmCTnEVGSzFzPGdpqmUVVZXHQ=
github.com/parnurzeal/gorequest v0.2.16/go.mod h1:3Kh2QUMJoqw3icWAecsyzkpY7UzRfDhbRdTjtNwNiUE=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
github.com/openshift/api v0.0.0-20260102143802-d2ec16864f86 h1:Vsqg+WqSA91LjrwK5lzkSCjztK/B+T8MPKI3MIALx3w=
github.com/openshift/api v0.0.0-20260102143802-d2ec16864f86/go.mod h1:d5uzF0YN2nQQFA0jIEWzzOZ+edmo6wzlGLvx5Fhz4uY=
github.com/openshift/client-go v0.0.0-20251223102348-558b0eef16bc h1:nIlRaJfr/yGjPV15MNF5eVHLAGyXFjcUzO+hXeWDDk8=
github.com/openshift/client-go v0.0.0-20251223102348-558b0eef16bc/go.mod h1:cs9BwTu96sm2vQvy7r9rOiltgu90M6ju2qIHFG9WU+o=
github.com/parnurzeal/gorequest v0.3.0 h1:SoFyqCDC9COr1xuS6VA8fC8RU7XyrJZN2ona1kEX7FI=
github.com/parnurzeal/gorequest v0.3.0/go.mod h1:3Kh2QUMJoqw3icWAecsyzkpY7UzRfDhbRdTjtNwNiUE=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q=
github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY=
github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI=
github.com/prometheus/common v0.45.0 h1:2BGz0eBc2hdMDLnO/8n0jeB3oPrt2D08CekT0lneoxM=
github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY=
github.com/prometheus/procfs v0.11.1 h1:xRC8Iq1yyca5ypa9n1EZnWZkt7dwcoRPQwX/5gwaUuI=
github.com/prometheus/procfs v0.11.1/go.mod h1:eesXgaPo1q7lBpVMoMy0ZOFTth9hBn4W/y0/p/ScXhY=
github.com/rogpeppe/go-charset v0.0.0-20180617210344-2471d30d28b4/go.mod h1:qgYeAmZ5ZIpBWTGllZSQnw97Dj+woV0toclVaRGI8pc=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
@@ -111,103 +103,106 @@ github.com/smartystreets/assertions v1.2.0 h1:42S6lae5dvLc7BrLu/0ugRtcFVjoJNMC/N
github.com/smartystreets/assertions v1.2.0/go.mod h1:tcbTF8ujkAEcZ8TElKY+i30BzYlVhC/LOxJk7iOWnoo=
github.com/smartystreets/goconvey v1.7.2 h1:9RBaZCeXEQ3UselpuwUQHltGVXvdwm6cv1hgR6gDIPg=
github.com/smartystreets/goconvey v1.7.2/go.mod h1:Vw0tHAZW6lzCRk3xgdin6fKYcG+G3Pg9vgXWeJpQFMM=
github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.12.0 h1:smVPGxink+n1ZI5pkQa8y6fZT0RW0MgCO5bFpepy4B4=
golang.org/x/oauth2 v0.12.0/go.mod h1:A74bZ3aGXgCY0qaIC9Ahg6Lglin4AMAco8cIv9baba4=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.8.0 h1:vSDcovVPld282ceKgDimkRSC8kpaH1dgyc9UMzlt84Y=
golang.org/x/tools v0.8.0/go.mod h1:JxBZ99ISMI5ViVkT1tr6tdNmXeTrcpVSD3vZ1RsRdN4=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.36.8 h1:xHScyCOEuuwZEc6UtSOvPbAT4zRh0xcNRYekJwfqyMc=
google.golang.org/protobuf v1.36.8/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo=
gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.28.3 h1:Gj1HtbSdB4P08C8rs9AR94MfSGpRhJgsS+GF9V26xMM=
k8s.io/api v0.28.3/go.mod h1:MRCV/jr1dW87/qJnZ57U5Pak65LGmQVkKTzf3AtKFHc=
k8s.io/apimachinery v0.28.3 h1:B1wYx8txOaCQG0HmYF6nbpU8dg6HvA06x5tEffvOe7A=
k8s.io/apimachinery v0.28.3/go.mod h1:uQTKmIqs+rAYaq+DFaoD2X7pcjLOqbQX2AOiO0nIpb8=
k8s.io/client-go v0.28.3 h1:2OqNb72ZuTZPKCl+4gTKvqao0AMOl9f3o2ijbAj3LI4=
k8s.io/client-go v0.28.3/go.mod h1:LTykbBp9gsA7SwqirlCXBWtK0guzfhpoW4qSm7i9dxo=
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5OhxCKlKJy0sHc+PcDwFB24dQ=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM=
k8s.io/kubectl v0.28.3 h1:H1Peu1O3EbN9zHkJCcvhiJ4NUj6lb88sGPO5wrWIM6k=
k8s.io/kubectl v0.28.3/go.mod h1:RDAudrth/2wQ3Sg46fbKKl4/g+XImzvbsSRZdP2RiyE=
k8s.io/utils v0.0.0-20230726121419-3b25d923346b h1:sgn3ZU783SCgtaSJjpcVVlRqd6GSnlTLKgpAAttJvpI=
k8s.io/utils v0.0.0-20230726121419-3b25d923346b/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
moul.io/http2curl v1.0.0 h1:6XwpyZOYsgZJrU8exnG87ncVkU1FVCcTRpwzOkTDUi8=
moul.io/http2curl v1.0.0/go.mod h1:f6cULg+e4Md/oW1cYmwW4IWQOVl2lGbmCNGOHvzX2kE=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
k8s.io/api v0.35.0 h1:iBAU5LTyBI9vw3L5glmat1njFK34srdLmktWwLTprlY=
k8s.io/api v0.35.0/go.mod h1:AQ0SNTzm4ZAczM03QH42c7l3bih1TbAXYo0DkF8ktnA=
k8s.io/apimachinery v0.35.0 h1:Z2L3IHvPVv/MJ7xRxHEtk6GoJElaAqDCCU0S6ncYok8=
k8s.io/apimachinery v0.35.0/go.mod h1:jQCgFZFR1F4Ik7hvr2g84RTJSZegBc8yHgFWKn//hns=
k8s.io/client-go v0.35.0 h1:IAW0ifFbfQQwQmga0UdoH0yvdqrbwMdq9vIFEhRpxBE=
k8s.io/client-go v0.35.0/go.mod h1:q2E5AAyqcbeLGPdoRB+Nxe3KYTfPce1Dnu1myQdqz9o=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 h1:Y3gxNAuB0OBLImH611+UDZcmKS3g6CthxToOb37KgwE=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ=
k8s.io/kubectl v0.35.0 h1:cL/wJKHDe8E8+rP3G7avnymcMg6bH6JEcR5w5uo06wc=
k8s.io/kubectl v0.35.0/go.mod h1:VR5/TSkYyxZwrRwY5I5dDq6l5KXmiCb+9w8IKplk3Qo=
k8s.io/utils v0.0.0-20251222233032-718f0e51e6d2 h1:OfgiEo21hGiwx1oJUU5MpEaeOEg6coWndBkZF/lkFuE=
k8s.io/utils v0.0.0-20251222233032-718f0e51e6d2/go.mod h1:xDxuJ0whA3d0I4mf/C4ppKHxXynQ+fxnkmQH0vTHnuk=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/secrets-store-csi-driver v1.5.5 h1:LJDpDL5TILhlP68nGvtGSlJFxSDgAD2m148NT0Ts7os=
sigs.k8s.io/secrets-store-csi-driver v1.5.5/go.mod h1:i2WqLicYH00hrTG3JAzICPMF4HL4KMEORlDt9UQoZLk=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=
@@ -9,6 +9,15 @@ import (
	"github.com/sirupsen/logrus"
)

type AlertSink string

const (
	AlertSinkSlack      AlertSink = "slack"
	AlertSinkTeams      AlertSink = "teams"
	AlertSinkGoogleChat AlertSink = "gchat"
	AlertSinkRaw        AlertSink = "raw"
)

// SendWebhookAlert sends an alert message to the configured webhook service
func SendWebhookAlert(msg string) {
	webhook_url, ok := os.LookupEnv("ALERT_WEBHOOK_URL")
@@ -31,10 +40,15 @@ func SendWebhookAlert(msg string) {
		msg = fmt.Sprintf("%s : %s", alert_additional_info, msg)
	}

	if alert_sink == "slack" {
	switch AlertSink(alert_sink) {
	case AlertSinkSlack:
		sendSlackAlert(webhook_url, webhook_proxy, msg)
	} else {
		msg = strings.Replace(msg, "*", "", -1)
	case AlertSinkTeams:
		sendTeamsAlert(webhook_url, webhook_proxy, msg)
	case AlertSinkGoogleChat:
		sendGoogleChatAlert(webhook_url, webhook_proxy, msg)
	default:
		msg = strings.ReplaceAll(msg, "*", "")
		sendRawWebhookAlert(webhook_url, webhook_proxy, msg)
	}
}
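For orientation, a minimal sketch of exercising the dispatch above from inside the package. ALERT_WEBHOOK_URL is the variable the function reads; the name ALERT_SINK is inferred from the alert_sink local and may differ in the actual source, and the URL and message are placeholders:

	// Sketch only: drives the switch above. ALERT_SINK is an assumed
	// variable name; the webhook URL is a placeholder, not a real endpoint.
	os.Setenv("ALERT_WEBHOOK_URL", "https://chat.googleapis.com/v1/spaces/EXAMPLE/messages")
	os.Setenv("ALERT_SINK", string(AlertSinkGoogleChat))
	SendWebhookAlert("Changes detected in 'my-config'; updated 'my-deployment'")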
@@ -73,6 +87,52 @@ func sendSlackAlert(webhookUrl string, proxy string, msg string) []error {
	return nil
}

// sendTeamsAlert sends an alert to a Microsoft Teams webhook
func sendTeamsAlert(webhookUrl string, proxy string, msg string) []error {
	attachment := Attachment{
		Text: msg,
	}

	request := gorequest.New().Proxy(proxy)
	resp, _, err := request.
		Post(webhookUrl).
		RedirectPolicy(redirectPolicy).
		Send(attachment).
		End()

	if err != nil {
		return err
	}
	if resp.StatusCode != 200 {
		return []error{fmt.Errorf("error sending msg. status: %v", resp.Status)}
	}

	return nil
}

// sendGoogleChatAlert sends an alert to a Google Chat webhook
func sendGoogleChatAlert(webhookUrl string, proxy string, msg string) []error {
	payload := map[string]interface{}{
		"text": msg,
	}

	request := gorequest.New().Proxy(proxy)
	resp, _, err := request.
		Post(webhookUrl).
		RedirectPolicy(redirectPolicy).
		Send(payload).
		End()

	if err != nil {
		return err
	}
	if resp.StatusCode != 200 {
		return []error{fmt.Errorf("error sending msg. status: %v", resp.Status)}
	}

	return nil
}

// sendRawWebhookAlert sends the alert to a webhook service as plain text
func sendRawWebhookAlert(webhookUrl string, proxy string, msg string) []error {
	request := gorequest.New().Proxy(proxy)
@@ -2,19 +2,28 @@ package callbacks

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/sirupsen/logrus"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/pkg/kube"
	appsv1 "k8s.io/api/apps/v1"
	batchv1 "k8s.io/api/batch/v1"
	v1 "k8s.io/api/core/v1"
	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	patchtypes "k8s.io/apimachinery/pkg/types"

	"maps"

	argorolloutv1alpha1 "github.com/argoproj/argo-rollouts/pkg/apis/rollouts/v1alpha1"
	openshiftv1 "github.com/openshift/api/apps/v1"
)

// ItemFunc is a generic function to return a specific resource in the given namespace
type ItemFunc func(kube.Clients, string, string) (runtime.Object, error)

// ItemsFunc is a generic function to return a specific resource array in the given namespace
type ItemsFunc func(kube.Clients, string) []runtime.Object
@@ -30,6 +39,12 @@ type VolumesFunc func(runtime.Object) []v1.Volume

// UpdateFunc performs the resource update
type UpdateFunc func(kube.Clients, string, runtime.Object) error

// PatchFunc performs the resource patch
type PatchFunc func(kube.Clients, string, runtime.Object, patchtypes.PatchType, []byte) error

// PatchTemplatesFunc is a generic func to return strategic merge JSON patch templates
type PatchTemplatesFunc func() PatchTemplates

// AnnotationsFunc is a generic func to return annotations
type AnnotationsFunc func(runtime.Object) map[string]string

@@ -38,14 +53,42 @@ type PodAnnotationsFunc func(runtime.Object) map[string]string

// RollingUpgradeFuncs contains generic functions to perform rolling upgrade
type RollingUpgradeFuncs struct {
	ItemsFunc          ItemsFunc
	AnnotationsFunc    AnnotationsFunc
	PodAnnotationsFunc PodAnnotationsFunc
	ContainersFunc     ContainersFunc
	InitContainersFunc InitContainersFunc
	UpdateFunc         UpdateFunc
	VolumesFunc        VolumesFunc
	ResourceType       string
	ItemFunc               ItemFunc
	ItemsFunc              ItemsFunc
	AnnotationsFunc        AnnotationsFunc
	PodAnnotationsFunc     PodAnnotationsFunc
	ContainersFunc         ContainersFunc
	ContainerPatchPathFunc ContainersFunc
	InitContainersFunc     InitContainersFunc
	UpdateFunc             UpdateFunc
	PatchFunc              PatchFunc
	PatchTemplatesFunc     PatchTemplatesFunc
	VolumesFunc            VolumesFunc
	ResourceType           string
	SupportsPatch          bool
}

// PatchTemplates contains merge JSON patch templates
type PatchTemplates struct {
	AnnotationTemplate   string
	EnvVarTemplate       string
	DeleteEnvVarTemplate string
}
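To see how the widened struct is meant to be filled in, here is a hypothetical wiring for Deployments built from the callbacks in this file; GetDeploymentInitContainers and GetDeploymentVolumes are assumed to exist elsewhere in the file, and the literal Reloader actually builds in its handler package may differ:

	deploymentFuncs := RollingUpgradeFuncs{
		ItemFunc:           GetDeploymentItem,
		ItemsFunc:          GetDeploymentItems,
		AnnotationsFunc:    GetDeploymentAnnotations,
		PodAnnotationsFunc: GetDeploymentPodAnnotations,
		ContainersFunc:     GetDeploymentContainers,
		InitContainersFunc: GetDeploymentInitContainers, // assumed helper, not shown in these hunks
		UpdateFunc:         UpdateDeployment,
		PatchFunc:          PatchDeployment,
		PatchTemplatesFunc: GetPatchTemplates,
		VolumesFunc:        GetDeploymentVolumes, // assumed helper, not shown in these hunks
		ResourceType:       "Deployment",
		SupportsPatch:      true, // Deployments take the patch path; CronJob/Job below do not
	}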
// GetDeploymentItem returns the deployment in the given namespace
func GetDeploymentItem(clients kube.Clients, name string, namespace string) (runtime.Object, error) {
	deployment, err := clients.KubernetesClient.AppsV1().Deployments(namespace).Get(context.TODO(), name, meta_v1.GetOptions{})
	if err != nil {
		logrus.Errorf("Failed to get deployment %v", err)
		return nil, err
	}

	if deployment.Spec.Template.Annotations == nil {
		annotations := make(map[string]string)
		deployment.Spec.Template.Annotations = annotations
	}

	return deployment, nil
}

// GetDeploymentItems returns the deployments in the given namespace
@@ -58,9 +101,9 @@ func GetDeploymentItems(clients kube.Clients, namespace string) []runtime.Object
	items := make([]runtime.Object, len(deployments.Items))
	// Ensure we always have pod annotations to add to
	for i, v := range deployments.Items {
		if v.Spec.Template.ObjectMeta.Annotations == nil {
		if v.Spec.Template.Annotations == nil {
			annotations := make(map[string]string)
			deployments.Items[i].Spec.Template.ObjectMeta.Annotations = annotations
			deployments.Items[i].Spec.Template.Annotations = annotations
		}
		items[i] = &deployments.Items[i]
	}
@@ -68,6 +111,17 @@ func GetDeploymentItems(clients kube.Clients, namespace string) []runtime.Object
	return items
}

// GetCronJobItem returns the cronjob in the given namespace
func GetCronJobItem(clients kube.Clients, name string, namespace string) (runtime.Object, error) {
	cronjob, err := clients.KubernetesClient.BatchV1().CronJobs(namespace).Get(context.TODO(), name, meta_v1.GetOptions{})
	if err != nil {
		logrus.Errorf("Failed to get cronjob %v", err)
		return nil, err
	}

	return cronjob, nil
}

// GetCronJobItems returns the cronjobs in the given namespace
func GetCronJobItems(clients kube.Clients, namespace string) []runtime.Object {
	cronjobs, err := clients.KubernetesClient.BatchV1().CronJobs(namespace).List(context.TODO(), meta_v1.ListOptions{})
@@ -78,9 +132,9 @@ func GetCronJobItems(clients kube.Clients, namespace string) []runtime.Object {
	items := make([]runtime.Object, len(cronjobs.Items))
	// Ensure we always have pod annotations to add to
	for i, v := range cronjobs.Items {
		if v.Spec.JobTemplate.Spec.Template.ObjectMeta.Annotations == nil {
		if v.Spec.JobTemplate.Spec.Template.Annotations == nil {
			annotations := make(map[string]string)
			cronjobs.Items[i].Spec.JobTemplate.Spec.Template.ObjectMeta.Annotations = annotations
			cronjobs.Items[i].Spec.JobTemplate.Spec.Template.Annotations = annotations
		}
		items[i] = &cronjobs.Items[i]
	}
@@ -88,6 +142,48 @@ func GetCronJobItems(clients kube.Clients, namespace string) []runtime.Object {
	return items
}

// GetJobItem returns the job in the given namespace
func GetJobItem(clients kube.Clients, name string, namespace string) (runtime.Object, error) {
	job, err := clients.KubernetesClient.BatchV1().Jobs(namespace).Get(context.TODO(), name, meta_v1.GetOptions{})
	if err != nil {
		logrus.Errorf("Failed to get job %v", err)
		return nil, err
	}

	return job, nil
}

// GetJobItems returns the jobs in the given namespace
func GetJobItems(clients kube.Clients, namespace string) []runtime.Object {
	jobs, err := clients.KubernetesClient.BatchV1().Jobs(namespace).List(context.TODO(), meta_v1.ListOptions{})
	if err != nil {
		logrus.Errorf("Failed to list jobs %v", err)
	}

	items := make([]runtime.Object, len(jobs.Items))
	// Ensure we always have pod annotations to add to
	for i, v := range jobs.Items {
		if v.Spec.Template.Annotations == nil {
			annotations := make(map[string]string)
			jobs.Items[i].Spec.Template.Annotations = annotations
		}
		items[i] = &jobs.Items[i]
	}

	return items
}

// GetDaemonSetItem returns the daemonSet in the given namespace
func GetDaemonSetItem(clients kube.Clients, name string, namespace string) (runtime.Object, error) {
	daemonSet, err := clients.KubernetesClient.AppsV1().DaemonSets(namespace).Get(context.TODO(), name, meta_v1.GetOptions{})
	if err != nil {
		logrus.Errorf("Failed to get daemonSet %v", err)
		return nil, err
	}

	return daemonSet, nil
}

// GetDaemonSetItems returns the daemonSets in the given namespace
func GetDaemonSetItems(clients kube.Clients, namespace string) []runtime.Object {
	daemonSets, err := clients.KubernetesClient.AppsV1().DaemonSets(namespace).List(context.TODO(), meta_v1.ListOptions{})
@@ -98,8 +194,8 @@ func GetDaemonSetItems(clients kube.Clients, namespace string) []runtime.Object
	items := make([]runtime.Object, len(daemonSets.Items))
	// Ensure we always have pod annotations to add to
	for i, v := range daemonSets.Items {
		if v.Spec.Template.ObjectMeta.Annotations == nil {
			daemonSets.Items[i].Spec.Template.ObjectMeta.Annotations = make(map[string]string)
		if v.Spec.Template.Annotations == nil {
			daemonSets.Items[i].Spec.Template.Annotations = make(map[string]string)
		}
		items[i] = &daemonSets.Items[i]
	}
@@ -107,6 +203,17 @@ func GetDaemonSetItems(clients kube.Clients, namespace string) []runtime.Object
	return items
}

// GetStatefulSetItem returns the statefulSet in the given namespace
func GetStatefulSetItem(clients kube.Clients, name string, namespace string) (runtime.Object, error) {
	statefulSet, err := clients.KubernetesClient.AppsV1().StatefulSets(namespace).Get(context.TODO(), name, meta_v1.GetOptions{})
	if err != nil {
		logrus.Errorf("Failed to get statefulSet %v", err)
		return nil, err
	}

	return statefulSet, nil
}

// GetStatefulSetItems returns the statefulSets in the given namespace
func GetStatefulSetItems(clients kube.Clients, namespace string) []runtime.Object {
	statefulSets, err := clients.KubernetesClient.AppsV1().StatefulSets(namespace).List(context.TODO(), meta_v1.ListOptions{})
@@ -117,8 +224,8 @@ func GetStatefulSetItems(clients kube.Clients, namespace string) []runtime.Objec
	items := make([]runtime.Object, len(statefulSets.Items))
	// Ensure we always have pod annotations to add to
	for i, v := range statefulSets.Items {
		if v.Spec.Template.ObjectMeta.Annotations == nil {
			statefulSets.Items[i].Spec.Template.ObjectMeta.Annotations = make(map[string]string)
		if v.Spec.Template.Annotations == nil {
			statefulSets.Items[i].Spec.Template.Annotations = make(map[string]string)
		}
		items[i] = &statefulSets.Items[i]
	}
@@ -126,23 +233,15 @@ func GetStatefulSetItems(clients kube.Clients, namespace string) []runtime.Objec
	return items
}

// GetDeploymentConfigItems returns the deploymentConfigs in the given namespace
func GetDeploymentConfigItems(clients kube.Clients, namespace string) []runtime.Object {
	deploymentConfigs, err := clients.OpenshiftAppsClient.AppsV1().DeploymentConfigs(namespace).List(context.TODO(), meta_v1.ListOptions{})
// GetRolloutItem returns the rollout in the given namespace
func GetRolloutItem(clients kube.Clients, name string, namespace string) (runtime.Object, error) {
	rollout, err := clients.ArgoRolloutClient.ArgoprojV1alpha1().Rollouts(namespace).Get(context.TODO(), name, meta_v1.GetOptions{})
	if err != nil {
		logrus.Errorf("Failed to list deploymentConfigs %v", err)
		logrus.Errorf("Failed to get Rollout %v", err)
		return nil, err
	}

	items := make([]runtime.Object, len(deploymentConfigs.Items))
	// Ensure we always have pod annotations to add to
	for i, v := range deploymentConfigs.Items {
		if v.Spec.Template.ObjectMeta.Annotations == nil {
			deploymentConfigs.Items[i].Spec.Template.ObjectMeta.Annotations = make(map[string]string)
		}
		items[i] = &deploymentConfigs.Items[i]
	}

	return items
	return rollout, nil
}

// GetRolloutItems returns the rollouts in the given namespace
@@ -155,8 +254,8 @@ func GetRolloutItems(clients kube.Clients, namespace string) []runtime.Object {
	items := make([]runtime.Object, len(rollouts.Items))
	// Ensure we always have pod annotations to add to
	for i, v := range rollouts.Items {
		if v.Spec.Template.ObjectMeta.Annotations == nil {
			rollouts.Items[i].Spec.Template.ObjectMeta.Annotations = make(map[string]string)
		if v.Spec.Template.Annotations == nil {
			rollouts.Items[i].Spec.Template.Annotations = make(map[string]string)
		}
		items[i] = &rollouts.Items[i]
	}
@@ -166,62 +265,98 @@ func GetRolloutItems(clients kube.Clients, namespace string) []runtime.Object {

// GetDeploymentAnnotations returns the annotations of the given deployment
func GetDeploymentAnnotations(item runtime.Object) map[string]string {
	return item.(*appsv1.Deployment).ObjectMeta.Annotations
	if item.(*appsv1.Deployment).Annotations == nil {
		item.(*appsv1.Deployment).Annotations = make(map[string]string)
	}
	return item.(*appsv1.Deployment).Annotations
}

// GetCronJobAnnotations returns the annotations of the given cronjob
func GetCronJobAnnotations(item runtime.Object) map[string]string {
	return item.(*batchv1.CronJob).ObjectMeta.Annotations
	if item.(*batchv1.CronJob).Annotations == nil {
		item.(*batchv1.CronJob).Annotations = make(map[string]string)
	}
	return item.(*batchv1.CronJob).Annotations
}

// GetJobAnnotations returns the annotations of the given job
func GetJobAnnotations(item runtime.Object) map[string]string {
	if item.(*batchv1.Job).Annotations == nil {
		item.(*batchv1.Job).Annotations = make(map[string]string)
	}
	return item.(*batchv1.Job).Annotations
}

// GetDaemonSetAnnotations returns the annotations of the given daemonSet
func GetDaemonSetAnnotations(item runtime.Object) map[string]string {
	return item.(*appsv1.DaemonSet).ObjectMeta.Annotations
	if item.(*appsv1.DaemonSet).Annotations == nil {
		item.(*appsv1.DaemonSet).Annotations = make(map[string]string)
	}
	return item.(*appsv1.DaemonSet).Annotations
}

// GetStatefulSetAnnotations returns the annotations of the given statefulSet
func GetStatefulSetAnnotations(item runtime.Object) map[string]string {
	return item.(*appsv1.StatefulSet).ObjectMeta.Annotations
}

// GetDeploymentConfigAnnotations returns the annotations of the given deploymentConfig
func GetDeploymentConfigAnnotations(item runtime.Object) map[string]string {
	return item.(*openshiftv1.DeploymentConfig).ObjectMeta.Annotations
	if item.(*appsv1.StatefulSet).Annotations == nil {
		item.(*appsv1.StatefulSet).Annotations = make(map[string]string)
	}
	return item.(*appsv1.StatefulSet).Annotations
}

// GetRolloutAnnotations returns the annotations of the given rollout
func GetRolloutAnnotations(item runtime.Object) map[string]string {
	return item.(*argorolloutv1alpha1.Rollout).ObjectMeta.Annotations
	if item.(*argorolloutv1alpha1.Rollout).Annotations == nil {
		item.(*argorolloutv1alpha1.Rollout).Annotations = make(map[string]string)
	}
	return item.(*argorolloutv1alpha1.Rollout).Annotations
}

// GetDeploymentPodAnnotations returns the pod annotations of the given deployment
func GetDeploymentPodAnnotations(item runtime.Object) map[string]string {
	return item.(*appsv1.Deployment).Spec.Template.ObjectMeta.Annotations
	if item.(*appsv1.Deployment).Spec.Template.Annotations == nil {
		item.(*appsv1.Deployment).Spec.Template.Annotations = make(map[string]string)
	}
	return item.(*appsv1.Deployment).Spec.Template.Annotations
}

// GetCronJobPodAnnotations returns the pod annotations of the given cronjob
func GetCronJobPodAnnotations(item runtime.Object) map[string]string {
	return item.(*batchv1.CronJob).Spec.JobTemplate.Spec.Template.ObjectMeta.Annotations
	if item.(*batchv1.CronJob).Spec.JobTemplate.Spec.Template.Annotations == nil {
		item.(*batchv1.CronJob).Spec.JobTemplate.Spec.Template.Annotations = make(map[string]string)
	}
	return item.(*batchv1.CronJob).Spec.JobTemplate.Spec.Template.Annotations
}

// GetJobPodAnnotations returns the pod annotations of the given job
func GetJobPodAnnotations(item runtime.Object) map[string]string {
	if item.(*batchv1.Job).Spec.Template.Annotations == nil {
		item.(*batchv1.Job).Spec.Template.Annotations = make(map[string]string)
	}
	return item.(*batchv1.Job).Spec.Template.Annotations
}

// GetDaemonSetPodAnnotations returns the pod annotations of the given daemonSet
func GetDaemonSetPodAnnotations(item runtime.Object) map[string]string {
	return item.(*appsv1.DaemonSet).Spec.Template.ObjectMeta.Annotations
	if item.(*appsv1.DaemonSet).Spec.Template.Annotations == nil {
		item.(*appsv1.DaemonSet).Spec.Template.Annotations = make(map[string]string)
	}
	return item.(*appsv1.DaemonSet).Spec.Template.Annotations
}

// GetStatefulSetPodAnnotations returns the pod annotations of the given statefulSet
func GetStatefulSetPodAnnotations(item runtime.Object) map[string]string {
	return item.(*appsv1.StatefulSet).Spec.Template.ObjectMeta.Annotations
}

// GetDeploymentConfigPodAnnotations returns the pod annotations of the given deploymentConfig
func GetDeploymentConfigPodAnnotations(item runtime.Object) map[string]string {
	return item.(*openshiftv1.DeploymentConfig).Spec.Template.ObjectMeta.Annotations
	if item.(*appsv1.StatefulSet).Spec.Template.Annotations == nil {
		item.(*appsv1.StatefulSet).Spec.Template.Annotations = make(map[string]string)
	}
	return item.(*appsv1.StatefulSet).Spec.Template.Annotations
}

// GetRolloutPodAnnotations returns the pod annotations of the given rollout
func GetRolloutPodAnnotations(item runtime.Object) map[string]string {
	return item.(*argorolloutv1alpha1.Rollout).Spec.Template.ObjectMeta.Annotations
	if item.(*argorolloutv1alpha1.Rollout).Spec.Template.Annotations == nil {
		item.(*argorolloutv1alpha1.Rollout).Spec.Template.Annotations = make(map[string]string)
	}
	return item.(*argorolloutv1alpha1.Rollout).Spec.Template.Annotations
}

// GetDeploymentContainers returns the containers of the given deployment
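Every getter in the hunk above now follows the same shape: initialize the annotations map if it is nil, then return it, so callers can assign into the result without a nil-map panic (in Go, writing to a nil map panics while reading from one does not). The repeated guard could be captured generically; an illustrative helper, not part of the diff:

	// ensureAnnotations shows the lazy-init pattern applied by hand in each
	// getter above; it is an illustration only and does not exist in the source.
	func ensureAnnotations(m *map[string]string) map[string]string {
		if *m == nil {
			*m = make(map[string]string)
		}
		return *m
	}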
@@ -234,6 +369,11 @@ func GetCronJobContainers(item runtime.Object) []v1.Container {
	return item.(*batchv1.CronJob).Spec.JobTemplate.Spec.Template.Spec.Containers
}

// GetJobContainers returns the containers of the given job
func GetJobContainers(item runtime.Object) []v1.Container {
	return item.(*batchv1.Job).Spec.Template.Spec.Containers
}

// GetDaemonSetContainers returns the containers of the given daemonSet
func GetDaemonSetContainers(item runtime.Object) []v1.Container {
	return item.(*appsv1.DaemonSet).Spec.Template.Spec.Containers
@@ -244,11 +384,6 @@ func GetStatefulSetContainers(item runtime.Object) []v1.Container {
	return item.(*appsv1.StatefulSet).Spec.Template.Spec.Containers
}

// GetDeploymentConfigContainers returns the containers of the given deploymentConfig
func GetDeploymentConfigContainers(item runtime.Object) []v1.Container {
	return item.(*openshiftv1.DeploymentConfig).Spec.Template.Spec.Containers
}

// GetRolloutContainers returns the containers of the given rollout
func GetRolloutContainers(item runtime.Object) []v1.Container {
	return item.(*argorolloutv1alpha1.Rollout).Spec.Template.Spec.Containers
@@ -264,6 +399,11 @@ func GetCronJobInitContainers(item runtime.Object) []v1.Container {
	return item.(*batchv1.CronJob).Spec.JobTemplate.Spec.Template.Spec.InitContainers
}

// GetJobInitContainers returns the init containers of the given job
func GetJobInitContainers(item runtime.Object) []v1.Container {
	return item.(*batchv1.Job).Spec.Template.Spec.InitContainers
}

// GetDaemonSetInitContainers returns the init containers of the given daemonSet
func GetDaemonSetInitContainers(item runtime.Object) []v1.Container {
	return item.(*appsv1.DaemonSet).Spec.Template.Spec.InitContainers
@@ -274,16 +414,20 @@ func GetStatefulSetInitContainers(item runtime.Object) []v1.Container {
	return item.(*appsv1.StatefulSet).Spec.Template.Spec.InitContainers
}

// GetDeploymentConfigInitContainers returns the init containers of the given deploymentConfig
func GetDeploymentConfigInitContainers(item runtime.Object) []v1.Container {
	return item.(*openshiftv1.DeploymentConfig).Spec.Template.Spec.InitContainers
}

// GetRolloutInitContainers returns the init containers of the given rollout
func GetRolloutInitContainers(item runtime.Object) []v1.Container {
	return item.(*argorolloutv1alpha1.Rollout).Spec.Template.Spec.InitContainers
}
// GetPatchTemplates returns patch templates
func GetPatchTemplates() PatchTemplates {
	return PatchTemplates{
		AnnotationTemplate:   `{"spec":{"template":{"metadata":{"annotations":{"%s":"%s"}}}}}`,                             // strategic merge patch
		EnvVarTemplate:       `{"spec":{"template":{"spec":{"containers":[{"name":"%s","env":[{"name":"%s","value":"%s"}]}]}}}}`, // strategic merge patch
		DeleteEnvVarTemplate: `[{"op":"remove","path":"/spec/template/spec/containers/%d/env/%d"}]`,                        // JSON patch
	}
}
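A sketch of how these templates plug into the patch plumbing: the annotation template is filled in with fmt.Sprintf and handed to a PatchFunc such as PatchDeployment below. The annotation key and value here are invented for illustration, as are the clients and deployment variables:

	tpl := GetPatchTemplates()
	// Hypothetical key/value; any valid annotation pair works with the template.
	patch := fmt.Sprintf(tpl.AnnotationTemplate, "example.stakater.com/last-reload", "my-configmap")
	err := PatchDeployment(clients, "default", deployment, patchtypes.StrategicMergePatchType, []byte(patch))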
// UpdateDeployment performs a rolling upgrade on the deployment
func UpdateDeployment(clients kube.Clients, namespace string, resource runtime.Object) error {
	deployment := resource.(*appsv1.Deployment)
@@ -291,18 +435,75 @@ func UpdateDeployment(clients kube.Clients, namespace string, resource runtime.O
	return err
}

// PatchDeployment performs a rolling upgrade on the deployment via a patch
func PatchDeployment(clients kube.Clients, namespace string, resource runtime.Object, patchType patchtypes.PatchType, bytes []byte) error {
	deployment := resource.(*appsv1.Deployment)
	_, err := clients.KubernetesClient.AppsV1().Deployments(namespace).Patch(context.TODO(), deployment.Name, patchType, bytes, meta_v1.PatchOptions{FieldManager: "Reloader"})
	return err
}

// CreateJobFromCronjob creates a Job from the cronjob's job template, as a manual trigger would
func CreateJobFromCronjob(clients kube.Clients, namespace string, resource runtime.Object) error {
	cronJob := resource.(*batchv1.CronJob)

	annotations := make(map[string]string)
	annotations["cronjob.kubernetes.io/instantiate"] = "manual"
	maps.Copy(annotations, cronJob.Spec.JobTemplate.Annotations)

	job := &batchv1.Job{
		ObjectMeta: cronJob.Spec.JobTemplate.ObjectMeta,
		Spec:       cronJob.Spec.JobTemplate.Spec,
		ObjectMeta: meta_v1.ObjectMeta{
			GenerateName:    cronJob.Name + "-",
			Namespace:       cronJob.Namespace,
			Annotations:     annotations,
			Labels:          cronJob.Spec.JobTemplate.Labels,
			OwnerReferences: []meta_v1.OwnerReference{*meta_v1.NewControllerRef(cronJob, batchv1.SchemeGroupVersion.WithKind("CronJob"))},
		},
		Spec: cronJob.Spec.JobTemplate.Spec,
	}
	job.GenerateName = cronJob.Name + "-"
	_, err := clients.KubernetesClient.BatchV1().Jobs(namespace).Create(context.TODO(), job, meta_v1.CreateOptions{FieldManager: "Reloader"})
	return err
}
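The rewritten constructor above mirrors what a manual trigger (kubectl create job --from=cronjob/NAME) produces: an explicit ObjectMeta with GenerateName, the cronjob.kubernetes.io/instantiate: manual annotation, and a controller owner reference, instead of reusing the job template's ObjectMeta verbatim. A hypothetical invocation, with the resource names invented:

	cronJob, err := GetCronJobItem(clients, "nightly-cleanup", "default") // hypothetical names
	if err == nil {
		// Creates a Job named e.g. "nightly-cleanup-xxxxx", owned by the CronJob.
		err = CreateJobFromCronjob(clients, "default", cronJob)
	}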
func PatchCronJob(clients kube.Clients, namespace string, resource runtime.Object, patchType patchtypes.PatchType, bytes []byte) error {
|
||||
return errors.New("not supported patching: CronJob")
|
||||
}

// ReCreateJobFromjob deletes the given job and re-creates it with the same spec
func ReCreateJobFromjob(clients kube.Clients, namespace string, resource runtime.Object) error {
	oldJob := resource.(*batchv1.Job)
	job := oldJob.DeepCopy()

	// Delete the old job
	policy := meta_v1.DeletePropagationBackground
	err := clients.KubernetesClient.BatchV1().Jobs(namespace).Delete(context.TODO(), job.Name, meta_v1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		return err
	}

	// Remove fields that should not be specified when creating a new Job
	job.ResourceVersion = ""
	job.UID = ""
	job.CreationTimestamp = meta_v1.Time{}
	job.Status = batchv1.JobStatus{}

	// Remove problematic labels
	delete(job.Spec.Template.Labels, "controller-uid")
	delete(job.Spec.Template.Labels, batchv1.ControllerUidLabel)
	delete(job.Spec.Template.Labels, batchv1.JobNameLabel)
	delete(job.Spec.Template.Labels, "job-name")

	// Remove the selector to allow it to be auto-generated
	job.Spec.Selector = nil

	// Create the new job with the same spec
	_, err = clients.KubernetesClient.BatchV1().Jobs(namespace).Create(context.TODO(), job, meta_v1.CreateOptions{FieldManager: "Reloader"})
	return err
}

func PatchJob(clients kube.Clients, namespace string, resource runtime.Object, patchType patchtypes.PatchType, bytes []byte) error {
	return errors.New("not supported patching: Job")
}

// UpdateDaemonSet performs rolling upgrade on daemonSet
func UpdateDaemonSet(clients kube.Clients, namespace string, resource runtime.Object) error {
	daemonSet := resource.(*appsv1.DaemonSet)
@@ -310,6 +511,12 @@ func UpdateDaemonSet(clients kube.Clients, namespace string, resource runtime.Ob
	return err
}

func PatchDaemonSet(clients kube.Clients, namespace string, resource runtime.Object, patchType patchtypes.PatchType, bytes []byte) error {
	daemonSet := resource.(*appsv1.DaemonSet)
	_, err := clients.KubernetesClient.AppsV1().DaemonSets(namespace).Patch(context.TODO(), daemonSet.Name, patchType, bytes, meta_v1.PatchOptions{FieldManager: "Reloader"})
	return err
}

// UpdateStatefulSet performs rolling upgrade on statefulSet
func UpdateStatefulSet(clients kube.Clients, namespace string, resource runtime.Object) error {
	statefulSet := resource.(*appsv1.StatefulSet)
@@ -317,23 +524,30 @@ func UpdateStatefulSet(clients kube.Clients, namespace string, resource runtime.
	return err
}

// UpdateDeploymentConfig performs rolling upgrade on deploymentConfig
func UpdateDeploymentConfig(clients kube.Clients, namespace string, resource runtime.Object) error {
	deploymentConfig := resource.(*openshiftv1.DeploymentConfig)
	_, err := clients.OpenshiftAppsClient.AppsV1().DeploymentConfigs(namespace).Update(context.TODO(), deploymentConfig, meta_v1.UpdateOptions{FieldManager: "Reloader"})
func PatchStatefulSet(clients kube.Clients, namespace string, resource runtime.Object, patchType patchtypes.PatchType, bytes []byte) error {
	statefulSet := resource.(*appsv1.StatefulSet)
	_, err := clients.KubernetesClient.AppsV1().StatefulSets(namespace).Patch(context.TODO(), statefulSet.Name, patchType, bytes, meta_v1.PatchOptions{FieldManager: "Reloader"})
	return err
}

// UpdateRollout performs rolling upgrade on rollout
func UpdateRollout(clients kube.Clients, namespace string, resource runtime.Object) error {
	rollout := resource.(*argorolloutv1alpha1.Rollout)
	rolloutBefore, _ := clients.ArgoRolloutClient.ArgoprojV1alpha1().Rollouts(namespace).Get(context.TODO(), rollout.Name, meta_v1.GetOptions{})
	logrus.Warnf("Before: %+v", rolloutBefore.Spec.Template.Spec.Containers[0].Env)
	logrus.Warnf("After: %+v", rollout.Spec.Template.Spec.Containers[0].Env)
	_, err := clients.ArgoRolloutClient.ArgoprojV1alpha1().Rollouts(namespace).Update(context.TODO(), rollout, meta_v1.UpdateOptions{FieldManager: "Reloader"})
	strategy := rollout.GetAnnotations()[options.RolloutStrategyAnnotation]
	var err error
	switch options.ToArgoRolloutStrategy(strategy) {
	case options.RestartStrategy:
		_, err = clients.ArgoRolloutClient.ArgoprojV1alpha1().Rollouts(namespace).Patch(context.TODO(), rollout.Name, patchtypes.MergePatchType, []byte(fmt.Sprintf(`{"spec": {"restartAt": "%s"}}`, time.Now().Format(time.RFC3339))), meta_v1.PatchOptions{FieldManager: "Reloader"})
	case options.RolloutStrategy:
		_, err = clients.ArgoRolloutClient.ArgoprojV1alpha1().Rollouts(namespace).Update(context.TODO(), rollout, meta_v1.UpdateOptions{FieldManager: "Reloader"})
	}
	return err
}

func PatchRollout(clients kube.Clients, namespace string, resource runtime.Object, patchType patchtypes.PatchType, bytes []byte) error {
	return errors.New("not supported patching: Rollout")
}
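
Which branch of the strategy switch in UpdateRollout runs is controlled entirely by an annotation on the Rollout; a hedged sketch of opting into the restart path (the literal value "restart" mirrors the test fixtures later in this diff and is assumed here):

	rollout.SetAnnotations(map[string]string{
		// With any other value (or no annotation) the default case performs a plain Update.
		options.RolloutStrategyAnnotation: "restart",
	})
	if err := UpdateRollout(clients, namespace, rollout); err != nil {
		logrus.Errorf("reloading rollout failed: %v", err)
	}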

// GetDeploymentVolumes returns the Volumes of given deployment
func GetDeploymentVolumes(item runtime.Object) []v1.Volume {
	return item.(*appsv1.Deployment).Spec.Template.Spec.Volumes
@@ -344,6 +558,11 @@ func GetCronJobVolumes(item runtime.Object) []v1.Volume {
	return item.(*batchv1.CronJob).Spec.JobTemplate.Spec.Template.Spec.Volumes
}

// GetJobVolumes returns the Volumes of given job
func GetJobVolumes(item runtime.Object) []v1.Volume {
	return item.(*batchv1.Job).Spec.Template.Spec.Volumes
}

// GetDaemonSetVolumes returns the Volumes of given daemonSet
func GetDaemonSetVolumes(item runtime.Object) []v1.Volume {
	return item.(*appsv1.DaemonSet).Spec.Template.Spec.Volumes
@@ -354,11 +573,6 @@ func GetStatefulSetVolumes(item runtime.Object) []v1.Volume {
	return item.(*appsv1.StatefulSet).Spec.Template.Spec.Volumes
}

// GetDeploymentConfigVolumes returns the Volumes of given deploymentConfig
func GetDeploymentConfigVolumes(item runtime.Object) []v1.Volume {
	return item.(*openshiftv1.DeploymentConfig).Spec.Template.Spec.Volumes
}

// GetRolloutVolumes returns the Volumes of given rollout
func GetRolloutVolumes(item runtime.Object) []v1.Volume {
	return item.(*argorolloutv1alpha1.Rollout).Spec.Template.Spec.Volumes

internal/pkg/callbacks/rolling_upgrade_test.go (new file, 773 lines)
@@ -0,0 +1,773 @@

package callbacks_test

import (
	"context"
	"fmt"
	"strings"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	appsv1 "k8s.io/api/apps/v1"
	batchv1 "k8s.io/api/batch/v1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	watch "k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes/fake"

	argorolloutv1alpha1 "github.com/argoproj/argo-rollouts/pkg/apis/rollouts/v1alpha1"
	fakeargoclientset "github.com/argoproj/argo-rollouts/pkg/client/clientset/versioned/fake"
	patchtypes "k8s.io/apimachinery/pkg/types"

	"github.com/stakater/Reloader/internal/pkg/callbacks"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/internal/pkg/testutil"
	"github.com/stakater/Reloader/pkg/kube"
)

var (
	clients = setupTestClients()
)

type testFixtures struct {
	defaultContainers     []v1.Container
	defaultInitContainers []v1.Container
	defaultVolumes        []v1.Volume
	namespace             string
}

func newTestFixtures() testFixtures {
	return testFixtures{
		defaultContainers:     []v1.Container{{Name: "container1"}, {Name: "container2"}},
		defaultInitContainers: []v1.Container{{Name: "init-container1"}, {Name: "init-container2"}},
		defaultVolumes:        []v1.Volume{{Name: "volume1"}, {Name: "volume2"}},
		namespace:             "default",
	}
}

func setupTestClients() kube.Clients {
	return kube.Clients{
		KubernetesClient:  fake.NewClientset(),
		ArgoRolloutClient: fakeargoclientset.NewSimpleClientset(),
	}
}

// TestUpdateRollout tests the rollout update strategies selected via annotation
func TestUpdateRollout(t *testing.T) {
	namespace := "test-ns"

	cases := map[string]struct {
		name      string
		strategy  string
		isRestart bool
	}{
		"test-without-strategy": {
			name:      "defaults to rollout strategy",
			strategy:  "",
			isRestart: false,
		},
		"test-with-restart-strategy": {
			name:      "triggers a restart strategy",
			strategy:  "restart",
			isRestart: true,
		},
		"test-with-rollout-strategy": {
			name:      "triggers a rollout strategy",
			strategy:  "rollout",
			isRestart: false,
		},
	}
	for name, tc := range cases {
		t.Run(name, func(t *testing.T) {
			rollout, err := testutil.CreateRollout(
				clients.ArgoRolloutClient, name, namespace,
				map[string]string{options.RolloutStrategyAnnotation: tc.strategy},
			)
			if err != nil {
				t.Errorf("creating rollout: %v", err)
			}
			modifiedChan := watchRollout(rollout.Name, namespace)

			err = callbacks.UpdateRollout(clients, namespace, rollout)
			if err != nil {
				t.Errorf("updating rollout: %v", err)
			}
			rollout, err = clients.ArgoRolloutClient.ArgoprojV1alpha1().Rollouts(
				namespace).Get(context.TODO(), rollout.Name, metav1.GetOptions{})

			if err != nil {
				t.Errorf("getting rollout: %v", err)
			}
			if isRestartStrategy(rollout) != tc.isRestart {
				t.Errorf("expected restart strategy = %v", tc.isRestart)
			}
			select {
			case <-modifiedChan:
				// object has been modified
			case <-time.After(1 * time.Second):
				t.Errorf("Rollout has not been updated")
			}
		})
	}
}

func TestPatchRollout(t *testing.T) {
	namespace := "test-ns"
	rollout := testutil.GetRollout(namespace, "test", map[string]string{options.RolloutStrategyAnnotation: ""})
	err := callbacks.PatchRollout(clients, namespace, rollout, patchtypes.StrategicMergePatchType, []byte(`{"spec": {}}`))
	assert.EqualError(t, err, "not supported patching: Rollout")
}

func TestResourceItem(t *testing.T) {
	fixtures := newTestFixtures()

	tests := []struct {
		name        string
		createFunc  func(kube.Clients, string, string) (runtime.Object, error)
		getItemFunc func(kube.Clients, string, string) (runtime.Object, error)
		deleteFunc  func(kube.Clients, string, string) error
	}{
		{
			name:        "Deployment",
			createFunc:  createTestDeploymentWithAnnotations,
			getItemFunc: callbacks.GetDeploymentItem,
			deleteFunc:  deleteTestDeployment,
		},
		{
			name:        "CronJob",
			createFunc:  createTestCronJobWithAnnotations,
			getItemFunc: callbacks.GetCronJobItem,
			deleteFunc:  deleteTestCronJob,
		},
		{
			name:        "Job",
			createFunc:  createTestJobWithAnnotations,
			getItemFunc: callbacks.GetJobItem,
			deleteFunc:  deleteTestJob,
		},
		{
			name:        "DaemonSet",
			createFunc:  createTestDaemonSetWithAnnotations,
			getItemFunc: callbacks.GetDaemonSetItem,
			deleteFunc:  deleteTestDaemonSet,
		},
		{
			name:        "StatefulSet",
			createFunc:  createTestStatefulSetWithAnnotations,
			getItemFunc: callbacks.GetStatefulSetItem,
			deleteFunc:  deleteTestStatefulSet,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			resource, err := tt.createFunc(clients, fixtures.namespace, "1")
			assert.NoError(t, err)

			accessor, err := meta.Accessor(resource)
			assert.NoError(t, err)

			_, err = tt.getItemFunc(clients, accessor.GetName(), fixtures.namespace)
			assert.NoError(t, err)

			err = tt.deleteFunc(clients, fixtures.namespace, accessor.GetName())
			assert.NoError(t, err)
		})
	}
}

func TestResourceItems(t *testing.T) {
	fixtures := newTestFixtures()

	tests := []struct {
		name          string
		createFunc    func(kube.Clients, string) error
		getItemsFunc  func(kube.Clients, string) []runtime.Object
		deleteFunc    func(kube.Clients, string) error
		expectedCount int
	}{
		{
			name:          "Deployments",
			createFunc:    createTestDeployments,
			getItemsFunc:  callbacks.GetDeploymentItems,
			deleteFunc:    deleteTestDeployments,
			expectedCount: 2,
		},
		{
			name:          "CronJobs",
			createFunc:    createTestCronJobs,
			getItemsFunc:  callbacks.GetCronJobItems,
			deleteFunc:    deleteTestCronJobs,
			expectedCount: 2,
		},
		{
			name:          "Jobs",
			createFunc:    createTestJobs,
			getItemsFunc:  callbacks.GetJobItems,
			deleteFunc:    deleteTestJobs,
			expectedCount: 2,
		},
		{
			name:          "DaemonSets",
			createFunc:    createTestDaemonSets,
			getItemsFunc:  callbacks.GetDaemonSetItems,
			deleteFunc:    deleteTestDaemonSets,
			expectedCount: 2,
		},
		{
			name:          "StatefulSets",
			createFunc:    createTestStatefulSets,
			getItemsFunc:  callbacks.GetStatefulSetItems,
			deleteFunc:    deleteTestStatefulSets,
			expectedCount: 2,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := tt.createFunc(clients, fixtures.namespace)
			assert.NoError(t, err)

			items := tt.getItemsFunc(clients, fixtures.namespace)
			assert.Equal(t, tt.expectedCount, len(items))
		})
	}
}

func TestGetAnnotations(t *testing.T) {
	testAnnotations := map[string]string{"version": "1"}

	tests := []struct {
		name     string
		resource runtime.Object
		getFunc  func(runtime.Object) map[string]string
	}{
		{"Deployment", &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{Annotations: testAnnotations}}, callbacks.GetDeploymentAnnotations},
		{"CronJob", &batchv1.CronJob{ObjectMeta: metav1.ObjectMeta{Annotations: testAnnotations}}, callbacks.GetCronJobAnnotations},
		{"Job", &batchv1.Job{ObjectMeta: metav1.ObjectMeta{Annotations: testAnnotations}}, callbacks.GetJobAnnotations},
		{"DaemonSet", &appsv1.DaemonSet{ObjectMeta: metav1.ObjectMeta{Annotations: testAnnotations}}, callbacks.GetDaemonSetAnnotations},
		{"StatefulSet", &appsv1.StatefulSet{ObjectMeta: metav1.ObjectMeta{Annotations: testAnnotations}}, callbacks.GetStatefulSetAnnotations},
		{"Rollout", &argorolloutv1alpha1.Rollout{ObjectMeta: metav1.ObjectMeta{Annotations: testAnnotations}}, callbacks.GetRolloutAnnotations},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, testAnnotations, tt.getFunc(tt.resource))
		})
	}
}

func TestGetPodAnnotations(t *testing.T) {
	testAnnotations := map[string]string{"version": "1"}

	tests := []struct {
		name     string
		resource runtime.Object
		getFunc  func(runtime.Object) map[string]string
	}{
		{"Deployment", createResourceWithPodAnnotations(&appsv1.Deployment{}, testAnnotations), callbacks.GetDeploymentPodAnnotations},
		{"CronJob", createResourceWithPodAnnotations(&batchv1.CronJob{}, testAnnotations), callbacks.GetCronJobPodAnnotations},
		{"Job", createResourceWithPodAnnotations(&batchv1.Job{}, testAnnotations), callbacks.GetJobPodAnnotations},
		{"DaemonSet", createResourceWithPodAnnotations(&appsv1.DaemonSet{}, testAnnotations), callbacks.GetDaemonSetPodAnnotations},
		{"StatefulSet", createResourceWithPodAnnotations(&appsv1.StatefulSet{}, testAnnotations), callbacks.GetStatefulSetPodAnnotations},
		{"Rollout", createResourceWithPodAnnotations(&argorolloutv1alpha1.Rollout{}, testAnnotations), callbacks.GetRolloutPodAnnotations},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, testAnnotations, tt.getFunc(tt.resource))
		})
	}
}

func TestGetContainers(t *testing.T) {
	fixtures := newTestFixtures()

	tests := []struct {
		name     string
		resource runtime.Object
		getFunc  func(runtime.Object) []v1.Container
	}{
		{"Deployment", createResourceWithContainers(&appsv1.Deployment{}, fixtures.defaultContainers), callbacks.GetDeploymentContainers},
		{"DaemonSet", createResourceWithContainers(&appsv1.DaemonSet{}, fixtures.defaultContainers), callbacks.GetDaemonSetContainers},
		{"StatefulSet", createResourceWithContainers(&appsv1.StatefulSet{}, fixtures.defaultContainers), callbacks.GetStatefulSetContainers},
		{"CronJob", createResourceWithContainers(&batchv1.CronJob{}, fixtures.defaultContainers), callbacks.GetCronJobContainers},
		{"Job", createResourceWithContainers(&batchv1.Job{}, fixtures.defaultContainers), callbacks.GetJobContainers},
		{"Rollout", createResourceWithContainers(&argorolloutv1alpha1.Rollout{}, fixtures.defaultContainers), callbacks.GetRolloutContainers},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, fixtures.defaultContainers, tt.getFunc(tt.resource))
		})
	}
}

func TestGetInitContainers(t *testing.T) {
	fixtures := newTestFixtures()

	tests := []struct {
		name     string
		resource runtime.Object
		getFunc  func(runtime.Object) []v1.Container
	}{
		{"Deployment", createResourceWithInitContainers(&appsv1.Deployment{}, fixtures.defaultInitContainers), callbacks.GetDeploymentInitContainers},
		{"DaemonSet", createResourceWithInitContainers(&appsv1.DaemonSet{}, fixtures.defaultInitContainers), callbacks.GetDaemonSetInitContainers},
		{"StatefulSet", createResourceWithInitContainers(&appsv1.StatefulSet{}, fixtures.defaultInitContainers), callbacks.GetStatefulSetInitContainers},
		{"CronJob", createResourceWithInitContainers(&batchv1.CronJob{}, fixtures.defaultInitContainers), callbacks.GetCronJobInitContainers},
		{"Job", createResourceWithInitContainers(&batchv1.Job{}, fixtures.defaultInitContainers), callbacks.GetJobInitContainers},
		{"Rollout", createResourceWithInitContainers(&argorolloutv1alpha1.Rollout{}, fixtures.defaultInitContainers), callbacks.GetRolloutInitContainers},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, fixtures.defaultInitContainers, tt.getFunc(tt.resource))
		})
	}
}

func TestUpdateResources(t *testing.T) {
	fixtures := newTestFixtures()

	tests := []struct {
		name       string
		createFunc func(kube.Clients, string, string) (runtime.Object, error)
		updateFunc func(kube.Clients, string, runtime.Object) error
		deleteFunc func(kube.Clients, string, string) error
	}{
		{"Deployment", createTestDeploymentWithAnnotations, callbacks.UpdateDeployment, deleteTestDeployment},
		{"DaemonSet", createTestDaemonSetWithAnnotations, callbacks.UpdateDaemonSet, deleteTestDaemonSet},
		{"StatefulSet", createTestStatefulSetWithAnnotations, callbacks.UpdateStatefulSet, deleteTestStatefulSet},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			resource, err := tt.createFunc(clients, fixtures.namespace, "1")
			assert.NoError(t, err)

			err = tt.updateFunc(clients, fixtures.namespace, resource)
			assert.NoError(t, err)

			accessor, err := meta.Accessor(resource)
			assert.NoError(t, err)

			err = tt.deleteFunc(clients, fixtures.namespace, accessor.GetName())
			assert.NoError(t, err)
		})
	}
}

func TestPatchResources(t *testing.T) {
	fixtures := newTestFixtures()

	tests := []struct {
		name       string
		createFunc func(kube.Clients, string, string) (runtime.Object, error)
		patchFunc  func(kube.Clients, string, runtime.Object, patchtypes.PatchType, []byte) error
		deleteFunc func(kube.Clients, string, string) error
		assertFunc func(err error)
	}{
		{"Deployment", createTestDeploymentWithAnnotations, callbacks.PatchDeployment, deleteTestDeployment, func(err error) {
			assert.NoError(t, err)
			patchedResource, err := callbacks.GetDeploymentItem(clients, "test-deployment", fixtures.namespace)
			assert.NoError(t, err)
			assert.Equal(t, "test", patchedResource.(*appsv1.Deployment).Annotations["test"])
		}},
		{"DaemonSet", createTestDaemonSetWithAnnotations, callbacks.PatchDaemonSet, deleteTestDaemonSet, func(err error) {
			assert.NoError(t, err)
			patchedResource, err := callbacks.GetDaemonSetItem(clients, "test-daemonset", fixtures.namespace)
			assert.NoError(t, err)
			assert.Equal(t, "test", patchedResource.(*appsv1.DaemonSet).Annotations["test"])
		}},
		{"StatefulSet", createTestStatefulSetWithAnnotations, callbacks.PatchStatefulSet, deleteTestStatefulSet, func(err error) {
			assert.NoError(t, err)
			patchedResource, err := callbacks.GetStatefulSetItem(clients, "test-statefulset", fixtures.namespace)
			assert.NoError(t, err)
			assert.Equal(t, "test", patchedResource.(*appsv1.StatefulSet).Annotations["test"])
		}},
		{"CronJob", createTestCronJobWithAnnotations, callbacks.PatchCronJob, deleteTestCronJob, func(err error) {
			assert.EqualError(t, err, "not supported patching: CronJob")
		}},
		{"Job", createTestJobWithAnnotations, callbacks.PatchJob, deleteTestJob, func(err error) {
			assert.EqualError(t, err, "not supported patching: Job")
		}},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			resource, err := tt.createFunc(clients, fixtures.namespace, "1")
			assert.NoError(t, err)

			err = tt.patchFunc(clients, fixtures.namespace, resource, patchtypes.StrategicMergePatchType, []byte(`{"metadata":{"annotations":{"test":"test"}}}`))
			tt.assertFunc(err)

			accessor, err := meta.Accessor(resource)
			assert.NoError(t, err)

			err = tt.deleteFunc(clients, fixtures.namespace, accessor.GetName())
			assert.NoError(t, err)
		})
	}
}

func TestCreateJobFromCronjob(t *testing.T) {
	fixtures := newTestFixtures()

	runtimeObj, err := createTestCronJobWithAnnotations(clients, fixtures.namespace, "1")
	assert.NoError(t, err)

	cronJob := runtimeObj.(*batchv1.CronJob)
	err = callbacks.CreateJobFromCronjob(clients, fixtures.namespace, cronJob)
	assert.NoError(t, err)

	jobList, err := clients.KubernetesClient.BatchV1().Jobs(fixtures.namespace).List(context.TODO(), metav1.ListOptions{})
	assert.NoError(t, err)

	ownerFound := false
	for _, job := range jobList.Items {
		if isControllerOwner("CronJob", cronJob.Name, job.OwnerReferences) {
			ownerFound = true
			break
		}
	}
	assert.Truef(t, ownerFound, "Missing CronJob owner reference")

	err = deleteTestCronJob(clients, fixtures.namespace, cronJob.Name)
	assert.NoError(t, err)
}

func TestReCreateJobFromJob(t *testing.T) {
	fixtures := newTestFixtures()

	job, err := createTestJobWithAnnotations(clients, fixtures.namespace, "1")
	assert.NoError(t, err)

	err = callbacks.ReCreateJobFromjob(clients, fixtures.namespace, job.(*batchv1.Job))
	assert.NoError(t, err)

	err = deleteTestJob(clients, fixtures.namespace, "test-job")
	assert.NoError(t, err)
}

func TestGetVolumes(t *testing.T) {
	fixtures := newTestFixtures()

	tests := []struct {
		name     string
		resource runtime.Object
		getFunc  func(runtime.Object) []v1.Volume
	}{
		{"Deployment", createResourceWithVolumes(&appsv1.Deployment{}, fixtures.defaultVolumes), callbacks.GetDeploymentVolumes},
		{"CronJob", createResourceWithVolumes(&batchv1.CronJob{}, fixtures.defaultVolumes), callbacks.GetCronJobVolumes},
		{"Job", createResourceWithVolumes(&batchv1.Job{}, fixtures.defaultVolumes), callbacks.GetJobVolumes},
		{"DaemonSet", createResourceWithVolumes(&appsv1.DaemonSet{}, fixtures.defaultVolumes), callbacks.GetDaemonSetVolumes},
		{"StatefulSet", createResourceWithVolumes(&appsv1.StatefulSet{}, fixtures.defaultVolumes), callbacks.GetStatefulSetVolumes},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, fixtures.defaultVolumes, tt.getFunc(tt.resource))
		})
	}
}

func TestGetPatchTemplateAnnotation(t *testing.T) {
	templates := callbacks.GetPatchTemplates()
	assert.NotEmpty(t, templates.AnnotationTemplate)
	assert.Equal(t, 2, strings.Count(templates.AnnotationTemplate, "%s"))
}

func TestGetPatchTemplateEnvVar(t *testing.T) {
	templates := callbacks.GetPatchTemplates()
	assert.NotEmpty(t, templates.EnvVarTemplate)
	assert.Equal(t, 3, strings.Count(templates.EnvVarTemplate, "%s"))
}

func TestGetPatchDeleteTemplateEnvVar(t *testing.T) {
	templates := callbacks.GetPatchTemplates()
	assert.NotEmpty(t, templates.DeleteEnvVarTemplate)
	assert.Equal(t, 2, strings.Count(templates.DeleteEnvVarTemplate, "%d"))
}

// Helper functions

// isRestartStrategy reports whether the rollout has been restarted (spec.restartAt is set).
func isRestartStrategy(rollout *argorolloutv1alpha1.Rollout) bool {
	return rollout.Spec.RestartAt != nil
}

func watchRollout(name, namespace string) chan interface{} {
	timeOut := int64(1)
	modifiedChan := make(chan interface{})
	watcher, _ := clients.ArgoRolloutClient.ArgoprojV1alpha1().Rollouts(namespace).Watch(context.Background(), metav1.ListOptions{TimeoutSeconds: &timeOut})
	go watchModified(watcher, name, modifiedChan)
	return modifiedChan
}

func watchModified(watcher watch.Interface, name string, modifiedChan chan interface{}) {
	for event := range watcher.ResultChan() {
		item := event.Object.(*argorolloutv1alpha1.Rollout)
		if item.Name == name {
			switch event.Type {
			case watch.Modified:
				modifiedChan <- nil
			}
			return
		}
	}
}

func createTestDeployments(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		_, err := testutil.CreateDeployment(clients.KubernetesClient, fmt.Sprintf("test-deployment-%d", i), namespace, false)
		if err != nil {
			return err
		}
	}
	return nil
}

func deleteTestDeployments(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		err := testutil.DeleteDeployment(clients.KubernetesClient, namespace, fmt.Sprintf("test-deployment-%d", i))
		if err != nil {
			return err
		}
	}
	return nil
}

func createTestCronJobs(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		_, err := testutil.CreateCronJob(clients.KubernetesClient, fmt.Sprintf("test-cron-%d", i), namespace, false)
		if err != nil {
			return err
		}
	}
	return nil
}

func deleteTestCronJobs(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		err := testutil.DeleteCronJob(clients.KubernetesClient, namespace, fmt.Sprintf("test-cron-%d", i))
		if err != nil {
			return err
		}
	}
	return nil
}

func createTestJobs(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		_, err := testutil.CreateJob(clients.KubernetesClient, fmt.Sprintf("test-job-%d", i), namespace, false)
		if err != nil {
			return err
		}
	}
	return nil
}

func deleteTestJobs(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		err := testutil.DeleteJob(clients.KubernetesClient, namespace, fmt.Sprintf("test-job-%d", i))
		if err != nil {
			return err
		}
	}
	return nil
}

func createTestDaemonSets(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		_, err := testutil.CreateDaemonSet(clients.KubernetesClient, fmt.Sprintf("test-daemonset-%d", i), namespace, false)
		if err != nil {
			return err
		}
	}
	return nil
}

func deleteTestDaemonSets(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		err := testutil.DeleteDaemonSet(clients.KubernetesClient, namespace, fmt.Sprintf("test-daemonset-%d", i))
		if err != nil {
			return err
		}
	}
	return nil
}

func createTestStatefulSets(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		_, err := testutil.CreateStatefulSet(clients.KubernetesClient, fmt.Sprintf("test-statefulset-%d", i), namespace, false)
		if err != nil {
			return err
		}
	}
	return nil
}

func deleteTestStatefulSets(clients kube.Clients, namespace string) error {
	for i := 1; i <= 2; i++ {
		err := testutil.DeleteStatefulSet(clients.KubernetesClient, namespace, fmt.Sprintf("test-statefulset-%d", i))
		if err != nil {
			return err
		}
	}
	return nil
}

func createResourceWithPodAnnotations(obj runtime.Object, annotations map[string]string) runtime.Object {
	switch v := obj.(type) {
	case *appsv1.Deployment:
		v.Spec.Template.Annotations = annotations
	case *appsv1.DaemonSet:
		v.Spec.Template.Annotations = annotations
	case *appsv1.StatefulSet:
		v.Spec.Template.Annotations = annotations
	case *batchv1.CronJob:
		v.Spec.JobTemplate.Spec.Template.Annotations = annotations
	case *batchv1.Job:
		v.Spec.Template.Annotations = annotations
	case *argorolloutv1alpha1.Rollout:
		v.Spec.Template.Annotations = annotations
	}
	return obj
}

func createResourceWithContainers(obj runtime.Object, containers []v1.Container) runtime.Object {
	switch v := obj.(type) {
	case *appsv1.Deployment:
		v.Spec.Template.Spec.Containers = containers
	case *appsv1.DaemonSet:
		v.Spec.Template.Spec.Containers = containers
	case *appsv1.StatefulSet:
		v.Spec.Template.Spec.Containers = containers
	case *batchv1.CronJob:
		v.Spec.JobTemplate.Spec.Template.Spec.Containers = containers
	case *batchv1.Job:
		v.Spec.Template.Spec.Containers = containers
	case *argorolloutv1alpha1.Rollout:
		v.Spec.Template.Spec.Containers = containers
	}
	return obj
}

func createResourceWithInitContainers(obj runtime.Object, initContainers []v1.Container) runtime.Object {
	switch v := obj.(type) {
	case *appsv1.Deployment:
		v.Spec.Template.Spec.InitContainers = initContainers
	case *appsv1.DaemonSet:
		v.Spec.Template.Spec.InitContainers = initContainers
	case *appsv1.StatefulSet:
		v.Spec.Template.Spec.InitContainers = initContainers
	case *batchv1.CronJob:
		v.Spec.JobTemplate.Spec.Template.Spec.InitContainers = initContainers
	case *batchv1.Job:
		v.Spec.Template.Spec.InitContainers = initContainers
	case *argorolloutv1alpha1.Rollout:
		v.Spec.Template.Spec.InitContainers = initContainers
	}
	return obj
}

func createResourceWithVolumes(obj runtime.Object, volumes []v1.Volume) runtime.Object {
	switch v := obj.(type) {
	case *appsv1.Deployment:
		v.Spec.Template.Spec.Volumes = volumes
	case *batchv1.CronJob:
		v.Spec.JobTemplate.Spec.Template.Spec.Volumes = volumes
	case *batchv1.Job:
		v.Spec.Template.Spec.Volumes = volumes
	case *appsv1.DaemonSet:
		v.Spec.Template.Spec.Volumes = volumes
	case *appsv1.StatefulSet:
		v.Spec.Template.Spec.Volumes = volumes
	}
	return obj
}

func createTestDeploymentWithAnnotations(clients kube.Clients, namespace, version string) (runtime.Object, error) {
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "test-deployment",
			Namespace:   namespace,
			Annotations: map[string]string{"version": version},
		},
	}
	return clients.KubernetesClient.AppsV1().Deployments(namespace).Create(context.TODO(), deployment, metav1.CreateOptions{})
}

func deleteTestDeployment(clients kube.Clients, namespace, name string) error {
	return clients.KubernetesClient.AppsV1().Deployments(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
}

func createTestDaemonSetWithAnnotations(clients kube.Clients, namespace, version string) (runtime.Object, error) {
	daemonSet := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "test-daemonset",
			Namespace:   namespace,
			Annotations: map[string]string{"version": version},
		},
	}
	return clients.KubernetesClient.AppsV1().DaemonSets(namespace).Create(context.TODO(), daemonSet, metav1.CreateOptions{})
}

func deleteTestDaemonSet(clients kube.Clients, namespace, name string) error {
	return clients.KubernetesClient.AppsV1().DaemonSets(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
}

func createTestStatefulSetWithAnnotations(clients kube.Clients, namespace, version string) (runtime.Object, error) {
	statefulSet := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "test-statefulset",
			Namespace:   namespace,
			Annotations: map[string]string{"version": version},
		},
	}
	return clients.KubernetesClient.AppsV1().StatefulSets(namespace).Create(context.TODO(), statefulSet, metav1.CreateOptions{})
}

func deleteTestStatefulSet(clients kube.Clients, namespace, name string) error {
	return clients.KubernetesClient.AppsV1().StatefulSets(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
}

func createTestCronJobWithAnnotations(clients kube.Clients, namespace, version string) (runtime.Object, error) {
	cronJob := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "test-cronjob",
			Namespace:   namespace,
			Annotations: map[string]string{"version": version},
		},
	}
	return clients.KubernetesClient.BatchV1().CronJobs(namespace).Create(context.TODO(), cronJob, metav1.CreateOptions{})
}

func deleteTestCronJob(clients kube.Clients, namespace, name string) error {
	return clients.KubernetesClient.BatchV1().CronJobs(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
}

func createTestJobWithAnnotations(clients kube.Clients, namespace, version string) (runtime.Object, error) {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "test-job",
			Namespace:   namespace,
			Annotations: map[string]string{"version": version},
		},
	}
	return clients.KubernetesClient.BatchV1().Jobs(namespace).Create(context.TODO(), job, metav1.CreateOptions{})
}

func deleteTestJob(clients kube.Clients, namespace, name string) error {
	return clients.KubernetesClient.BatchV1().Jobs(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
}

func isControllerOwner(kind, name string, ownerRefs []metav1.OwnerReference) bool {
	for _, ownerRef := range ownerRefs {
		if *ownerRef.Controller && ownerRef.Kind == kind && ownerRef.Name == name {
			return true
		}
	}
	return false
}

@@ -5,6 +5,7 @@ import (
	"errors"
	"fmt"
	"net/http"
	_ "net/http/pprof"
	"os"
	"strings"

@@ -13,13 +14,14 @@ import (

	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/stakater/Reloader/internal/pkg/controller"
	"github.com/stakater/Reloader/internal/pkg/metrics"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/internal/pkg/util"
	"github.com/stakater/Reloader/pkg/common"
	"github.com/stakater/Reloader/pkg/kube"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// NewReloaderCommand starts the reloader controller
@@ -32,23 +34,7 @@ func NewReloaderCommand() *cobra.Command {
	}

	// options
	cmd.PersistentFlags().BoolVar(&options.AutoReloadAll, "auto-reload-all", false, "Auto reload all resources")
	cmd.PersistentFlags().StringVar(&options.ConfigmapUpdateOnChangeAnnotation, "configmap-annotation", "configmap.reloader.stakater.com/reload", "annotation to detect changes in configmaps, specified by name")
	cmd.PersistentFlags().StringVar(&options.SecretUpdateOnChangeAnnotation, "secret-annotation", "secret.reloader.stakater.com/reload", "annotation to detect changes in secrets, specified by name")
	cmd.PersistentFlags().StringVar(&options.ReloaderAutoAnnotation, "auto-annotation", "reloader.stakater.com/auto", "annotation to detect changes in secrets")
	cmd.PersistentFlags().StringVar(&options.AutoSearchAnnotation, "auto-search-annotation", "reloader.stakater.com/search", "annotation to detect changes in configmaps or secrets tagged with special match annotation")
	cmd.PersistentFlags().StringVar(&options.SearchMatchAnnotation, "search-match-annotation", "reloader.stakater.com/match", "annotation to mark secrets or configmaps to match the search")
	cmd.PersistentFlags().StringVar(&options.LogFormat, "log-format", "", "Log format to use (empty string for text, or JSON)")
	cmd.PersistentFlags().StringVar(&options.WebhookUrl, "webhook-url", "", "webhook to trigger instead of performing a reload")
	cmd.PersistentFlags().StringSlice("resources-to-ignore", []string{}, "list of resources to ignore (valid options 'configMaps' or 'secrets')")
	cmd.PersistentFlags().StringSlice("namespaces-to-ignore", []string{}, "list of namespaces to ignore")
	cmd.PersistentFlags().StringSlice("namespace-selector", []string{}, "list of key:value labels to filter on for namespaces")
	cmd.PersistentFlags().StringSlice("resource-label-selector", []string{}, "list of key:value labels to filter on for configmaps and secrets")
	cmd.PersistentFlags().StringVar(&options.IsArgoRollouts, "is-Argo-Rollouts", "false", "Add support for argo rollouts")
	cmd.PersistentFlags().StringVar(&options.ReloadStrategy, constants.ReloadStrategyFlag, constants.EnvVarsReloadStrategy, "Specifies the desired reload strategy")
	cmd.PersistentFlags().StringVar(&options.ReloadOnCreate, "reload-on-create", "false", "Add support to watch create events")
	cmd.PersistentFlags().BoolVar(&options.EnableHA, "enable-ha", false, "Adds support for running multiple replicas via leadership election")
	cmd.PersistentFlags().BoolVar(&options.SyncAfterRestart, "sync-after-restart", false, "Sync add events after reloader restarts")
	util.ConfigureReloaderFlags(cmd)

	return cmd
}
@@ -78,7 +64,7 @@ func validateFlags(*cobra.Command, []string) error {
	return nil
}

func configureLogging(logFormat string) error {
func configureLogging(logFormat, logLevel string) error {
	switch logFormat {
	case "json":
		logrus.SetFormatter(&logrus.JSONFormatter{})
@@ -88,6 +74,12 @@ func configureLogging(logFormat string) error {
		return fmt.Errorf("unsupported logging formatter: %q", logFormat)
	}
	}
	// set log level
	level, err := logrus.ParseLevel(logLevel)
	if err != nil {
		return err
	}
	logrus.SetLevel(level)
	return nil
}
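
A quick usage sketch of the new two-argument form (the argument values are illustrative; logrus.ParseLevel accepts the usual logrus level names):

	// Hypothetical call site: JSON log output at debug verbosity.
	if err := configureLogging("json", "debug"); err != nil {
		logrus.Warn(err)
	}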

@@ -111,15 +103,18 @@ func getHAEnvs() (string, string) {
}

func startReloader(cmd *cobra.Command, args []string) {
	err := configureLogging(options.LogFormat)
	common.GetCommandLineOptions()
	err := configureLogging(options.LogFormat, options.LogLevel)
	if err != nil {
		logrus.Warn(err)
	}

	logrus.Info("Starting Reloader")
	isGlobal := false
	currentNamespace := os.Getenv("KUBERNETES_NAMESPACE")
	if len(currentNamespace) == 0 {
		currentNamespace = v1.NamespaceAll
		isGlobal = true
		logrus.Warnf("KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces.")
	}

@@ -129,22 +124,22 @@ func startReloader(cmd *cobra.Command, args []string) {
		logrus.Fatal(err)
	}

	ignoredResourcesList, err := getIgnoredResourcesList(cmd)
	ignoredResourcesList, err := util.GetIgnoredResourcesList()
	if err != nil {
		logrus.Fatal(err)
	}

	ignoredNamespacesList, err := getIgnoredNamespacesList(cmd)
	if err != nil {
		logrus.Fatal(err)
	ignoredNamespacesList := options.NamespacesToIgnore
	namespaceLabelSelector := ""

	if isGlobal {
		namespaceLabelSelector, err = common.GetNamespaceLabelSelector(options.NamespaceSelectors)
		if err != nil {
			logrus.Fatal(err)
		}
	}

	namespaceLabelSelector, err := getNamespaceLabelSelector(cmd)
	if err != nil {
		logrus.Fatal(err)
	}

	resourceLabelSelector, err := getResourceLabelSelector(cmd)
	resourceLabelSelector, err := common.GetResourceLabelSelector(options.ResourceSelectors)
	if err != nil {
		logrus.Fatal(err)
	}
@@ -165,6 +160,10 @@ func startReloader(cmd *cobra.Command, args []string) {

	var controllers []*controller.Controller
	for k := range kube.ResourceMap {
		if k == constants.SecretProviderClassController && !shouldRunCSIController() {
			continue
		}

		if ignoredResourcesList.Contains(k) || (len(namespaceLabelSelector) == 0 && k == "namespaces") {
			continue
		}
@@ -196,107 +195,31 @@ func startReloader(cmd *cobra.Command, args []string) {
		go leadership.RunLeaderElection(lock, ctx, cancel, podName, controllers)
	}

	common.PublishMetaInfoConfigmap(clientset)

	if options.EnablePProf {
		go startPProfServer()
	}

	leadership.SetupLivenessEndpoint()
	logrus.Fatal(http.ListenAndServe(constants.DefaultHttpListenAddr, nil))
}

func getIgnoredNamespacesList(cmd *cobra.Command) (util.List, error) {
	return getStringSliceFromFlags(cmd, "namespaces-to-ignore")
func startPProfServer() {
	logrus.Infof("Starting pprof server on %s", options.PProfAddr)
	if err := http.ListenAndServe(options.PProfAddr, nil); err != nil {
		logrus.Errorf("Failed to start pprof server: %v", err)
	}
}

func getNamespaceLabelSelector(cmd *cobra.Command) (string, error) {
	slice, err := getStringSliceFromFlags(cmd, "namespace-selector")
	if err != nil {
		logrus.Fatal(err)
func shouldRunCSIController() bool {
	if !options.EnableCSIIntegration {
		logrus.Info("Skipping secretproviderclasspodstatuses controller: EnableCSIIntegration is disabled")
		return false
	}

	for i, kv := range slice {
		// Legacy support for ":" as a delimiter and "*" for wildcard.
		if strings.Contains(kv, ":") {
			split := strings.Split(kv, ":")
			if split[1] == "*" {
				slice[i] = split[0]
			} else {
				slice[i] = split[0] + "=" + split[1]
			}
		}
		// Convert wildcard to valid apimachinery operator
		if strings.Contains(kv, "=") {
			split := strings.Split(kv, "=")
			if split[1] == "*" {
				slice[i] = split[0]
			}
		}
	if !kube.IsCSIInstalled {
		logrus.Info("Skipping secretproviderclasspodstatuses controller: CSI CRDs not installed")
		return false
	}

	namespaceLabelSelector := strings.Join(slice[:], ",")
	_, err = labels.Parse(namespaceLabelSelector)
	if err != nil {
		logrus.Fatal(err)
	}

	return namespaceLabelSelector, nil
}

func getResourceLabelSelector(cmd *cobra.Command) (string, error) {
	slice, err := getStringSliceFromFlags(cmd, "resource-label-selector")
	if err != nil {
		logrus.Fatal(err)
	}

	for i, kv := range slice {
		// Legacy support for ":" as a delimiter and "*" for wildcard.
		if strings.Contains(kv, ":") {
			split := strings.Split(kv, ":")
			if split[1] == "*" {
				slice[i] = split[0]
			} else {
				slice[i] = split[0] + "=" + split[1]
			}
		}
		// Convert wildcard to valid apimachinery operator
		if strings.Contains(kv, "=") {
			split := strings.Split(kv, "=")
			if split[1] == "*" {
				slice[i] = split[0]
			}
		}
	}

	resourceLabelSelector := strings.Join(slice[:], ",")
	_, err = labels.Parse(resourceLabelSelector)
	if err != nil {
		logrus.Fatal(err)
	}

	return resourceLabelSelector, nil
}
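
The legacy-delimiter loops above normalize ":"-separated pairs and "*" wildcards into selector syntax that labels.Parse accepts; an illustrative trace with made-up inputs:

	in := []string{"env:*", "team:dev", "tier=*"}
	// After the conversion loop: []string{"env", "team=dev", "tier"}
	// ("*" collapses to a bare key, i.e. an "exists" match; ":" becomes "=".)
	// Joined result handed to labels.Parse: "env,team=dev,tier"
	_ = in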

func getStringSliceFromFlags(cmd *cobra.Command, flag string) ([]string, error) {
	slice, err := cmd.Flags().GetStringSlice(flag)
	if err != nil {
		return nil, err
	}

	return slice, nil
}

func getIgnoredResourcesList(cmd *cobra.Command) (util.List, error) {

	ignoredResourcesList, err := getStringSliceFromFlags(cmd, "resources-to-ignore")
	if err != nil {
		return nil, err
	}

	for _, v := range ignoredResourcesList {
		if v != "configMaps" && v != "secrets" {
			return nil, fmt.Errorf("'resources-to-ignore' only accepts 'configMaps' or 'secrets', not '%s'", v)
		}
	}

	if len(ignoredResourcesList) > 1 {
		return nil, errors.New("'resources-to-ignore' only accepts 'configMaps' or 'secrets', not both")
	}

	return ignoredResourcesList, nil
	return true
}

@@ -8,6 +8,8 @@ const (
	ConfigmapEnvVarPostfix = "CONFIGMAP"
	// SecretEnvVarPostfix is a postfix for secret envVar
	SecretEnvVarPostfix = "SECRET"
	// SecretProviderClassEnvVarPostfix is a postfix for secretproviderclasspodstatus envVar
	SecretProviderClassEnvVarPostfix = "SECRETPROVIDERCLASS"
	// EnvVarPrefix is a Prefix for environment variable
	EnvVarPrefix = "STAKATER_"

@@ -22,6 +24,8 @@ const (
	EnvVarsReloadStrategy = "env-vars"
	// AnnotationsReloadStrategy instructs Reloader to add pod template annotations to facilitate a restart
	AnnotationsReloadStrategy = "annotations"
	// SecretProviderClassController enables support for SecretProviderClassPodStatus resources
	SecretProviderClassController = "secretproviderclasspodstatuses"
)

// Leadership election related consts
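
For orientation, the prefix/postfix constants above are composed into the env-vars reload strategy's variable names elsewhere in the codebase; a hedged sketch of the assumed scheme (the resource name is made up):

	// Illustrative only; the real composition lives in the handler/util packages.
	name := constants.EnvVarPrefix + strings.ToUpper("mysecret") + "_" + constants.SecretProviderClassEnvVarPostfix
	// -> "STAKATER_MYSECRET_SECRETPROVIDERCLASS"
	_ = name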

@@ -2,9 +2,11 @@ package controller

import (
	"fmt"
	"slices"
	"time"

	"github.com/sirupsen/logrus"
	"github.com/stakater/Reloader/internal/pkg/constants"
	"github.com/stakater/Reloader/internal/pkg/handler"
	"github.com/stakater/Reloader/internal/pkg/metrics"
	"github.com/stakater/Reloader/internal/pkg/options"
@@ -21,14 +23,14 @@ import (
	"k8s.io/client-go/tools/record"
	"k8s.io/client-go/util/workqueue"
	"k8s.io/kubectl/pkg/scheme"
	"k8s.io/utils/strings/slices"
	csiv1 "sigs.k8s.io/secrets-store-csi-driver/apis/v1"
)

// Controller for checking events
type Controller struct {
	client    kubernetes.Interface
	indexer   cache.Indexer
	queue     workqueue.RateLimitingInterface
	queue     workqueue.TypedRateLimitingInterface[any]
	informer  cache.Controller
	namespace string
	resource  string
@@ -67,7 +69,7 @@ func NewController(
	})
	recorder := eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: fmt.Sprintf("reloader-%s", resource)})

	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	queue := workqueue.NewTypedRateLimitingQueue(workqueue.DefaultTypedControllerRateLimiter[any]())

	optionsModifier := func(options *metav1.ListOptions) {
		if resource == "namespaces" {
@@ -79,14 +81,24 @@ func NewController(
		}
	}

	listWatcher := cache.NewFilteredListWatchFromClient(client.CoreV1().RESTClient(), resource, namespace, optionsModifier)
	getterRESTClient, err := getClientForResource(resource, client)
	if err != nil {
		return nil, fmt.Errorf("failed to initialize REST client for %s: %w", resource, err)
	}

	indexer, informer := cache.NewIndexerInformer(listWatcher, kube.ResourceMap[resource], 0, cache.ResourceEventHandlerFuncs{
		AddFunc:    c.Add,
		UpdateFunc: c.Update,
		DeleteFunc: c.Delete,
	}, cache.Indexers{})
	c.indexer = indexer
	listWatcher := cache.NewFilteredListWatchFromClient(getterRESTClient, resource, namespace, optionsModifier)

	_, informer := cache.NewInformerWithOptions(cache.InformerOptions{
		ListerWatcher: listWatcher,
		ObjectType:    kube.ResourceMap[resource],
		ResyncPeriod:  0,
		Handler: cache.ResourceEventHandlerFuncs{
			AddFunc:    c.Add,
			UpdateFunc: c.Update,
			DeleteFunc: c.Delete,
		},
		Indexers: cache.Indexers{},
	})
	c.informer = informer
	c.queue = queue
	c.collectors = collectors
@@ -98,30 +110,38 @@

// Add function to add a new object to the queue in case of creating a resource
func (c *Controller) Add(obj interface{}) {
	c.collectors.RecordEventReceived("add", c.resource)

	switch object := obj.(type) {
	case *v1.Namespace:
		c.addSelectedNamespaceToCache(*object)
		return
	case *csiv1.SecretProviderClassPodStatus:
		return
	}

	if options.ReloadOnCreate == "true" {
		if !c.resourceInIgnoredNamespace(obj) && c.resourceInSelectedNamespaces(obj) && secretControllerInitialized && configmapControllerInitialized {
			c.queue.Add(handler.ResourceCreatedHandler{
				Resource:   obj,
				Collectors: c.collectors,
				Recorder:   c.recorder,
			c.enqueue(handler.ResourceCreatedHandler{
				Resource:    obj,
				Collectors:  c.collectors,
				Recorder:    c.recorder,
				EnqueueTime: time.Now(),
			})
		} else {
			c.collectors.RecordSkipped("ignored_or_not_selected")
		}
	}
}

func (c *Controller) resourceInIgnoredNamespace(raw interface{}) bool {
	switch object := raw.(type) {
	switch obj := raw.(type) {
	case *v1.ConfigMap:
		return c.ignoredNamespaces.Contains(object.ObjectMeta.Namespace)
		return c.ignoredNamespaces.Contains(obj.Namespace)
	case *v1.Secret:
		return c.ignoredNamespaces.Contains(object.ObjectMeta.Namespace)
		return c.ignoredNamespaces.Contains(obj.Namespace)
	case *csiv1.SecretProviderClassPodStatus:
		return c.ignoredNamespaces.Contains(obj.Namespace)
	}
	return false
}
@@ -140,6 +160,10 @@ func (c *Controller) resourceInSelectedNamespaces(raw interface{}) bool {
		if slices.Contains(selectedNamespacesCache, object.GetNamespace()) {
			return true
		}
	case *csiv1.SecretProviderClassPodStatus:
		if slices.Contains(selectedNamespacesCache, object.GetNamespace()) {
			return true
		}
	}
	return false
}
@@ -161,30 +185,59 @@ func (c *Controller) removeSelectedNamespaceFromCache(namespace v1.Namespace) {

// Update function to add an old object and a new object to the queue in case of updating a resource
func (c *Controller) Update(old interface{}, new interface{}) {
	c.collectors.RecordEventReceived("update", c.resource)

	switch new.(type) {
	case *v1.Namespace:
		return
	}

	if !c.resourceInIgnoredNamespace(new) && c.resourceInSelectedNamespaces(new) {
		c.queue.Add(handler.ResourceUpdatedHandler{
		c.enqueue(handler.ResourceUpdatedHandler{
			Resource:    new,
			OldResource: old,
			Collectors:  c.collectors,
			Recorder:    c.recorder,
			EnqueueTime: time.Now(),
		})
	} else {
		c.collectors.RecordSkipped("ignored_or_not_selected")
	}
}

// Delete function to add an object to the queue in case of deleting a resource
func (c *Controller) Delete(old interface{}) {
	c.collectors.RecordEventReceived("delete", c.resource)

	if _, ok := old.(*csiv1.SecretProviderClassPodStatus); ok {
		return
	}

	if options.ReloadOnDelete == "true" {
		if !c.resourceInIgnoredNamespace(old) && c.resourceInSelectedNamespaces(old) && secretControllerInitialized && configmapControllerInitialized {
			c.enqueue(handler.ResourceDeleteHandler{
				Resource:    old,
				Collectors:  c.collectors,
				Recorder:    c.recorder,
				EnqueueTime: time.Now(),
			})
		} else {
			c.collectors.RecordSkipped("ignored_or_not_selected")
		}
	}

	switch object := old.(type) {
	case *v1.Namespace:
		c.removeSelectedNamespaceFromCache(*object)
		return
	}
}

// Todo: Any future delete event can be handled here
// enqueue adds an item to the queue and records metrics
func (c *Controller) enqueue(item interface{}) {
	c.queue.Add(item)
	c.collectors.RecordQueueAdd()
	c.collectors.SetQueueDepth(c.queue.Len())
}
// Run function for controller which handles the queue
|
||||
@@ -198,7 +251,7 @@ func (c *Controller) Run(threadiness int, stopCh chan struct{}) {
|
||||
|
||||
// Wait for all involved caches to be synced, before processing items from the queue is started
|
||||
if !cache.WaitForCacheSync(stopCh, c.informer.HasSynced) {
|
||||
runtime.HandleError(fmt.Errorf("Timed out waiting for caches to sync"))
|
||||
runtime.HandleError(fmt.Errorf("timed out waiting for caches to sync"))
|
||||
return
|
||||
}
|
||||
|
||||
@@ -212,9 +265,9 @@ func (c *Controller) Run(threadiness int, stopCh chan struct{}) {
|
||||
|
||||
func (c *Controller) runWorker() {
|
||||
// At this point the controller is fully initialized and we can start processing the resources
|
||||
if c.resource == "secrets" {
|
||||
if c.resource == string(v1.ResourceSecrets) {
|
||||
secretControllerInitialized = true
|
||||
} else if c.resource == "configMaps" {
|
||||
} else if c.resource == string(v1.ResourceConfigMaps) {
|
||||
configmapControllerInitialized = true
|
||||
}

@@ -228,13 +281,34 @@ func (c *Controller) processNextItem() bool {
	if quit {
		return false
	}

+	c.collectors.SetQueueDepth(c.queue.Len())
+
	// Tell the queue that we are done with processing this key. This unblocks the key for other workers
	// This allows safe parallel processing because two events with the same key are never processed in
	// parallel.
	defer c.queue.Done(resourceHandler)

+	// Record queue latency if the handler supports it
+	if h, ok := resourceHandler.(handler.TimedHandler); ok {
+		queueLatency := time.Since(h.GetEnqueueTime())
+		c.collectors.RecordQueueLatency(queueLatency)
+	}
+
+	// Track reconcile/handler duration
+	startTime := time.Now()
+
	// Invoke the method containing the business logic
	err := resourceHandler.(handler.ResourceHandler).Handle()

+	duration := time.Since(startTime)
+
+	if err != nil {
+		c.collectors.RecordReconcile("error", duration)
+	} else {
+		c.collectors.RecordReconcile("success", duration)
+	}
+
	// Handle the error if something went wrong during the execution of the business logic
	c.handleErr(err, resourceHandler)
	return true
@@ -247,16 +321,26 @@ func (c *Controller) handleErr(err error, key interface{}) {
		// This ensures that future processing of updates for this key is not delayed because of
		// an outdated error history.
		c.queue.Forget(key)

+		// Record successful event processing
+		c.collectors.RecordEventProcessed("unknown", c.resource, "success")
		return
	}

+	// Record error
+	c.collectors.RecordError("handler_error")
+
	// This controller retries 5 times if something goes wrong. After that, it stops trying.
	if c.queue.NumRequeues(key) < 5 {
		logrus.Errorf("Error syncing events: %v", err)

+		// Record retry
+		c.collectors.RecordRetry()
+
		// Re-enqueue the key rate limited. Based on the rate limiter on the
		// queue and the re-enqueue history, the key will be processed later again.
		c.queue.AddRateLimited(key)
+		c.collectors.SetQueueDepth(c.queue.Len())
		return
	}

@@ -265,4 +349,17 @@ func (c *Controller) handleErr(err error, key interface{}) {
	runtime.HandleError(err)
-	logrus.Errorf("Dropping key out of the queue: %v", err)
+	logrus.Debugf("Dropping the key %q out of the queue: %v", key, err)

+	c.collectors.RecordEventProcessed("unknown", c.resource, "dropped")
}
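The retry and backoff behavior above comes from a rate-limiting workqueue. The queue construction itself is outside this hunk, so the following is only a minimal sketch of the client-go primitives the code relies on (names and the sample key are illustrative, not Reloader's actual wiring):

	// Exponential per-item backoff: failed keys wait longer after each retry,
	// which is what AddRateLimited depends on in handleErr above.
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	queue.Add("default/my-configmap")

	item, _ := queue.Get()
	queue.AddRateLimited(item)           // schedule a retry with backoff
	fmt.Println(queue.NumRequeues(item)) // retry count consulted by handleErr
	queue.Forget(item)                   // reset the backoff history on success
	queue.Done(item)                     // unblock the key for other workers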

+func getClientForResource(resource string, coreClient kubernetes.Interface) (cache.Getter, error) {
+	if resource == constants.SecretProviderClassController {
+		csiClient, err := kube.GetCSIClient()
+		if err != nil {
+			return nil, fmt.Errorf("failed to get CSI client: %w", err)
+		}
+		return csiClient.SecretsstoreV1().RESTClient(), nil
+	}
+	return coreClient.CoreV1().RESTClient(), nil
+}
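The cache.Getter returned here is what an informer's ListWatch is typically built from. A sketch of that use (the coreClient variable and the surrounding informer setup are assumed, not shown in this diff; fields comes from k8s.io/apimachinery/pkg/fields):

	getter, err := getClientForResource("secrets", coreClient)
	if err != nil {
		logrus.Fatalf("Failed to build client for resource: %v", err)
	}
	// Watch secrets in one namespace; fields.Everything() applies no field filter.
	listWatcher := cache.NewListWatchFromClient(getter, "secrets", "default", fields.Everything())
	_ = listWatcher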

File diff suppressed because it is too large

@@ -13,3 +13,16 @@ func TestGenerateSHA(t *testing.T) {
		t.Errorf("Failed to generate SHA")
	}
}

+// TestGenerateSHAEmptyString verifies that an empty string generates a valid hash
+// This ensures consistent behavior and avoids issues with string matching operations
+func TestGenerateSHAEmptyString(t *testing.T) {
+	result := GenerateSHA("")
+	expected := "da39a3ee5e6b4b0d3255bfef95601890afd80709"
+	if result != expected {
+		t.Errorf("Failed to generate SHA for empty string. Expected: %s, Got: %s", expected, result)
+	}
+	if len(result) != 40 {
+		t.Errorf("SHA hash should be 40 characters long, got %d", len(result))
+	}
+}
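The expected constant is the well-known SHA-1 digest of the empty string, so this test pins GenerateSHA to SHA-1 semantics. The same value can be derived directly from the standard library:

	sum := sha1.Sum([]byte("")) // crypto/sha1
	fmt.Println(hex.EncodeToString(sum[:]))
	// da39a3ee5e6b4b0d3255bfef95601890afd80709 — 40 hex characters, as asserted above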

@@ -1,45 +1,68 @@
package handler

import (
+	"time"
+
	"github.com/sirupsen/logrus"
	"github.com/stakater/Reloader/internal/pkg/metrics"
	"github.com/stakater/Reloader/internal/pkg/options"
-	"github.com/stakater/Reloader/internal/pkg/util"
+	"github.com/stakater/Reloader/pkg/common"
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// ResourceCreatedHandler contains new objects
type ResourceCreatedHandler struct {
-	Resource   interface{}
-	Collectors metrics.Collectors
-	Recorder   record.EventRecorder
+	Resource    interface{}
+	Collectors  metrics.Collectors
+	Recorder    record.EventRecorder
+	EnqueueTime time.Time // Time when this handler was added to the queue
}

+// GetEnqueueTime returns when this handler was enqueued
+func (r ResourceCreatedHandler) GetEnqueueTime() time.Time {
+	return r.EnqueueTime
+}

// Handle processes the newly created resource
func (r ResourceCreatedHandler) Handle() error {
+	startTime := time.Now()
+	result := "error"
+
+	defer func() {
+		r.Collectors.RecordReconcile(result, time.Since(startTime))
+	}()
+
	if r.Resource == nil {
		logrus.Errorf("Resource creation handler received nil resource")
-	} else {
-		config, _ := r.GetConfig()
-		// Send webhook
-		if options.WebhookUrl != "" {
-			return sendUpgradeWebhook(config, options.WebhookUrl)
-		}
-		// process resource based on its type
-		return doRollingUpgrade(config, r.Collectors, r.Recorder)
+		return nil
	}
-	return nil

+	config, _ := r.GetConfig()
+	// Send webhook
+	if options.WebhookUrl != "" {
+		err := sendUpgradeWebhook(config, options.WebhookUrl)
+		if err == nil {
+			result = "success"
+		}
+		return err
+	}
+	// process resource based on its type
+	err := doRollingUpgrade(config, r.Collectors, r.Recorder, invokeReloadStrategy)
+	if err == nil {
+		result = "success"
+	}
+	return err
}
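The reload behavior is now passed in as a function value: create.go and update.go pass invokeReloadStrategy, while delete.go (below) passes invokeDeleteStrategy. The type declaration itself is not in this excerpt, but from the call sites and the invokeDeleteStrategy signature shown later, it presumably looks like this sketch:

	// invokeStrategy abstracts what "apply the reload" means for one workload item.
	type invokeStrategy func(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) InvokeStrategyResult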

// GetConfig gets configurations containing SHA, annotations, namespace and resource name
-func (r ResourceCreatedHandler) GetConfig() (util.Config, string) {
+func (r ResourceCreatedHandler) GetConfig() (common.Config, string) {
	var oldSHAData string
-	var config util.Config
+	var config common.Config
	if _, ok := r.Resource.(*v1.ConfigMap); ok {
-		config = util.GetConfigmapConfig(r.Resource.(*v1.ConfigMap))
+		config = common.GetConfigmapConfig(r.Resource.(*v1.ConfigMap))
	} else if _, ok := r.Resource.(*v1.Secret); ok {
-		config = util.GetSecretConfig(r.Resource.(*v1.Secret))
+		config = common.GetSecretConfig(r.Resource.(*v1.Secret))
	} else {
		logrus.Warnf("Invalid resource: Resource should be 'Secret' or 'Configmap' but found, %v", r.Resource)
	}

122  internal/pkg/handler/delete.go  Normal file
@@ -0,0 +1,122 @@
package handler

import (
	"fmt"
	"slices"
	"time"

	"github.com/sirupsen/logrus"
	"github.com/stakater/Reloader/internal/pkg/callbacks"
	"github.com/stakater/Reloader/internal/pkg/constants"
	"github.com/stakater/Reloader/internal/pkg/metrics"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/internal/pkg/testutil"
	"github.com/stakater/Reloader/pkg/common"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	patchtypes "k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/record"
)

// ResourceDeleteHandler contains the deleted object
type ResourceDeleteHandler struct {
	Resource    interface{}
	Collectors  metrics.Collectors
	Recorder    record.EventRecorder
	EnqueueTime time.Time // Time when this handler was added to the queue
}

// GetEnqueueTime returns when this handler was enqueued
func (r ResourceDeleteHandler) GetEnqueueTime() time.Time {
	return r.EnqueueTime
}

// Handle processes resources being deleted
func (r ResourceDeleteHandler) Handle() error {
	startTime := time.Now()
	result := "error"

	defer func() {
		r.Collectors.RecordReconcile(result, time.Since(startTime))
	}()

	if r.Resource == nil {
		logrus.Errorf("Resource delete handler received nil resource")
		return nil
	}

	config, _ := r.GetConfig()
	// Send webhook
	if options.WebhookUrl != "" {
		err := sendUpgradeWebhook(config, options.WebhookUrl)
		if err == nil {
			result = "success"
		}
		return err
	}
	// process resource based on its type
	err := doRollingUpgrade(config, r.Collectors, r.Recorder, invokeDeleteStrategy)
	if err == nil {
		result = "success"
	}
	return err
}

// GetConfig gets configurations containing SHA, annotations, namespace and resource name
func (r ResourceDeleteHandler) GetConfig() (common.Config, string) {
	var oldSHAData string
	var config common.Config
	if _, ok := r.Resource.(*v1.ConfigMap); ok {
		config = common.GetConfigmapConfig(r.Resource.(*v1.ConfigMap))
	} else if _, ok := r.Resource.(*v1.Secret); ok {
		config = common.GetSecretConfig(r.Resource.(*v1.Secret))
	} else {
		logrus.Warnf("Invalid resource: Resource should be 'Secret' or 'Configmap' but found, %v", r.Resource)
	}
	return config, oldSHAData
}

func invokeDeleteStrategy(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) InvokeStrategyResult {
	if options.ReloadStrategy == constants.AnnotationsReloadStrategy {
		return removePodAnnotations(upgradeFuncs, item, config, autoReload)
	}

	return removeContainerEnvVars(upgradeFuncs, item, config, autoReload)
}

func removePodAnnotations(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) InvokeStrategyResult {
	config.SHAValue = testutil.GetSHAfromEmptyData()
	return updatePodAnnotations(upgradeFuncs, item, config, autoReload)
}

func removeContainerEnvVars(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) InvokeStrategyResult {
	envVar := getEnvVarName(config.ResourceName, config.Type)
	container := getContainerUsingResource(upgradeFuncs, item, config, autoReload)

	if container == nil {
		return InvokeStrategyResult{constants.NoContainerFound, nil}
	}

	// remove the env var if it exists
	if len(container.Env) > 0 {
		index := slices.IndexFunc(container.Env, func(envVariable v1.EnvVar) bool {
			return envVariable.Name == envVar
		})
		if index != -1 {
			var patch []byte
			if upgradeFuncs.SupportsPatch {
				containers := upgradeFuncs.ContainersFunc(item)
				containerIndex := slices.IndexFunc(containers, func(c v1.Container) bool {
					return c.Name == container.Name
				})
				patch = fmt.Appendf(nil, upgradeFuncs.PatchTemplatesFunc().DeleteEnvVarTemplate, containerIndex, index)
			}

			container.Env = append(container.Env[:index], container.Env[index+1:]...)
			return InvokeStrategyResult{constants.Updated, &Patch{Type: patchtypes.JSONPatchType, Bytes: patch}}
		}
	}

	return InvokeStrategyResult{constants.NotUpdated, nil}
}
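For the env-vars strategy with patch support, the bytes rendered from DeleteEnvVarTemplate are submitted as a JSON Patch (patchtypes.JSONPatchType above). The template text itself is not part of this diff, but given the two indices passed in, the emitted patch plausibly looks like this sketch:

	// Hypothetical rendering for containerIndex=0, index=2; only the indices
	// come from the code above, the path shape is an assumption.
	patch := []byte(`[{"op": "remove", "path": "/spec/template/spec/containers/0/env/2"}]`)
	_ = patch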

@@ -1,11 +1,18 @@
package handler

import (
-	"github.com/stakater/Reloader/internal/pkg/util"
+	"time"
+
+	"github.com/stakater/Reloader/pkg/common"
)

// ResourceHandler handles the creation and update of resources
type ResourceHandler interface {
	Handle() error
-	GetConfig() (util.Config, string)
+	GetConfig() (common.Config, string)
}

+// TimedHandler is a handler that tracks when it was enqueued
+type TimedHandler interface {
+	GetEnqueueTime() time.Time
+}
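Any handler that carries its enqueue timestamp satisfies TimedHandler and thereby opts into the queue-latency metric. A minimal sketch of the consuming side, mirroring processNextItem earlier in this diff:

	func recordLatency(collectors metrics.Collectors, item interface{}) {
		// Only handlers that expose an EnqueueTime satisfy TimedHandler.
		if h, ok := item.(TimedHandler); ok {
			collectors.RecordQueueLatency(time.Since(h.GetEnqueueTime()))
		}
	}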

242  internal/pkg/handler/pause_deployment.go  Normal file
@@ -0,0 +1,242 @@
package handler

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	"github.com/sirupsen/logrus"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/pkg/kube"
	app "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	patchtypes "k8s.io/apimachinery/pkg/types"
)

// Keeps track of currently active timers
var activeTimers = make(map[string]*time.Timer)

// Returns a unique key for the activeTimers map
func getTimerKey(namespace, deploymentName string) string {
	return fmt.Sprintf("%s/%s", namespace, deploymentName)
}

// IsPaused checks whether a deployment is currently paused
func IsPaused(deployment *app.Deployment) bool {
	return deployment.Spec.Paused
}

// IsPausedByReloader checks whether the deployment was paused by Reloader
func IsPausedByReloader(deployment *app.Deployment) bool {
	if IsPaused(deployment) {
		pausedAtAnnotationValue := deployment.Annotations[options.PauseDeploymentTimeAnnotation]
		return pausedAtAnnotationValue != ""
	}
	return false
}

// GetPauseStartTime returns the time the deployment was paused by Reloader, or nil otherwise
func GetPauseStartTime(deployment *app.Deployment) (*time.Time, error) {
	if !IsPausedByReloader(deployment) {
		return nil, nil
	}

	pausedAtStr := deployment.Annotations[options.PauseDeploymentTimeAnnotation]
	parsedTime, err := time.Parse(time.RFC3339, pausedAtStr)
	if err != nil {
		return nil, err
	}

	return &parsedTime, nil
}

// ParsePauseDuration parses the pause interval value and returns a time.Duration
func ParsePauseDuration(pauseIntervalValue string) (time.Duration, error) {
	pauseDuration, err := time.ParseDuration(pauseIntervalValue)
	if err != nil {
		logrus.Warnf("Failed to parse pause interval value '%s': %v", pauseIntervalValue, err)
		return 0, err
	}
	return pauseDuration, nil
}
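ParsePauseDuration delegates to time.ParseDuration, so any Go duration literal is accepted:

	d, _ := ParsePauseDuration("1h30m") // "10s", "2m", "300ms" work the same way
	fmt.Println(d)                      // 1h30m0s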

// PauseDeployment pauses a deployment for the specified duration and creates a
// timer to resume it afterwards
func PauseDeployment(deployment *app.Deployment, clients kube.Clients, namespace, pauseIntervalValue string) (*app.Deployment, error) {
	deploymentName := deployment.Name
	pauseDuration, err := ParsePauseDuration(pauseIntervalValue)

	if err != nil {
		return nil, err
	}

	if !IsPaused(deployment) {
		logrus.Infof("Pausing Deployment '%s' in namespace '%s' for %s", deploymentName, namespace, pauseDuration)

		deploymentFuncs := GetDeploymentRollingUpgradeFuncs()

		pausePatch, err := CreatePausePatch()
		if err != nil {
			logrus.Errorf("Failed to create pause patch for deployment '%s': %v", deploymentName, err)
			return deployment, err
		}

		err = deploymentFuncs.PatchFunc(clients, namespace, deployment, patchtypes.StrategicMergePatchType, pausePatch)

		if err != nil {
			logrus.Errorf("Failed to patch deployment '%s' in namespace '%s': %v", deploymentName, namespace, err)
			return deployment, err
		}

		updatedDeployment, err := clients.KubernetesClient.AppsV1().Deployments(namespace).Get(context.TODO(), deploymentName, metav1.GetOptions{})

		CreateResumeTimer(deployment, clients, namespace, pauseDuration)
		return updatedDeployment, err
	}

	if !IsPausedByReloader(deployment) {
		logrus.Infof("Deployment '%s' in namespace '%s' already paused", deploymentName, namespace)
		return deployment, nil
	}

	// Deployment has already been paused by reloader, check for timer
	logrus.Debugf("Deployment '%s' in namespace '%s' is already paused by reloader", deploymentName, namespace)

	timerKey := getTimerKey(namespace, deploymentName)
	_, timerExists := activeTimers[timerKey]

	if !timerExists {
		logrus.Warnf("Timer does not exist for already paused deployment '%s' in namespace '%s', creating new one",
			deploymentName, namespace)
		HandleMissingTimer(deployment, pauseDuration, clients, namespace)
	}
	return deployment, nil
}

// HandleMissingTimer handles missing timers for deployments that have been paused by
// Reloader. This can occur after a new leader election or a Reloader restart
func HandleMissingTimer(deployment *app.Deployment, pauseDuration time.Duration, clients kube.Clients, namespace string) {
	deploymentName := deployment.Name
	pauseStartTime, err := GetPauseStartTime(deployment)
	if err != nil {
		logrus.Errorf("Error parsing pause start time for deployment '%s' in namespace '%s': %v. Resuming deployment immediately",
			deploymentName, namespace, err)
		ResumeDeployment(deployment, namespace, clients)
		return
	}

	if pauseStartTime == nil {
		return
	}

	elapsedPauseTime := time.Since(*pauseStartTime)
	remainingPauseTime := pauseDuration - elapsedPauseTime

	if remainingPauseTime <= 0 {
		logrus.Infof("Pause period for deployment '%s' in namespace '%s' has expired. Resuming immediately",
			deploymentName, namespace)
		ResumeDeployment(deployment, namespace, clients)
		return
	}

	logrus.Infof("Creating missing timer for already paused deployment '%s' in namespace '%s' with remaining time %s",
		deploymentName, namespace, remainingPauseTime)
	CreateResumeTimer(deployment, clients, namespace, remainingPauseTime)
}
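A worked example of the arithmetic above: with a 5m pause interval and a Reloader restart 3 minutes into the pause, the recreated timer fires after the remaining 2 minutes:

	pausedAt := time.Now().Add(-3 * time.Minute)      // annotation value parsed by GetPauseStartTime
	pauseDuration := 5 * time.Minute                  // from the pause interval annotation
	remaining := pauseDuration - time.Since(pausedAt) // ~2m, passed to CreateResumeTimer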

// CreateResumeTimer creates a timer to resume the deployment after the specified duration
func CreateResumeTimer(deployment *app.Deployment, clients kube.Clients, namespace string, pauseDuration time.Duration) {
	deploymentName := deployment.Name
	timerKey := getTimerKey(namespace, deployment.Name)

	// Check if there's an existing timer for this deployment
	if _, exists := activeTimers[timerKey]; exists {
		logrus.Debugf("Timer already exists for deployment '%s' in namespace '%s', Skipping creation",
			deploymentName, namespace)
		return
	}

	// Create and store the new timer
	timer := time.AfterFunc(pauseDuration, func() {
		ResumeDeployment(deployment, namespace, clients)
	})

	// Add the new timer to the map
	activeTimers[timerKey] = timer

	logrus.Debugf("Created pause timer for deployment '%s' in namespace '%s' with duration %s",
		deploymentName, namespace, pauseDuration)
}

// ResumeDeployment resumes a deployment that has been paused by reloader
func ResumeDeployment(deployment *app.Deployment, namespace string, clients kube.Clients) {
	deploymentName := deployment.Name

	currentDeployment, err := clients.KubernetesClient.AppsV1().Deployments(namespace).Get(context.TODO(), deploymentName, metav1.GetOptions{})

	if err != nil {
		logrus.Errorf("Failed to get deployment '%s' in namespace '%s': %v", deploymentName, namespace, err)
		return
	}

	if !IsPausedByReloader(currentDeployment) {
		logrus.Infof("Deployment '%s' in namespace '%s' not paused by Reloader. Skipping resume", deploymentName, namespace)
		return
	}

	deploymentFuncs := GetDeploymentRollingUpgradeFuncs()

	resumePatch, err := CreateResumePatch()
	if err != nil {
		logrus.Errorf("Failed to create resume patch for deployment '%s': %v", deploymentName, err)
		return
	}

	// Remove the timer
	timerKey := getTimerKey(namespace, deploymentName)
	if timer, exists := activeTimers[timerKey]; exists {
		timer.Stop()
		delete(activeTimers, timerKey)
		logrus.Debugf("Removed pause timer for deployment '%s' in namespace '%s'", deploymentName, namespace)
	}

	err = deploymentFuncs.PatchFunc(clients, namespace, currentDeployment, patchtypes.StrategicMergePatchType, resumePatch)

	if err != nil {
		logrus.Errorf("Failed to resume deployment '%s' in namespace '%s': %v", deploymentName, namespace, err)
		return
	}

	logrus.Infof("Successfully resumed deployment '%s' in namespace '%s'", deploymentName, namespace)
}

func CreatePausePatch() ([]byte, error) {
	patchData := map[string]interface{}{
		"spec": map[string]interface{}{
			"paused": true,
		},
		"metadata": map[string]interface{}{
			"annotations": map[string]string{
				options.PauseDeploymentTimeAnnotation: time.Now().Format(time.RFC3339),
			},
		},
	}

	return json.Marshal(patchData)
}

func CreateResumePatch() ([]byte, error) {
	patchData := map[string]interface{}{
		"spec": map[string]interface{}{
			"paused": false,
		},
		"metadata": map[string]interface{}{
			"annotations": map[string]interface{}{
				options.PauseDeploymentTimeAnnotation: nil,
			},
		},
	}

	return json.Marshal(patchData)
}
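Marshalled, the two strategic-merge patches look as follows; the annotation key is whatever options.PauseDeploymentTimeAnnotation resolves to (shown here as a placeholder, since its literal value is not part of this diff):

	// CreatePausePatch output (timestamp illustrative):
	//   {"metadata":{"annotations":{"<pause-time-annotation>":"2025-01-01T00:00:00Z"}},"spec":{"paused":true}}
	// CreateResumePatch output; the explicit null makes the strategic merge patch
	// delete the annotation rather than set it:
	//   {"metadata":{"annotations":{"<pause-time-annotation>":null}},"spec":{"paused":false}}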

391  internal/pkg/handler/pause_deployment_test.go  Normal file
@@ -0,0 +1,391 @@
package handler

import (
	"context"
	"fmt"
	"testing"
	"time"

	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/pkg/kube"
	"github.com/stretchr/testify/assert"
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	testclient "k8s.io/client-go/kubernetes/fake"
)

func TestIsPaused(t *testing.T) {
	tests := []struct {
		name       string
		deployment *appsv1.Deployment
		paused     bool
	}{
		{
			name: "paused deployment",
			deployment: &appsv1.Deployment{
				Spec: appsv1.DeploymentSpec{
					Paused: true,
				},
			},
			paused: true,
		},
		{
			name: "unpaused deployment",
			deployment: &appsv1.Deployment{
				Spec: appsv1.DeploymentSpec{
					Paused: false,
				},
			},
			paused: false,
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			result := IsPaused(test.deployment)
			assert.Equal(t, test.paused, result)
		})
	}
}

func TestIsPausedByReloader(t *testing.T) {
	tests := []struct {
		name             string
		deployment       *appsv1.Deployment
		pausedByReloader bool
	}{
		{
			name: "paused by reloader",
			deployment: &appsv1.Deployment{
				Spec: appsv1.DeploymentSpec{
					Paused: true,
				},
				ObjectMeta: metav1.ObjectMeta{
					Annotations: map[string]string{
						options.PauseDeploymentTimeAnnotation: time.Now().Format(time.RFC3339),
					},
				},
			},
			pausedByReloader: true,
		},
		{
			name: "not paused by reloader",
			deployment: &appsv1.Deployment{
				Spec: appsv1.DeploymentSpec{
					Paused: true,
				},
				ObjectMeta: metav1.ObjectMeta{
					Annotations: map[string]string{},
				},
			},
			pausedByReloader: false,
		},
		{
			name: "not paused",
			deployment: &appsv1.Deployment{
				Spec: appsv1.DeploymentSpec{
					Paused: false,
				},
			},
			pausedByReloader: false,
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			pausedByReloader := IsPausedByReloader(test.deployment)
			assert.Equal(t, test.pausedByReloader, pausedByReloader)
		})
	}
}

func TestGetPauseStartTime(t *testing.T) {
	now := time.Now()
	nowStr := now.Format(time.RFC3339)

	tests := []struct {
		name              string
		deployment        *appsv1.Deployment
		pausedByReloader  bool
		expectedStartTime time.Time
	}{
		{
			name: "valid pause time",
			deployment: &appsv1.Deployment{
				Spec: appsv1.DeploymentSpec{
					Paused: true,
				},
				ObjectMeta: metav1.ObjectMeta{
					Annotations: map[string]string{
						options.PauseDeploymentTimeAnnotation: nowStr,
					},
				},
			},
			pausedByReloader:  true,
			expectedStartTime: now,
		},
		{
			name: "not paused by reloader",
			deployment: &appsv1.Deployment{
				Spec: appsv1.DeploymentSpec{
					Paused: false,
				},
			},
			pausedByReloader: false,
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			actualStartTime, err := GetPauseStartTime(test.deployment)

			assert.NoError(t, err)

			if !test.pausedByReloader {
				assert.Nil(t, actualStartTime)
			} else {
				assert.NotNil(t, actualStartTime)
				assert.WithinDuration(t, test.expectedStartTime, *actualStartTime, time.Second)
			}
		})
	}
}

func TestParsePauseDuration(t *testing.T) {
	tests := []struct {
		name               string
		pauseIntervalValue string
		expectedDuration   time.Duration
		invalidDuration    bool
	}{
		{
			name:               "valid duration",
			pauseIntervalValue: "10s",
			expectedDuration:   10 * time.Second,
			invalidDuration:    false,
		},
		{
			name:               "valid minute duration",
			pauseIntervalValue: "2m",
			expectedDuration:   2 * time.Minute,
			invalidDuration:    false,
		},
		{
			name:               "invalid duration",
			pauseIntervalValue: "invalid",
			expectedDuration:   0,
			invalidDuration:    true,
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			actualDuration, err := ParsePauseDuration(test.pauseIntervalValue)

			if test.invalidDuration {
				assert.Error(t, err)
			} else {
				assert.NoError(t, err)
				assert.Equal(t, test.expectedDuration, actualDuration)
			}
		})
	}
}

func TestHandleMissingTimerSimple(t *testing.T) {
	tests := []struct {
		name           string
		deployment     *appsv1.Deployment
		shouldBePaused bool // Should be unpaused after HandleMissingTimer ?
	}{
		{
			name: "deployment paused by reloader, pause period has expired and no timer",
			deployment: &appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{
					Name: "test-deployment-1",
					Annotations: map[string]string{
						options.PauseDeploymentTimeAnnotation: time.Now().Add(-6 * time.Minute).Format(time.RFC3339),
						options.PauseDeploymentAnnotation:     "5m",
					},
				},
				Spec: appsv1.DeploymentSpec{
					Paused: true,
				},
			},
			shouldBePaused: false,
		},
		{
			name: "deployment paused by reloader, pause period expires in the future and no timer",
			deployment: &appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{
					Name: "test-deployment-2",
					Annotations: map[string]string{
						options.PauseDeploymentTimeAnnotation: time.Now().Add(1 * time.Minute).Format(time.RFC3339),
						options.PauseDeploymentAnnotation:     "5m",
					},
				},
				Spec: appsv1.DeploymentSpec{
					Paused: true,
				},
			},
			shouldBePaused: true,
		},
	}

	for _, test := range tests {
		// Clean up any timers at the end of the test
		defer func() {
			for key, timer := range activeTimers {
				timer.Stop()
				delete(activeTimers, key)
			}
		}()

		t.Run(test.name, func(t *testing.T) {
			fakeClient := testclient.NewClientset()
			clients := kube.Clients{
				KubernetesClient: fakeClient,
			}

			_, err := fakeClient.AppsV1().Deployments("default").Create(
				context.TODO(),
				test.deployment,
				metav1.CreateOptions{})
			assert.NoError(t, err, "Expected no error when creating deployment")

			pauseDuration, _ := ParsePauseDuration(test.deployment.Annotations[options.PauseDeploymentAnnotation])
			HandleMissingTimer(test.deployment, pauseDuration, clients, "default")

			updatedDeployment, _ := fakeClient.AppsV1().Deployments("default").Get(context.TODO(), test.deployment.Name, metav1.GetOptions{})

			assert.Equal(t, test.shouldBePaused, updatedDeployment.Spec.Paused,
				"Deployment should have correct paused state after timer expiration")

			if test.shouldBePaused {
				pausedAtAnnotationValue := updatedDeployment.Annotations[options.PauseDeploymentTimeAnnotation]
				assert.NotEmpty(t, pausedAtAnnotationValue,
					"Pause annotation should be present and contain a value when deployment is paused")
			}
		})
	}
}

func TestPauseDeployment(t *testing.T) {
	tests := []struct {
		name               string
		deployment         *appsv1.Deployment
		expectedError      bool
		expectedPaused     bool
		expectedAnnotation bool // Should have pause time annotation
		pauseInterval      string
	}{
		{
			name: "deployment without pause annotation",
			deployment: &appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{
					Name:        "test-deployment",
					Annotations: map[string]string{},
				},
				Spec: appsv1.DeploymentSpec{
					Paused: false,
				},
			},
			expectedError:      true,
			expectedPaused:     false,
			expectedAnnotation: false,
			pauseInterval:      "",
		},
		{
			name: "deployment already paused but not by reloader",
			deployment: &appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{
					Name: "test-deployment",
					Annotations: map[string]string{
						options.PauseDeploymentAnnotation: "5m",
					},
				},
				Spec: appsv1.DeploymentSpec{
					Paused: true,
				},
			},
			expectedError:      false,
			expectedPaused:     true,
			expectedAnnotation: false,
			pauseInterval:      "5m",
		},
		{
			name: "deployment unpaused that needs to be paused by reloader",
			deployment: &appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{
					Name: "test-deployment-3",
					Annotations: map[string]string{
						options.PauseDeploymentAnnotation: "5m",
					},
				},
				Spec: appsv1.DeploymentSpec{
					Paused: false,
				},
			},
			expectedError:      false,
			expectedPaused:     true,
			expectedAnnotation: true,
			pauseInterval:      "5m",
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			fakeClient := testclient.NewClientset()
			clients := kube.Clients{
				KubernetesClient: fakeClient,
			}

			_, err := fakeClient.AppsV1().Deployments("default").Create(
				context.TODO(),
				test.deployment,
				metav1.CreateOptions{})
			assert.NoError(t, err, "Expected no error when creating deployment")

			updatedDeployment, err := PauseDeployment(test.deployment, clients, "default", test.pauseInterval)
			if test.expectedError {
				assert.Error(t, err, "Expected an error pausing the deployment")
				return
			} else {
				assert.NoError(t, err, "Expected no error pausing the deployment")
			}

			assert.Equal(t, test.expectedPaused, updatedDeployment.Spec.Paused,
				"Deployment should have correct paused state after pause")

			if test.expectedAnnotation {
				pausedAtAnnotationValue := updatedDeployment.Annotations[options.PauseDeploymentTimeAnnotation]
				assert.NotEmpty(t, pausedAtAnnotationValue,
					"Pause annotation should be present and contain a value when deployment is paused")
			} else {
				pausedAtAnnotationValue := updatedDeployment.Annotations[options.PauseDeploymentTimeAnnotation]
				assert.Empty(t, pausedAtAnnotationValue,
					"Pause annotation should not be present when deployment has not been paused by reloader")
			}
		})
	}
}

// Simple helper function for test cases
func FindDeploymentByName(deployments []runtime.Object, deploymentName string) (*appsv1.Deployment, error) {
	for _, deployment := range deployments {
		accessor, err := meta.Accessor(deployment)
		if err != nil {
			return nil, fmt.Errorf("error getting accessor for item: %v", err)
		}
		if accessor.GetName() == deploymentName {
			deploymentObj, ok := deployment.(*appsv1.Deployment)
			if !ok {
				return nil, fmt.Errorf("failed to cast to Deployment")
			}
			return deploymentObj, nil
		}
	}
	return nil, fmt.Errorf("deployment '%s' not found", deploymentName)
}
@@ -1,12 +1,16 @@
package handler

import (
+	"time"
+
	"github.com/sirupsen/logrus"
	"github.com/stakater/Reloader/internal/pkg/metrics"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/internal/pkg/util"
+	"github.com/stakater/Reloader/pkg/common"
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
+	csiv1 "sigs.k8s.io/secrets-store-csi-driver/apis/v1"
)

// ResourceUpdatedHandler contains updated objects
@@ -15,38 +19,79 @@ type ResourceUpdatedHandler struct {
	OldResource interface{}
	Collectors  metrics.Collectors
	Recorder    record.EventRecorder
+	EnqueueTime time.Time // Time when this handler was added to the queue
}

+// GetEnqueueTime returns when this handler was enqueued
+func (r ResourceUpdatedHandler) GetEnqueueTime() time.Time {
+	return r.EnqueueTime
+}

// Handle processes the updated resource
func (r ResourceUpdatedHandler) Handle() error {
+	startTime := time.Now()
+	result := "error"
+
+	defer func() {
+		r.Collectors.RecordReconcile(result, time.Since(startTime))
+	}()
+
	if r.Resource == nil || r.OldResource == nil {
		logrus.Errorf("Resource update handler received nil resource")
-	} else {
-		config, oldSHAData := r.GetConfig()
-		if config.SHAValue != oldSHAData {
-			// Send a webhook if update
-			if options.WebhookUrl != "" {
-				return sendUpgradeWebhook(config, options.WebhookUrl)
-			}
-			// process resource based on its type
-			return doRollingUpgrade(config, r.Collectors, r.Recorder)
-		}
+		return nil
	}

+	config, oldSHAData := r.GetConfig()
+	if config.SHAValue != oldSHAData {
+		// Send a webhook if update
+		if options.WebhookUrl != "" {
+			err := sendUpgradeWebhook(config, options.WebhookUrl)
+			if err == nil {
+				result = "success"
+			}
+			return err
+		}
+		// process resource based on its type
+		err := doRollingUpgrade(config, r.Collectors, r.Recorder, invokeReloadStrategy)
+		if err == nil {
+			result = "success"
+		}
+		return err
+	}

+	// No data change - skip
+	result = "skipped"
+	r.Collectors.RecordSkipped("no_data_change")
+	return nil
}

// GetConfig gets configurations containing SHA, annotations, namespace and resource name
-func (r ResourceUpdatedHandler) GetConfig() (util.Config, string) {
-	var oldSHAData string
-	var config util.Config
-	if _, ok := r.Resource.(*v1.ConfigMap); ok {
-		oldSHAData = util.GetSHAfromConfigmap(r.OldResource.(*v1.ConfigMap))
-		config = util.GetConfigmapConfig(r.Resource.(*v1.ConfigMap))
-	} else if _, ok := r.Resource.(*v1.Secret); ok {
-		oldSHAData = util.GetSHAfromSecret(r.OldResource.(*v1.Secret).Data)
-		config = util.GetSecretConfig(r.Resource.(*v1.Secret))
-	} else {
-		logrus.Warnf("Invalid resource: Resource should be 'Secret' or 'Configmap' but found, %v", r.Resource)
+func (r ResourceUpdatedHandler) GetConfig() (common.Config, string) {
+	var (
+		oldSHAData string
+		config     common.Config
+	)
+
+	switch res := r.Resource.(type) {
+	case *v1.ConfigMap:
+		if old, ok := r.OldResource.(*v1.ConfigMap); ok && old != nil {
+			oldSHAData = util.GetSHAfromConfigmap(old)
+		}
+		config = common.GetConfigmapConfig(res)
+
+	case *v1.Secret:
+		if old, ok := r.OldResource.(*v1.Secret); ok && old != nil {
+			oldSHAData = util.GetSHAfromSecret(old.Data)
+		}
+		config = common.GetSecretConfig(res)
+
+	case *csiv1.SecretProviderClassPodStatus:
+		if old, ok := r.OldResource.(*csiv1.SecretProviderClassPodStatus); ok && old != nil && old.Status.Objects != nil {
+			oldSHAData = util.GetSHAfromSecretProviderClassPodStatus(old.Status)
+		}
+		config = common.GetSecretProviderClassPodStatusConfig(res)
+	default:
+		logrus.Warnf("Invalid resource: Resource should be 'Secret', 'Configmap' or 'SecretProviderClassPodStatus' but found, %T", r.Resource)
	}
	return config, oldSHAData
}

@@ -2,14 +2,14 @@ package handler

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"os"
	"regexp"
	"strconv"
	"strings"
	"time"

	"github.com/parnurzeal/gorequest"
	"github.com/prometheus/client_golang/prometheus"
@@ -20,101 +20,132 @@ import (
	"github.com/stakater/Reloader/internal/pkg/metrics"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/internal/pkg/util"
+	"github.com/stakater/Reloader/pkg/common"
	"github.com/stakater/Reloader/pkg/kube"
+	app "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	patchtypes "k8s.io/apimachinery/pkg/types"
+	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/record"
+	"k8s.io/client-go/util/retry"
)

// GetDeploymentRollingUpgradeFuncs returns all callback funcs for a deployment
func GetDeploymentRollingUpgradeFuncs() callbacks.RollingUpgradeFuncs {
	return callbacks.RollingUpgradeFuncs{
		ItemFunc:           callbacks.GetDeploymentItem,
		ItemsFunc:          callbacks.GetDeploymentItems,
		AnnotationsFunc:    callbacks.GetDeploymentAnnotations,
		PodAnnotationsFunc: callbacks.GetDeploymentPodAnnotations,
		ContainersFunc:     callbacks.GetDeploymentContainers,
		InitContainersFunc: callbacks.GetDeploymentInitContainers,
		UpdateFunc:         callbacks.UpdateDeployment,
		PatchFunc:          callbacks.PatchDeployment,
		PatchTemplatesFunc: callbacks.GetPatchTemplates,
		VolumesFunc:        callbacks.GetDeploymentVolumes,
		ResourceType:       "Deployment",
		SupportsPatch:      true,
	}
}

// GetCronJobCreateJobFuncs returns all callback funcs for a cronjob
func GetCronJobCreateJobFuncs() callbacks.RollingUpgradeFuncs {
	return callbacks.RollingUpgradeFuncs{
		ItemFunc:           callbacks.GetCronJobItem,
		ItemsFunc:          callbacks.GetCronJobItems,
		AnnotationsFunc:    callbacks.GetCronJobAnnotations,
		PodAnnotationsFunc: callbacks.GetCronJobPodAnnotations,
		ContainersFunc:     callbacks.GetCronJobContainers,
		InitContainersFunc: callbacks.GetCronJobInitContainers,
		UpdateFunc:         callbacks.CreateJobFromCronjob,
		PatchFunc:          callbacks.PatchCronJob,
		PatchTemplatesFunc: func() callbacks.PatchTemplates { return callbacks.PatchTemplates{} },
		VolumesFunc:        callbacks.GetCronJobVolumes,
		ResourceType:       "CronJob",
		SupportsPatch:      false,
	}
}

// GetJobCreateJobFuncs returns all callback funcs for a job
func GetJobCreateJobFuncs() callbacks.RollingUpgradeFuncs {
	return callbacks.RollingUpgradeFuncs{
		ItemFunc:           callbacks.GetJobItem,
		ItemsFunc:          callbacks.GetJobItems,
		AnnotationsFunc:    callbacks.GetJobAnnotations,
		PodAnnotationsFunc: callbacks.GetJobPodAnnotations,
		ContainersFunc:     callbacks.GetJobContainers,
		InitContainersFunc: callbacks.GetJobInitContainers,
		UpdateFunc:         callbacks.ReCreateJobFromjob,
		PatchFunc:          callbacks.PatchJob,
		PatchTemplatesFunc: func() callbacks.PatchTemplates { return callbacks.PatchTemplates{} },
		VolumesFunc:        callbacks.GetJobVolumes,
		ResourceType:       "Job",
		SupportsPatch:      false,
	}
}

// GetDaemonSetRollingUpgradeFuncs returns all callback funcs for a daemonset
func GetDaemonSetRollingUpgradeFuncs() callbacks.RollingUpgradeFuncs {
	return callbacks.RollingUpgradeFuncs{
		ItemFunc:           callbacks.GetDaemonSetItem,
		ItemsFunc:          callbacks.GetDaemonSetItems,
		AnnotationsFunc:    callbacks.GetDaemonSetAnnotations,
		PodAnnotationsFunc: callbacks.GetDaemonSetPodAnnotations,
		ContainersFunc:     callbacks.GetDaemonSetContainers,
		InitContainersFunc: callbacks.GetDaemonSetInitContainers,
		UpdateFunc:         callbacks.UpdateDaemonSet,
		PatchFunc:          callbacks.PatchDaemonSet,
		PatchTemplatesFunc: callbacks.GetPatchTemplates,
		VolumesFunc:        callbacks.GetDaemonSetVolumes,
		ResourceType:       "DaemonSet",
		SupportsPatch:      true,
	}
}

// GetStatefulSetRollingUpgradeFuncs returns all callback funcs for a statefulSet
func GetStatefulSetRollingUpgradeFuncs() callbacks.RollingUpgradeFuncs {
	return callbacks.RollingUpgradeFuncs{
		ItemFunc:           callbacks.GetStatefulSetItem,
		ItemsFunc:          callbacks.GetStatefulSetItems,
		AnnotationsFunc:    callbacks.GetStatefulSetAnnotations,
		PodAnnotationsFunc: callbacks.GetStatefulSetPodAnnotations,
		ContainersFunc:     callbacks.GetStatefulSetContainers,
		InitContainersFunc: callbacks.GetStatefulSetInitContainers,
		UpdateFunc:         callbacks.UpdateStatefulSet,
		PatchFunc:          callbacks.PatchStatefulSet,
		PatchTemplatesFunc: callbacks.GetPatchTemplates,
		VolumesFunc:        callbacks.GetStatefulSetVolumes,
		ResourceType:       "StatefulSet",
	}
}

// GetDeploymentConfigRollingUpgradeFuncs returns all callback funcs for a deploymentConfig
func GetDeploymentConfigRollingUpgradeFuncs() callbacks.RollingUpgradeFuncs {
	return callbacks.RollingUpgradeFuncs{
		ItemsFunc:          callbacks.GetDeploymentConfigItems,
		AnnotationsFunc:    callbacks.GetDeploymentConfigAnnotations,
		PodAnnotationsFunc: callbacks.GetDeploymentConfigPodAnnotations,
		ContainersFunc:     callbacks.GetDeploymentConfigContainers,
		InitContainersFunc: callbacks.GetDeploymentConfigInitContainers,
		UpdateFunc:         callbacks.UpdateDeploymentConfig,
		VolumesFunc:        callbacks.GetDeploymentConfigVolumes,
		ResourceType:       "DeploymentConfig",
		SupportsPatch:      true,
	}
}

// GetArgoRolloutRollingUpgradeFuncs returns all callback funcs for a rollout
func GetArgoRolloutRollingUpgradeFuncs() callbacks.RollingUpgradeFuncs {
	return callbacks.RollingUpgradeFuncs{
		ItemFunc:           callbacks.GetRolloutItem,
		ItemsFunc:          callbacks.GetRolloutItems,
		AnnotationsFunc:    callbacks.GetRolloutAnnotations,
		PodAnnotationsFunc: callbacks.GetRolloutPodAnnotations,
		ContainersFunc:     callbacks.GetRolloutContainers,
		InitContainersFunc: callbacks.GetRolloutInitContainers,
		UpdateFunc:         callbacks.UpdateRollout,
		PatchFunc:          callbacks.PatchRollout,
		PatchTemplatesFunc: func() callbacks.PatchTemplates { return callbacks.PatchTemplates{} },
		VolumesFunc:        callbacks.GetRolloutVolumes,
		ResourceType:       "Rollout",
		SupportsPatch:      false,
	}
}

-func sendUpgradeWebhook(config util.Config, webhookUrl string) error {
-	message := fmt.Sprintf("Changes detected in '%s' of type '%s' in namespace '%s'", config.ResourceName, config.Type, config.Namespace)
-	message += fmt.Sprintf(", Sending webhook to '%s'", webhookUrl)
-	logrus.Infof(message)
+func sendUpgradeWebhook(config common.Config, webhookUrl string) error {
+	logrus.Infof("Changes detected in '%s' of type '%s' in namespace '%s', Sending webhook to '%s'",
+		config.ResourceName, config.Type, config.Namespace, webhookUrl)

	body, errs := sendWebhook(webhookUrl)
	if errs != nil {
		// return the first error
@@ -133,6 +164,12 @@ func sendWebhook(url string) (string, []error) {
		// the reloader seems to retry automatically so no retry logic added
		return "", err
	}
+	defer func() {
+		closeErr := resp.Body.Close()
+		if closeErr != nil {
+			logrus.Error(closeErr)
+		}
+	}()
	var buffer bytes.Buffer
	_, bufferErr := io.Copy(&buffer, resp.Body)
	if bufferErr != nil {
@@ -141,35 +178,48 @@ func sendWebhook(url string) (string, []error) {
	return buffer.String(), nil
}
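sendWebhook is straightforward to exercise against a local listener; a sketch using net/http/httptest (test-only scaffolding, not part of this diff; imports net/http, net/http/httptest, fmt, and testing are assumed):

	func TestSendWebhookSketch(t *testing.T) {
		srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, "ok") // echo a body for sendWebhook to buffer and return
		}))
		defer srv.Close()

		body, errs := sendWebhook(srv.URL)
		if errs != nil || body != "ok" {
			t.Fatalf("unexpected webhook result: %q, %v", body, errs)
		}
	}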

-func doRollingUpgrade(config util.Config, collectors metrics.Collectors, recorder record.EventRecorder) error {
+func doRollingUpgrade(config common.Config, collectors metrics.Collectors, recorder record.EventRecorder, invoke invokeStrategy) error {
	clients := kube.GetClients()

-	err := rollingUpgrade(clients, config, GetDeploymentRollingUpgradeFuncs(), collectors, recorder)
+	// Get ignored workload types to avoid listing resources without RBAC permissions
+	ignoredWorkloadTypes, err := util.GetIgnoredWorkloadTypesList()
	if err != nil {
-		return err
+		logrus.Errorf("Failed to parse ignored workload types: %v", err)
+		ignoredWorkloadTypes = util.List{} // Continue with empty list if parsing fails
	}
-	err = rollingUpgrade(clients, config, GetCronJobCreateJobFuncs(), collectors, recorder)
-	if err != nil {
-		return err
-	}
-	err = rollingUpgrade(clients, config, GetDaemonSetRollingUpgradeFuncs(), collectors, recorder)
-	if err != nil {
-		return err
-	}
-	err = rollingUpgrade(clients, config, GetStatefulSetRollingUpgradeFuncs(), collectors, recorder)
+
+	err = rollingUpgrade(clients, config, GetDeploymentRollingUpgradeFuncs(), collectors, recorder, invoke)
	if err != nil {
		return err
	}

-	if kube.IsOpenshift {
-		err = rollingUpgrade(clients, config, GetDeploymentConfigRollingUpgradeFuncs(), collectors, recorder)
+	// Only process CronJobs if they are not ignored
+	if !ignoredWorkloadTypes.Contains("cronjobs") {
+		err = rollingUpgrade(clients, config, GetCronJobCreateJobFuncs(), collectors, recorder, invoke)
		if err != nil {
			return err
		}
	}

+	// Only process Jobs if they are not ignored
+	if !ignoredWorkloadTypes.Contains("jobs") {
+		err = rollingUpgrade(clients, config, GetJobCreateJobFuncs(), collectors, recorder, invoke)
+		if err != nil {
+			return err
+		}
+	}
+
+	err = rollingUpgrade(clients, config, GetDaemonSetRollingUpgradeFuncs(), collectors, recorder, invoke)
+	if err != nil {
+		return err
+	}
+	err = rollingUpgrade(clients, config, GetStatefulSetRollingUpgradeFuncs(), collectors, recorder, invoke)
+	if err != nil {
+		return err
+	}

	if options.IsArgoRollouts == "true" {
-		err = rollingUpgrade(clients, config, GetArgoRolloutRollingUpgradeFuncs(), collectors, recorder)
+		err = rollingUpgrade(clients, config, GetArgoRolloutRollingUpgradeFuncs(), collectors, recorder, invoke)
		if err != nil {
			return err
		}
@@ -178,97 +228,162 @@ func doRollingUpgrade(config util.Config, collectors metrics.Collectors, recorde
	return nil
}
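The ignored-workload check relies on util.GetIgnoredWorkloadTypesList returning a util.List with a Contains method; neither is defined in this diff, so the following shape is only an assumption for illustration:

	// Sketch only: the real util.List may differ.
	type List []string

	func (l List) Contains(s string) bool {
		return slices.Contains(l, s)
	}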

-func rollingUpgrade(clients kube.Clients, config util.Config, upgradeFuncs callbacks.RollingUpgradeFuncs, collectors metrics.Collectors, recorder record.EventRecorder) error {
-
-	err := PerformRollingUpgrade(clients, config, upgradeFuncs, collectors, recorder)
+func rollingUpgrade(clients kube.Clients, config common.Config, upgradeFuncs callbacks.RollingUpgradeFuncs, collectors metrics.Collectors, recorder record.EventRecorder, strategy invokeStrategy) error {
+	err := PerformAction(clients, config, upgradeFuncs, collectors, recorder, strategy)
	if err != nil {
		logrus.Errorf("Rolling upgrade for '%s' failed with error = %v", config.ResourceName, err)
	}
	return err
}

-// PerformRollingUpgrade upgrades the deployment if there is any change in configmap or secret data
-func PerformRollingUpgrade(clients kube.Clients, config util.Config, upgradeFuncs callbacks.RollingUpgradeFuncs, collectors metrics.Collectors, recorder record.EventRecorder) error {
+// PerformAction invokes the deployment if there is any change in configmap or secret data
+func PerformAction(clients kube.Clients, config common.Config, upgradeFuncs callbacks.RollingUpgradeFuncs, collectors metrics.Collectors, recorder record.EventRecorder, strategy invokeStrategy) error {
	items := upgradeFuncs.ItemsFunc(clients, config.Namespace)

-	for _, i := range items {
-		// find correct annotation and update the resource
-		annotations := upgradeFuncs.AnnotationsFunc(i)
-		annotationValue, found := annotations[config.Annotation]
-		searchAnnotationValue, foundSearchAnn := annotations[options.AutoSearchAnnotation]
-		reloaderEnabledValue, foundAuto := annotations[options.ReloaderAutoAnnotation]
-		if !found && !foundAuto && !foundSearchAnn {
-			annotations = upgradeFuncs.PodAnnotationsFunc(i)
-			annotationValue = annotations[config.Annotation]
-			searchAnnotationValue = annotations[options.AutoSearchAnnotation]
-			reloaderEnabledValue = annotations[options.ReloaderAutoAnnotation]
-		}
-		result := constants.NotUpdated
-		reloaderEnabled, _ := strconv.ParseBool(reloaderEnabledValue)
-		if reloaderEnabled || reloaderEnabledValue == "" && options.AutoReloadAll {
-			result = invokeReloadStrategy(upgradeFuncs, i, config, true)
-		}
+	// Record workloads scanned
+	collectors.RecordWorkloadsScanned(upgradeFuncs.ResourceType, len(items))

-		if result != constants.Updated && annotationValue != "" {
-			values := strings.Split(annotationValue, ",")
-			for _, value := range values {
-				value = strings.TrimSpace(value)
-				re := regexp.MustCompile("^" + value + "$")
-				if re.Match([]byte(config.ResourceName)) {
-					result = invokeReloadStrategy(upgradeFuncs, i, config, false)
-					if result == constants.Updated {
-						break
-					}
-				}
-			}
+	matchedCount := 0
+	for _, item := range items {
+		matched, err := retryOnConflict(retry.DefaultRetry, func(fetchResource bool) (bool, error) {
+			return upgradeResource(clients, config, upgradeFuncs, collectors, recorder, strategy, item, fetchResource)
+		})
+		if err != nil {
+			return err
		}

-		if result != constants.Updated && searchAnnotationValue == "true" {
-			matchAnnotationValue := config.ResourceAnnotations[options.SearchMatchAnnotation]
-			if matchAnnotationValue == "true" {
-				result = invokeReloadStrategy(upgradeFuncs, i, config, true)
-			}
+		if matched {
+			matchedCount++
		}
	}

-		if result == constants.Updated {
-			accessor, err := meta.Accessor(i)
+	// Record workloads matched
+	collectors.RecordWorkloadsMatched(upgradeFuncs.ResourceType, matchedCount)

+	return nil
+}

+func retryOnConflict(backoff wait.Backoff, fn func(_ bool) (bool, error)) (bool, error) {
+	var lastError error
+	var matched bool
+	fetchResource := false // do not fetch resource on first attempt, already done by ItemsFunc
+	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
+		var err error
+		matched, err = fn(fetchResource)
+		fetchResource = true
+		switch {
+		case err == nil:
+			return true, nil
+		case apierrors.IsConflict(err):
+			lastError = err
+			return false, nil
+		default:
+			return false, err
+		}
+	})
+	if wait.Interrupted(err) {
+		err = lastError
+	}
+	return matched, err
+}
|
||||
|
||||
func upgradeResource(clients kube.Clients, config common.Config, upgradeFuncs callbacks.RollingUpgradeFuncs, collectors metrics.Collectors, recorder record.EventRecorder, strategy invokeStrategy, resource runtime.Object, fetchResource bool) (bool, error) {
|
||||
actionStartTime := time.Now()
|
||||
|
||||
accessor, err := meta.Accessor(resource)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
resourceName := accessor.GetName()
|
||||
if fetchResource {
|
||||
resource, err = upgradeFuncs.ItemFunc(clients, resourceName, config.Namespace)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
}
|
||||
if config.Type == constants.SecretProviderClassEnvVarPostfix {
|
||||
populateAnnotationsFromSecretProviderClass(clients, &config)
|
||||
}
|
||||
|
||||
annotations := upgradeFuncs.AnnotationsFunc(resource)
|
||||
podAnnotations := upgradeFuncs.PodAnnotationsFunc(resource)
|
||||
result := common.ShouldReload(config, upgradeFuncs.ResourceType, annotations, podAnnotations, common.GetCommandLineOptions())
|
||||
|
||||
if !result.ShouldReload {
|
||||
logrus.Debugf("No changes detected in '%s' of type '%s' in namespace '%s'", config.ResourceName, config.Type, config.Namespace)
|
||||
return false, nil
|
||||
}
|
||||
|
||||
strategyResult := strategy(upgradeFuncs, resource, config, result.AutoReload)
|
||||
|
||||
if strategyResult.Result != constants.Updated {
|
||||
collectors.RecordSkipped("strategy_not_updated")
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// find correct annotation and update the resource
|
||||
pauseInterval, foundPauseInterval := annotations[options.PauseDeploymentAnnotation]
|
||||
|
||||
	if foundPauseInterval {
		deployment, ok := resource.(*app.Deployment)
		if !ok {
			logrus.Warnf("Annotation '%s' is only applicable to deployments", options.PauseDeploymentAnnotation)
		} else {
			_, err = PauseDeployment(deployment, clients, config.Namespace, pauseInterval)
			if err != nil {
				logrus.Errorf("Failed to pause deployment '%s' in namespace '%s': %v", resourceName, config.Namespace, err)
				return true, err
			}
		}
	}

	if upgradeFuncs.SupportsPatch && strategyResult.Patch != nil {
		err = upgradeFuncs.PatchFunc(clients, config.Namespace, resource, strategyResult.Patch.Type, strategyResult.Patch.Bytes)
	} else {
		err = upgradeFuncs.UpdateFunc(clients, config.Namespace, resource)
	}

	actionLatency := time.Since(actionStartTime)

	if err != nil {
		message := fmt.Sprintf("Update for '%s' of type '%s' in namespace '%s' failed with error %v", resourceName, upgradeFuncs.ResourceType, config.Namespace, err)
		logrus.Error(message)

		collectors.Reloaded.With(prometheus.Labels{"success": "false"}).Inc()
		collectors.ReloadedByNamespace.With(prometheus.Labels{"success": "false", "namespace": config.Namespace}).Inc()
		collectors.RecordAction(upgradeFuncs.ResourceType, "error", actionLatency)
		if recorder != nil {
			recorder.Event(resource, v1.EventTypeWarning, "ReloadFail", message)
		}
		return true, err
	}

	message := fmt.Sprintf("Changes detected in '%s' of type '%s' in namespace '%s'", config.ResourceName, config.Type, config.Namespace)
	message += fmt.Sprintf(", updated '%s' of type '%s' in namespace '%s'", resourceName, upgradeFuncs.ResourceType, config.Namespace)
	logrus.Info(message)

	collectors.Reloaded.With(prometheus.Labels{"success": "true"}).Inc()
	collectors.ReloadedByNamespace.With(prometheus.Labels{"success": "true", "namespace": config.Namespace}).Inc()
	collectors.RecordAction(upgradeFuncs.ResourceType, "success", actionLatency)
	if recorder != nil {
		recorder.Event(resource, v1.EventTypeNormal, "Reloaded", message)
	}

	alertOnReload, ok := os.LookupEnv("ALERT_ON_RELOAD")
	if ok && alertOnReload == "true" {
		msg := fmt.Sprintf(
			"Reloader detected changes in *%s* of type *%s* in namespace *%s*. Hence reloaded *%s* of type *%s* in namespace *%s*",
			config.ResourceName, config.Type, config.Namespace, resourceName, upgradeFuncs.ResourceType, config.Namespace)
		alert.SendWebhookAlert(msg)
	}

	return true, nil
}
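
// Illustrative note (assumption, not taken from this change): when the
// annotations reload strategy is active and the workload supports patching,
// strategyResult.Patch carries a strategic-merge patch, so PatchFunc sends only
// the changed pod-template annotation instead of rewriting the whole object. A
// hypothetical patch body could look like:
//
//	{"spec":{"template":{"metadata":{"annotations":{"reloader.stakater.com/last-reloaded-from":"..."}}}}}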

func getVolumeMountName(volumes []v1.Volume, mountType string, volumeName string) string {
	for i := range volumes {
		switch mountType {
		case constants.ConfigmapEnvVarPostfix:
			if volumes[i].ConfigMap != nil && volumes[i].ConfigMap.Name == volumeName {
				return volumes[i].Name
			}
			if volumes[i].Projected != nil {
				for j := range volumes[i].Projected.Sources {
					if volumes[i].Projected.Sources[j].ConfigMap != nil && volumes[i].Projected.Sources[j].ConfigMap.Name == volumeName {
						return volumes[i].Name
					}
				}
			}
		case constants.SecretEnvVarPostfix:
			if volumes[i].Secret != nil && volumes[i].Secret.SecretName == volumeName {
				return volumes[i].Name
			}
			if volumes[i].Projected != nil {
				for j := range volumes[i].Projected.Sources {
					if volumes[i].Projected.Sources[j].Secret != nil && volumes[i].Projected.Sources[j].Secret.Name == volumeName {
						return volumes[i].Name
					}
				}
			}
		case constants.SecretProviderClassEnvVarPostfix:
			if volumes[i].CSI != nil && volumes[i].CSI.VolumeAttributes["secretProviderClass"] == volumeName {
				return volumes[i].Name
			}
		}
	}

	return ""
}
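
// Sketch of the CSI lookup above (hypothetical values, not from this change):
// a volume backed by the secrets-store CSI driver is matched on its
// "secretProviderClass" volume attribute.
//
//	volumes := []v1.Volume{{
//		Name: "spc-vol",
//		VolumeSource: v1.VolumeSource{CSI: &v1.CSIVolumeSource{
//			Driver:           "secrets-store.csi.k8s.io",
//			VolumeAttributes: map[string]string{"secretProviderClass": "my-spc"},
//		}},
//	}}
//	getVolumeMountName(volumes, constants.SecretProviderClassEnvVarPostfix, "my-spc") // returns "spc-vol"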

func getContainerWithEnvReference(containers []v1.Container, resourceName string, resourceType string) *v1.Container {
	for i := range containers {
		envs := containers[i].Env
		for j := range envs {
			envVarSource := envs[j].ValueFrom
			if envVarSource != nil {
				if resourceType == constants.SecretEnvVarPostfix && envVarSource.SecretKeyRef != nil && envVarSource.SecretKeyRef.Name == resourceName {
					return &containers[i]
				} else if resourceType == constants.ConfigmapEnvVarPostfix && envVarSource.ConfigMapKeyRef != nil && envVarSource.ConfigMapKeyRef.Name == resourceName {
					return &containers[i]
				}
			}
		}

		envsFrom := containers[i].EnvFrom
		for j := range envsFrom {
			if resourceType == constants.SecretEnvVarPostfix && envsFrom[j].SecretRef != nil && envsFrom[j].SecretRef.Name == resourceName {
				return &containers[i]
			} else if resourceType == constants.ConfigmapEnvVarPostfix && envsFrom[j].ConfigMapRef != nil && envsFrom[j].ConfigMapRef.Name == resourceName {
				return &containers[i]
			}
		}
	}

	return nil
}

func getContainerUsingResource(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) *v1.Container {
	volumes := upgradeFuncs.VolumesFunc(item)
	containers := upgradeFuncs.ContainersFunc(item)
	initContainers := upgradeFuncs.InitContainersFunc(item)
	var container *v1.Container

	// Get the volumeMountName to find the volumeMount in a container
	volumeMountName := getVolumeMountName(volumes, config.Type, config.ResourceName)
	// Get the container with the mounted configmap/secret
	if volumeMountName != "" {
		container = getContainerWithVolumeMount(containers, volumeMountName)
		if container == nil && len(initContainers) > 0 {
			container = getContainerWithVolumeMount(initContainers, volumeMountName)
			if container != nil {
				// if configmap/secret is being used in init container then return the first Pod container to save reloader env
				if len(containers) > 0 {
					return &containers[0]
				}
				// No containers available, return nil to avoid crash
				return nil
			}
		} else if container != nil {
			return container
		}
	}

	// Get the container with the referenced secret or configmap as an env var
	container = getContainerWithEnvReference(containers, config.ResourceName, config.Type)
	if container == nil && len(initContainers) > 0 {
		container = getContainerWithEnvReference(initContainers, config.ResourceName, config.Type)
		if container != nil {
			// if configmap/secret is being used in init container then return the first Pod container to save reloader env
			if len(containers) > 0 {
				return &containers[0]
			}
			// No containers available, return nil to avoid crash
			return nil
		}
	}

	// Get the first container if the annotation is related to specified configmap or secret i.e. configmap.reloader.stakater.com/reload
	if container == nil && !autoReload {
		if len(containers) > 0 {
			return &containers[0]
		}
		// No containers available, return nil to avoid crash
		return nil
	}

	return container
}

type Patch struct {
	Type  patchtypes.PatchType
	Bytes []byte
}

type InvokeStrategyResult struct {
	Result constants.Result
	Patch  *Patch
}

type invokeStrategy func(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) InvokeStrategyResult

func invokeReloadStrategy(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) InvokeStrategyResult {
	if options.ReloadStrategy == constants.AnnotationsReloadStrategy {
		return updatePodAnnotations(upgradeFuncs, item, config, autoReload)
	}

	return updateContainerEnvVars(upgradeFuncs, item, config, autoReload)
}
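
// Usage sketch (assumes Reloader's --reload-strategy flag, which populates
// options.ReloadStrategy): running with "annotations" routes every reload
// through updatePodAnnotations; any other value falls back to the env-vars
// strategy.
//
//	reloader --reload-strategy=annotations   # pod-template annotation update/patch
//	reloader --reload-strategy=env-vars      # env var update (default)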

func updatePodAnnotations(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) InvokeStrategyResult {
	container := getContainerUsingResource(upgradeFuncs, item, config, autoReload)
	if container == nil {
		return InvokeStrategyResult{constants.NoContainerFound, nil}
	}

	// Generate reloaded annotations. Attaching this to the item's annotation will trigger a rollout
	// Note: the data on this struct is purely informational and is not used for future updates
	reloadSource := common.NewReloadSourceFromConfig(config, []string{container.Name})
	annotations, patch, err := createReloadedAnnotations(&reloadSource, upgradeFuncs)
	if err != nil {
		logrus.Errorf("Failed to create reloaded annotations for %s! error = %v", config.ResourceName, err)
		return InvokeStrategyResult{constants.NotUpdated, nil}
	}

	// Copy all generated annotations to the item's pod annotations
	pa := upgradeFuncs.PodAnnotationsFunc(item)
	if pa == nil {
		return InvokeStrategyResult{constants.NotUpdated, nil}
	}

	if config.Type == constants.SecretProviderClassEnvVarPostfix && secretProviderClassAnnotationReloaded(pa, config) {
		return InvokeStrategyResult{constants.NotUpdated, nil}
	}

	for k, v := range annotations {
		pa[k] = v
	}

	return InvokeStrategyResult{constants.Updated, &Patch{Type: patchtypes.StrategicMergePatchType, Bytes: patch}}
}

func secretProviderClassAnnotationReloaded(oldAnnotations map[string]string, newConfig common.Config) bool {
	annotation := oldAnnotations[getReloaderAnnotationKey()]
	return strings.Contains(annotation, newConfig.ResourceName) && strings.Contains(annotation, newConfig.SHAValue)
}

func getReloaderAnnotationKey() string {
	return fmt.Sprintf("%s/%s",
		constants.ReloaderAnnotationPrefix,
		constants.LastReloadedFromAnnotation,
	)
}
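
// Assuming the upstream constants hold "reloader.stakater.com" and
// "last-reloaded-from", getReloaderAnnotationKey() evaluates to
// "reloader.stakater.com/last-reloaded-from"; the check above simply looks for
// the resource name and SHA inside that annotation's JSON value.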

func createReloadedAnnotations(target *common.ReloadSource, upgradeFuncs callbacks.RollingUpgradeFuncs) (map[string]string, []byte, error) {
	if target == nil {
		return nil, nil, errors.New("target is required")
	}

	// Create a single "last-reloaded-from" annotation that stores metadata about the
	// resource that triggered the reload.
	// Intentionally only storing the last item in order to keep
	// the generated annotations as small as possible.
	annotations := make(map[string]string)
	lastReloadedResourceName := getReloaderAnnotationKey()

	lastReloadedResource, err := json.Marshal(target)
	if err != nil {
		return nil, nil, err
	}

	annotations[lastReloadedResourceName] = string(lastReloadedResource)

	var patch []byte
	if upgradeFuncs.SupportsPatch {
		escapedValue, err := jsonEscape(annotations[lastReloadedResourceName])
		if err != nil {
			return nil, nil, err
		}
		patch = fmt.Appendf(nil, upgradeFuncs.PatchTemplatesFunc().AnnotationTemplate, lastReloadedResourceName, escapedValue)
	}

	return annotations, patch, nil
}
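
// Illustrative result (hypothetical values): for a ConfigMap "app-config" whose
// contents hash to "abc123", the returned map contains a single entry such as
//
//	reloader.stakater.com/last-reloaded-from: {"name":"app-config","hash":"abc123",...}
//
// and, when the workload supports patching, the same key/value pair rendered
// through its AnnotationTemplate as a strategic-merge patch body.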

func getEnvVarName(resourceName string, typeName string) string {
	return constants.EnvVarPrefix + util.ConvertToEnvVarName(resourceName) + "_" + typeName
}
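
// Example (assuming constants.EnvVarPrefix is "STAKATER_" and
// util.ConvertToEnvVarName upper-cases the name and replaces dashes with
// underscores):
//
//	getEnvVarName("app-config", "CONFIGMAP") // "STAKATER_APP_CONFIG_CONFIGMAP"
//
// Writing the resource SHA into this env var changes the pod template and so
// triggers a rolling update.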

func updateContainerEnvVars(upgradeFuncs callbacks.RollingUpgradeFuncs, item runtime.Object, config common.Config, autoReload bool) InvokeStrategyResult {
	envVar := getEnvVarName(config.ResourceName, config.Type)
	container := getContainerUsingResource(upgradeFuncs, item, config, autoReload)

	if container == nil {
		return InvokeStrategyResult{constants.NoContainerFound, nil}
	}

	if config.Type == constants.SecretProviderClassEnvVarPostfix && secretProviderClassEnvReloaded(upgradeFuncs.ContainersFunc(item), envVar, config.SHAValue) {
		return InvokeStrategyResult{constants.NotUpdated, nil}
	}

	// update if the env var exists
	updateResult := updateEnvVar(container, envVar, config.SHAValue)

	// if no existing env var exists, create one
	if updateResult == constants.NoEnvVarFound {
		e := v1.EnvVar{
			Name:  envVar,
			Value: config.SHAValue,
		}
		container.Env = append(container.Env, e)
		updateResult = constants.Updated
	}

	var patch []byte
	if upgradeFuncs.SupportsPatch {
		patch = fmt.Appendf(nil, upgradeFuncs.PatchTemplatesFunc().EnvVarTemplate, container.Name, envVar, config.SHAValue)
	}

	return InvokeStrategyResult{updateResult, &Patch{Type: patchtypes.StrategicMergePatchType, Bytes: patch}}
}

func updateEnvVar(container *v1.Container, envVar string, shaData string) constants.Result {
	envs := container.Env
	for j := range envs {
		if envs[j].Name == envVar {
			if envs[j].Value != shaData {
				envs[j].Value = shaData
				return constants.Updated
			}
			return constants.NotUpdated
		}
	}

	return constants.NoEnvVarFound
}

func secretProviderClassEnvReloaded(containers []v1.Container, envVar string, shaData string) bool {
	for _, container := range containers {
		for _, env := range container.Env {
			if env.Name == envVar {
				return env.Value == shaData
			}
		}
	}
	return false
}

func populateAnnotationsFromSecretProviderClass(clients kube.Clients, config *common.Config) {
	obj, err := clients.CSIClient.SecretsstoreV1().SecretProviderClasses(config.Namespace).Get(context.Background(), config.ResourceName, metav1.GetOptions{})
	annotations := make(map[string]string)
	if err != nil {
		if apierrors.IsNotFound(err) {
			logrus.Warnf("SecretProviderClass '%s' not found in namespace '%s'", config.ResourceName, config.Namespace)
		} else {
			logrus.Errorf("Failed to get SecretProviderClass '%s' in namespace '%s': %v", config.ResourceName, config.Namespace, err)
		}
	} else if obj.Annotations != nil {
		annotations = obj.Annotations
	}
	config.ResourceAnnotations = annotations
}

func jsonEscape(toEscape string) (string, error) {
	bytes, err := json.Marshal(toEscape)
	if err != nil {
		return "", err
	}
	escaped := string(bytes)
	return escaped[1 : len(escaped)-1], nil
}
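
// jsonEscape relies on json.Marshal quoting the string and then strips the
// surrounding quotes, e.g. jsonEscape(`k8s "rocks"`) returns `k8s \"rocks\"`,
// which can then be embedded safely inside a JSON patch template.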

@@ -16,7 +16,7 @@ import (
	"github.com/stakater/Reloader/internal/pkg/metrics"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/internal/pkg/testutil"
	"github.com/stakater/Reloader/pkg/common"
	"github.com/stakater/Reloader/pkg/kube"
)

@@ -45,7 +45,7 @@ func TestHealthz(t *testing.T) {
	want := 200

	if got != want {
		t.Fatalf("got: %d, want: %d", got, want)
	}

	// Have the liveness probe serve a 500
@@ -63,7 +63,7 @@ func TestHealthz(t *testing.T) {
	want = 500

	if got != want {
		t.Fatalf("got: %d, want: %d", got, want)
	}
}

@@ -89,7 +89,7 @@ func TestRunLeaderElection(t *testing.T) {
	want := 500

	if got != want {
		t.Fatalf("got: %d, want: %d", got, want)
	}

	// Cancel the leader election context, so leadership is released and
@@ -108,12 +108,12 @@ func TestRunLeaderElection(t *testing.T) {
	want = 500

	if got != want {
		t.Fatalf("got: %d, want: %d", got, want)
	}
}

// TestRunLeaderElectionWithControllers tests that leadership election works
// with real controllers and that on context cancellation the controllers stop
// running.
func TestRunLeaderElectionWithControllers(t *testing.T) {
	t.Logf("Creating controller")
@@ -159,7 +159,7 @@ func TestRunLeaderElectionWithControllers(t *testing.T) {
	// Verifying deployment update
	logrus.Infof("Verifying pod envvars have been created")
	shaData := testutil.ConvertResourceToSHA(testutil.ConfigmapResourceType, testutil.Namespace, configmapName, "www.stakater.com")
	config := common.Config{
		Namespace:    testutil.Namespace,
		ResourceName: configmapName,
		SHAValue:     shaData,
@@ -186,7 +186,7 @@ func TestRunLeaderElectionWithControllers(t *testing.T) {
	// Verifying that the deployment was not updated as leadership has been lost
	logrus.Infof("Verifying pod envvars have not been updated")
	shaData = testutil.ConvertResourceToSHA(testutil.ConfigmapResourceType, testutil.Namespace, configmapName, "www.stakater.com/new")
	config = common.Config{
		Namespace:    testutil.Namespace,
		ResourceName: configmapName,
		SHAValue:     shaData,

@@ -1,16 +1,204 @@
package metrics

import (
	"context"
	"net/http"
	"net/url"
	"os"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"k8s.io/client-go/tools/metrics"
)

// clientGoRequestMetrics implements metrics.LatencyMetric and metrics.ResultMetric
// to expose client-go's rest_client_requests_total metric
type clientGoRequestMetrics struct {
	requestCounter *prometheus.CounterVec
	requestLatency *prometheus.HistogramVec
}

func (m *clientGoRequestMetrics) Increment(ctx context.Context, code string, method string, host string) {
	m.requestCounter.WithLabelValues(code, method, host).Inc()
}

func (m *clientGoRequestMetrics) Observe(ctx context.Context, verb string, u url.URL, latency time.Duration) {
	m.requestLatency.WithLabelValues(verb, u.Host).Observe(latency.Seconds())
}

var clientGoMetrics = &clientGoRequestMetrics{
	requestCounter: prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "rest_client_requests_total",
			Help: "Number of HTTP requests, partitioned by status code, method, and host.",
		},
		[]string{"code", "method", "host"},
	),
	requestLatency: prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "rest_client_request_duration_seconds",
			Help:    "Request latency in seconds. Broken down by verb and host.",
			Buckets: []float64{0.001, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 30},
		},
		[]string{"verb", "host"},
	),
}

func init() {
	// Register the metrics collectors
	prometheus.MustRegister(clientGoMetrics.requestCounter)
	prometheus.MustRegister(clientGoMetrics.requestLatency)

	// Register our metrics implementation with client-go
	metrics.RequestResult = clientGoMetrics
	metrics.RequestLatency = clientGoMetrics
}

// Collectors holds all Prometheus metrics collectors for Reloader.
type Collectors struct {
	Reloaded            *prometheus.CounterVec
	ReloadedByNamespace *prometheus.CounterVec
	countByNamespace    bool

	ReconcileTotal    *prometheus.CounterVec   // Total reconcile calls by result
	ReconcileDuration *prometheus.HistogramVec // Time spent in reconcile/handler
	ActionTotal       *prometheus.CounterVec   // Total actions by workload kind and result
	ActionLatency     *prometheus.HistogramVec // Time from event to action applied
	SkippedTotal      *prometheus.CounterVec   // Skipped operations by reason
	QueueDepth        prometheus.Gauge         // Current queue depth
	QueueAdds         prometheus.Counter       // Total items added to queue
	QueueLatency      *prometheus.HistogramVec // Time spent in queue
	ErrorsTotal       *prometheus.CounterVec   // Errors by type
	RetriesTotal      prometheus.Counter       // Total retries
	EventsReceived    *prometheus.CounterVec   // Events received by type (add/update/delete)
	EventsProcessed   *prometheus.CounterVec   // Events processed by type and result
	WorkloadsScanned  *prometheus.CounterVec   // Workloads scanned by kind
	WorkloadsMatched  *prometheus.CounterVec   // Workloads matched for reload by kind
}

// RecordReload records a reload event with the given success status and namespace.
// Preserved for backward compatibility.
func (c *Collectors) RecordReload(success bool, namespace string) {
	if c == nil {
		return
	}

	successLabel := "false"
	if success {
		successLabel = "true"
	}

	c.Reloaded.With(prometheus.Labels{"success": successLabel}).Inc()

	if c.countByNamespace {
		c.ReloadedByNamespace.With(prometheus.Labels{
			"success":   successLabel,
			"namespace": namespace,
		}).Inc()
	}
}

// RecordReconcile records a reconcile/handler invocation.
func (c *Collectors) RecordReconcile(result string, duration time.Duration) {
	if c == nil {
		return
	}
	c.ReconcileTotal.With(prometheus.Labels{"result": result}).Inc()
	c.ReconcileDuration.With(prometheus.Labels{"result": result}).Observe(duration.Seconds())
}

// RecordAction records a reload action on a workload.
func (c *Collectors) RecordAction(workloadKind string, result string, latency time.Duration) {
	if c == nil {
		return
	}
	c.ActionTotal.With(prometheus.Labels{"workload_kind": workloadKind, "result": result}).Inc()
	c.ActionLatency.With(prometheus.Labels{"workload_kind": workloadKind}).Observe(latency.Seconds())
}

// RecordSkipped records a skipped operation with reason.
func (c *Collectors) RecordSkipped(reason string) {
	if c == nil {
		return
	}
	c.SkippedTotal.With(prometheus.Labels{"reason": reason}).Inc()
}

// RecordQueueAdd records an item being added to the queue.
func (c *Collectors) RecordQueueAdd() {
	if c == nil {
		return
	}
	c.QueueAdds.Inc()
}

// SetQueueDepth sets the current queue depth.
func (c *Collectors) SetQueueDepth(depth int) {
	if c == nil {
		return
	}
	c.QueueDepth.Set(float64(depth))
}

// RecordQueueLatency records how long an item spent in the queue.
func (c *Collectors) RecordQueueLatency(latency time.Duration) {
	if c == nil {
		return
	}
	c.QueueLatency.With(prometheus.Labels{}).Observe(latency.Seconds())
}

// RecordError records an error by type.
func (c *Collectors) RecordError(errorType string) {
	if c == nil {
		return
	}
	c.ErrorsTotal.With(prometheus.Labels{"type": errorType}).Inc()
}

// RecordRetry records a retry attempt.
func (c *Collectors) RecordRetry() {
	if c == nil {
		return
	}
	c.RetriesTotal.Inc()
}

// RecordEventReceived records an event being received.
func (c *Collectors) RecordEventReceived(eventType string, resourceType string) {
	if c == nil {
		return
	}
	c.EventsReceived.With(prometheus.Labels{"event_type": eventType, "resource_type": resourceType}).Inc()
}

// RecordEventProcessed records an event being processed.
func (c *Collectors) RecordEventProcessed(eventType string, resourceType string, result string) {
	if c == nil {
		return
	}
	c.EventsProcessed.With(prometheus.Labels{"event_type": eventType, "resource_type": resourceType, "result": result}).Inc()
}

// RecordWorkloadsScanned records workloads scanned during a reconcile.
func (c *Collectors) RecordWorkloadsScanned(kind string, count int) {
	if c == nil {
		return
	}
	c.WorkloadsScanned.With(prometheus.Labels{"kind": kind}).Add(float64(count))
}

// RecordWorkloadsMatched records workloads matched for reload.
func (c *Collectors) RecordWorkloadsMatched(kind string, count int) {
	if c == nil {
		return
	}
	c.WorkloadsMatched.With(prometheus.Labels{"kind": kind}).Add(float64(count))
}

func NewCollectors() Collectors {
	// Existing metrics (preserved)
	reloaded := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "reload_executed_total",
			Help:      "Counter of reloads executed by Reloader.",
		},
		[]string{"success"},
	)

	// set 0 as the default value
	reloaded.With(prometheus.Labels{"success": "true"}).Add(0)
	reloaded.With(prometheus.Labels{"success": "false"}).Add(0)

	reloadedByNamespace := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "reload_executed_total_by_namespace",
			Help:      "Counter of reloads executed by Reloader by namespace.",
		},
		[]string{"success", "namespace"},
	)

	reconcileTotal := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "reconcile_total",
			Help:      "Total number of reconcile/handler invocations by result.",
		},
		[]string{"result"},
	)

	reconcileDuration := prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Namespace: "reloader",
			Name:      "reconcile_duration_seconds",
			Help:      "Time spent in reconcile/handler in seconds.",
			Buckets:   []float64{0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10},
		},
		[]string{"result"},
	)

	actionTotal := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "action_total",
			Help:      "Total number of reload actions by workload kind and result.",
		},
		[]string{"workload_kind", "result"},
	)

	actionLatency := prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Namespace: "reloader",
			Name:      "action_latency_seconds",
			Help:      "Time from event received to action applied in seconds.",
			Buckets:   []float64{0.01, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10, 30, 60},
		},
		[]string{"workload_kind"},
	)

	skippedTotal := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "skipped_total",
			Help:      "Total number of skipped operations by reason.",
		},
		[]string{"reason"},
	)

	queueDepth := prometheus.NewGauge(
		prometheus.GaugeOpts{
			Namespace: "reloader",
			Name:      "workqueue_depth",
			Help:      "Current depth of the work queue.",
		},
	)

	queueAdds := prometheus.NewCounter(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "workqueue_adds_total",
			Help:      "Total number of items added to the work queue.",
		},
	)

	queueLatency := prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Namespace: "reloader",
			Name:      "workqueue_latency_seconds",
			Help:      "Time spent in the work queue in seconds.",
			Buckets:   []float64{0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5},
		},
		[]string{},
	)

	errorsTotal := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "errors_total",
			Help:      "Total number of errors by type.",
		},
		[]string{"type"},
	)

	retriesTotal := prometheus.NewCounter(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "retries_total",
			Help:      "Total number of retry attempts.",
		},
	)

	eventsReceived := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "events_received_total",
			Help:      "Total number of events received by type and resource.",
		},
		[]string{"event_type", "resource_type"},
	)

	eventsProcessed := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "events_processed_total",
			Help:      "Total number of events processed by type, resource, and result.",
		},
		[]string{"event_type", "resource_type", "result"},
	)

	workloadsScanned := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "workloads_scanned_total",
			Help:      "Total number of workloads scanned by kind.",
		},
		[]string{"kind"},
	)

	workloadsMatched := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Namespace: "reloader",
			Name:      "workloads_matched_total",
			Help:      "Total number of workloads matched for reload by kind.",
		},
		[]string{"kind"},
	)

	return Collectors{
		Reloaded:            reloaded,
		ReloadedByNamespace: reloadedByNamespace,
		countByNamespace:    os.Getenv("METRICS_COUNT_BY_NAMESPACE") == "enabled",

		ReconcileTotal:    reconcileTotal,
		ReconcileDuration: reconcileDuration,
		ActionTotal:       actionTotal,
		ActionLatency:     actionLatency,
		SkippedTotal:      skippedTotal,
		QueueDepth:        queueDepth,
		QueueAdds:         queueAdds,
		QueueLatency:      queueLatency,
		ErrorsTotal:       errorsTotal,
		RetriesTotal:      retriesTotal,
		EventsReceived:    eventsReceived,
		EventsProcessed:   eventsProcessed,
		WorkloadsScanned:  workloadsScanned,
		WorkloadsMatched:  workloadsMatched,
	}
}

func SetupPrometheusEndpoint() Collectors {
	collectors := NewCollectors()

	prometheus.MustRegister(collectors.Reloaded)
	prometheus.MustRegister(collectors.ReconcileTotal)
	prometheus.MustRegister(collectors.ReconcileDuration)
	prometheus.MustRegister(collectors.ActionTotal)
	prometheus.MustRegister(collectors.ActionLatency)
	prometheus.MustRegister(collectors.SkippedTotal)
	prometheus.MustRegister(collectors.QueueDepth)
	prometheus.MustRegister(collectors.QueueAdds)
	prometheus.MustRegister(collectors.QueueLatency)
	prometheus.MustRegister(collectors.ErrorsTotal)
	prometheus.MustRegister(collectors.RetriesTotal)
	prometheus.MustRegister(collectors.EventsReceived)
	prometheus.MustRegister(collectors.EventsProcessed)
	prometheus.MustRegister(collectors.WorkloadsScanned)
	prometheus.MustRegister(collectors.WorkloadsMatched)

	if os.Getenv("METRICS_COUNT_BY_NAMESPACE") == "enabled" {
		prometheus.MustRegister(collectors.ReloadedByNamespace)
	}

	http.Handle("/metrics", promhttp.Handler())

	return collectors
}
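
// Minimal wiring sketch (an assumption, not part of this change): the handler
// is registered on http.DefaultServeMux, so a caller only needs to start a
// server on its metrics address (e.g. :9090) and keep the returned collectors.
//
//	collectors := metrics.SetupPrometheusEndpoint()
//	go func() {
//		if err := http.ListenAndServe(":9090", nil); err != nil {
//			logrus.Fatal(err)
//		}
//	}()
//	collectors.RecordReload(true, "default")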

@@ -2,6 +2,15 @@ package options

import "github.com/stakater/Reloader/internal/pkg/constants"

type ArgoRolloutStrategy int

const (
	// RestartStrategy is the annotation value for restart strategy for rollouts
	RestartStrategy ArgoRolloutStrategy = iota
	// RolloutStrategy is the annotation value for rollout strategy for rollouts
	RolloutStrategy
)

var (
	// AutoReloadAll auto reloads all resources when their corresponding configmaps/secrets are updated
	AutoReloadAll = false
	// ConfigmapUpdateOnChangeAnnotation is an annotation to detect changes in
	// configmaps specified by name
	ConfigmapUpdateOnChangeAnnotation = "configmap.reloader.stakater.com/reload"
	// SecretUpdateOnChangeAnnotation is an annotation to detect changes in
	// secrets specified by name
	SecretUpdateOnChangeAnnotation = "secret.reloader.stakater.com/reload"
	// SecretProviderClassUpdateOnChangeAnnotation is an annotation to detect changes in
	// secretproviderclasses specified by name
	SecretProviderClassUpdateOnChangeAnnotation = "secretproviderclass.reloader.stakater.com/reload"
	// ReloaderAutoAnnotation is an annotation to detect changes in secrets/configmaps
	ReloaderAutoAnnotation = "reloader.stakater.com/auto"
	// IgnoreResourceAnnotation is an annotation to ignore changes in secrets/configmaps
	IgnoreResourceAnnotation = "reloader.stakater.com/ignore"
	// ConfigmapReloaderAutoAnnotation is an annotation to detect changes in configmaps
	ConfigmapReloaderAutoAnnotation = "configmap.reloader.stakater.com/auto"
	// SecretReloaderAutoAnnotation is an annotation to detect changes in secrets
	SecretReloaderAutoAnnotation = "secret.reloader.stakater.com/auto"
	// SecretProviderClassReloaderAutoAnnotation is an annotation to detect changes in secretproviderclasses
	SecretProviderClassReloaderAutoAnnotation = "secretproviderclass.reloader.stakater.com/auto"
	// ConfigmapExcludeReloaderAnnotation is a comma separated list of configmaps that excludes detecting changes on cms
	ConfigmapExcludeReloaderAnnotation = "configmaps.exclude.reloader.stakater.com/reload"
	// SecretExcludeReloaderAnnotation is a comma separated list of secrets that excludes detecting changes on secrets
	SecretExcludeReloaderAnnotation = "secrets.exclude.reloader.stakater.com/reload"
	// SecretProviderClassExcludeReloaderAnnotation is a comma separated list of secret provider classes that excludes detecting changes on secret provider classes
	SecretProviderClassExcludeReloaderAnnotation = "secretproviderclasses.exclude.reloader.stakater.com/reload"
	// AutoSearchAnnotation is an annotation to detect changes in
	// configmaps or triggers with the SearchMatchAnnotation
	AutoSearchAnnotation = "reloader.stakater.com/search"
	// SearchMatchAnnotation is an annotation to tag secrets to be found with
	// AutoSearchAnnotation
	SearchMatchAnnotation = "reloader.stakater.com/match"
	// RolloutStrategyAnnotation is an annotation to define the rollout update strategy
	RolloutStrategyAnnotation = "reloader.stakater.com/rollout-strategy"
	// PauseDeploymentAnnotation is an annotation to define the time period to pause a deployment after
	// a configmap/secret change has been detected. Valid values are described here: https://pkg.go.dev/time#ParseDuration
	// Only positive values are allowed.
	PauseDeploymentAnnotation = "deployment.reloader.stakater.com/pause-period"
	// PauseDeploymentTimeAnnotation is set by Reloader to indicate when the deployment was paused
	PauseDeploymentTimeAnnotation = "deployment.reloader.stakater.com/paused-at"
	// LogFormat is the log format to use (json, or empty string for default)
	LogFormat = ""
	// LogLevel is the log level to use (trace, debug, info, warning, error, fatal and panic)
	LogLevel = ""
	// IsArgoRollouts adds support for Argo Rollouts
	IsArgoRollouts = "false"
	// ReloadStrategy specifies the update strategy
	ReloadStrategy = constants.EnvVarsReloadStrategy
	// ReloadOnCreate adds support to watch create events
	ReloadOnCreate = "false"
	// ReloadOnDelete adds support to watch delete events
	ReloadOnDelete = "false"
	// SyncAfterRestart syncs add events after Reloader restarts
	SyncAfterRestart = false
	// EnableHA adds support for running multiple replicas via leadership election
	EnableHA = false
	// WebhookUrl is a URL to send a request to instead of triggering a reload
	WebhookUrl = ""
	// EnableCSIIntegration adds support to watch SecretProviderClassPodStatus and restart workloads based on it
	EnableCSIIntegration = false
	// ResourcesToIgnore is a list of resources to ignore when watching for changes
	ResourcesToIgnore = []string{}
	// WorkloadTypesToIgnore is a list of workload types to ignore when watching for changes
	WorkloadTypesToIgnore = []string{}
	// NamespacesToIgnore is a list of namespace names to ignore when watching for changes
	NamespacesToIgnore = []string{}
	// NamespaceSelectors is a list of namespace selectors to watch for changes
	NamespaceSelectors = []string{}
	// ResourceSelectors is a list of resource selectors to watch for changes
	ResourceSelectors = []string{}
	// EnablePProf enables pprof for profiling
	EnablePProf = false
	// PProfAddr is the address to start the pprof server on
	// Default is :6060
	PProfAddr = ":6060"
)
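
// Illustrative manifest using the annotations defined above (values are
// examples only, not taken from this change):
//
//	metadata:
//	  annotations:
//	    reloader.stakater.com/auto: "true"
//	    deployment.reloader.stakater.com/pause-period: "5m"
//
// Here a detected change pauses the Deployment for five minutes (any positive
// time.ParseDuration value is accepted) before the rollout resumes.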

func ToArgoRolloutStrategy(s string) ArgoRolloutStrategy {
	switch s {
	case "restart":
		return RestartStrategy
	case "rollout":
		fallthrough
	default:
		return RolloutStrategy
	}
}
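
// For example, ToArgoRolloutStrategy("restart") yields RestartStrategy, while
// "rollout", an empty string, or any unknown value falls back to
// RolloutStrategy.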

@@ -10,6 +10,8 @@ import (
	"strings"
	"time"

	argorolloutv1alpha1 "github.com/argoproj/argo-rollouts/pkg/apis/rollouts/v1alpha1"
	argorollout "github.com/argoproj/argo-rollouts/pkg/client/clientset/versioned"
	openshiftv1 "github.com/openshift/api/apps/v1"
	appsclient "github.com/openshift/client-go/apps/clientset/versioned"
	"github.com/sirupsen/logrus"
@@ -19,13 +21,18 @@ import (
	"github.com/stakater/Reloader/internal/pkg/metrics"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/internal/pkg/util"
	"github.com/stakater/Reloader/pkg/common"
	"github.com/stakater/Reloader/pkg/kube"
	appsv1 "k8s.io/api/apps/v1"
	batchv1 "k8s.io/api/batch/v1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	core_v1 "k8s.io/client-go/kubernetes/typed/core/v1"
	csiv1 "sigs.k8s.io/secrets-store-csi-driver/apis/v1"
	csiclient "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned"
	csiclient_v1 "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned/typed/apis/v1"
)

var (
	// ConfigmapResourceType is a resource type which controller watches for changes
	ConfigmapResourceType = "configMaps"
	// SecretResourceType is a resource type which controller watches for changes
	SecretResourceType = "secrets"
	// SecretProviderClassPodStatusResourceType is a resource type which controller watches for changes
	SecretProviderClassPodStatusResourceType = "secretproviderclasspodstatuses"
)

var (
@@ -69,24 +78,41 @@ func DeleteNamespace(namespace string, client kubernetes.Interface) {
	}
}

func getObjectMeta(namespace string, name string, autoReload bool, secretAutoReload bool, configmapAutoReload bool, secretproviderclass bool, extraAnnotations map[string]string) metav1.ObjectMeta {
	return metav1.ObjectMeta{
		Name:        name,
		Namespace:   namespace,
		Labels:      map[string]string{"firstLabel": "temp"},
		Annotations: getAnnotations(name, autoReload, secretAutoReload, configmapAutoReload, secretproviderclass, extraAnnotations),
	}
}

func getAnnotations(name string, autoReload bool, secretAutoReload bool, configmapAutoReload bool, secretproviderclass bool, extraAnnotations map[string]string) map[string]string {
	annotations := make(map[string]string)
	if autoReload {
		annotations[options.ReloaderAutoAnnotation] = "true"
	}
	if secretAutoReload {
		annotations[options.SecretReloaderAutoAnnotation] = "true"
	}
	if configmapAutoReload {
		annotations[options.ConfigmapReloaderAutoAnnotation] = "true"
	}
	if secretproviderclass {
		annotations[options.SecretProviderClassReloaderAutoAnnotation] = "true"
	}

	if len(annotations) == 0 {
		annotations = map[string]string{
			options.ConfigmapUpdateOnChangeAnnotation:           name,
			options.SecretUpdateOnChangeAnnotation:              name,
			options.SecretProviderClassUpdateOnChangeAnnotation: name,
		}
	}
	for k, v := range extraAnnotations {
		annotations[k] = v
	}
	return annotations
}
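
// Example of the fixture output (hypothetical call): getAnnotations("app",
// false, true, false, false, nil) returns only
//
//	{"secret.reloader.stakater.com/auto": "true"}
//
// whereas all-false flags fall back to the three named reload annotations, all
// keyed to "app".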

func getEnvVarSources(name string) []v1.EnvFromSource {
@@ -160,10 +186,19 @@ func getVolumes(name string) []v1.Volume {
				},
			},
		},
		{
			Name: "secretproviderclass",
			VolumeSource: v1.VolumeSource{
				CSI: &v1.CSIVolumeSource{
					Driver:           "secrets-store.csi.k8s.io",
					VolumeAttributes: map[string]string{"secretProviderClass": name},
				},
			},
		},
	}
}

func getVolumeMounts() []v1.VolumeMount {
	return []v1.VolumeMount{
		{
			MountPath: "etc/config",
			Name:      "configmap",
		},
		{
			MountPath: "etc/sec",
			Name:      "secret",
		},
		{
			MountPath: "etc/spc",
			Name:      "secretproviderclass",
		},
		{
			MountPath: "etc/projectedconfig",
			Name:      "projectedconfigmap",

@@ -261,7 +300,7 @@ func getPodTemplateSpecWithVolumes(name string) v1.PodTemplateSpec {
						Value: "test",
					},
				},
				VolumeMounts: getVolumeMounts(),
			},
		},
		Volumes: getVolumes(name),
@@ -279,7 +318,7 @@ func getPodTemplateSpecWithInitContainer(name string) v1.PodTemplateSpec {
			{
				Image:        "busybox",
				Name:         "busyBox",
				VolumeMounts: getVolumeMounts(),
			},
		},
		Containers: []v1.Container{

@@ -332,7 +371,7 @@ func getPodTemplateSpecWithInitContainerAndEnv(name string) v1.PodTemplateSpec {

func GetDeployment(namespace string, deploymentName string) *appsv1.Deployment {
	replicaset := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: getObjectMeta(namespace, deploymentName, false, false, false, false, map[string]string{}),
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -351,7 +390,7 @@ func GetDeploymentConfig(namespace string, deploymentConfigName string) *openshiftv1.DeploymentConfig {
	replicaset := int32(1)
	podTemplateSpecWithVolume := getPodTemplateSpecWithVolumes(deploymentConfigName)
	return &openshiftv1.DeploymentConfig{
		ObjectMeta: getObjectMeta(namespace, deploymentConfigName, false, false, false, false, map[string]string{}),
		Spec: openshiftv1.DeploymentConfigSpec{
			Replicas: replicaset,
			Strategy: openshiftv1.DeploymentStrategy{

@@ -366,7 +405,7 @@ func GetDeploymentWithInitContainer(namespace string, deploymentName string) *appsv1.Deployment {
	replicaset := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: getObjectMeta(namespace, deploymentName, false, false, false, false, map[string]string{}),
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -384,7 +423,7 @@ func GetDeploymentWithInitContainerAndEnv(namespace string, deploymentName string) *appsv1.Deployment {
	replicaset := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: getObjectMeta(namespace, deploymentName, true, false, false, false, map[string]string{}),
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -401,7 +440,7 @@ func GetDeploymentWithEnvVars(namespace string, deploymentName string) *appsv1.Deployment {
	replicaset := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: getObjectMeta(namespace, deploymentName, true, false, false, false, map[string]string{}),
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -419,7 +458,7 @@ func GetDeploymentConfigWithEnvVars(namespace string, deploymentConfigName string) *openshiftv1.DeploymentConfig {
	replicaset := int32(1)
	podTemplateSpecWithEnvVars := getPodTemplateSpecWithEnvVars(deploymentConfigName)
	return &openshiftv1.DeploymentConfig{
		ObjectMeta: getObjectMeta(namespace, deploymentConfigName, false, false, false, false, map[string]string{}),
		Spec: openshiftv1.DeploymentConfigSpec{
			Replicas: replicaset,
			Strategy: openshiftv1.DeploymentStrategy{

@@ -433,7 +472,7 @@ func GetDeploymentWithEnvVarSources(namespace string, deploymentName string) *appsv1.Deployment {
	replicaset := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: getObjectMeta(namespace, deploymentName, true, false, false, false, map[string]string{}),
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -450,7 +489,7 @@ func GetDeploymentWithPodAnnotations(namespace string, deploymentName string, both bool) *appsv1.Deployment {
	replicaset := int32(1)
	deployment := &appsv1.Deployment{
		ObjectMeta: getObjectMeta(namespace, deploymentName, false, false, false, false, map[string]string{}),
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -463,16 +502,77 @@ func GetDeploymentWithPodAnnotations(namespace string, deploymentName string, both bool) *appsv1.Deployment {
		},
	}
	if !both {
		deployment.Annotations = nil
	}
	deployment.Spec.Template.Annotations = getAnnotations(deploymentName, true, false, false, false, map[string]string{})
	return deployment
}

func GetDeploymentWithTypedAutoAnnotation(namespace string, deploymentName string, resourceType string) *appsv1.Deployment {
	replicaset := int32(1)
	var objectMeta metav1.ObjectMeta
	switch resourceType {
	case SecretResourceType:
		objectMeta = getObjectMeta(namespace, deploymentName, false, true, false, false, map[string]string{})
	case ConfigmapResourceType:
		objectMeta = getObjectMeta(namespace, deploymentName, false, false, true, false, map[string]string{})
	case SecretProviderClassPodStatusResourceType:
		objectMeta = getObjectMeta(namespace, deploymentName, false, false, false, true, map[string]string{})
	}

	return &appsv1.Deployment{
		ObjectMeta: objectMeta,
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
			},
			Replicas: &replicaset,
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
			},
			Template: getPodTemplateSpecWithVolumes(deploymentName),
		},
	}
}

func GetDeploymentWithExcludeAnnotation(namespace string, deploymentName string, resourceType string) *appsv1.Deployment {
	replicaset := int32(1)

	annotation := map[string]string{}

	switch resourceType {
	case SecretResourceType:
		annotation[options.SecretExcludeReloaderAnnotation] = deploymentName
	case ConfigmapResourceType:
		annotation[options.ConfigmapExcludeReloaderAnnotation] = deploymentName
	case SecretProviderClassPodStatusResourceType:
		annotation[options.SecretProviderClassExcludeReloaderAnnotation] = deploymentName
	}

	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:        deploymentName,
			Namespace:   namespace,
			Labels:      map[string]string{"firstLabel": "temp"},
			Annotations: annotation,
		},
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
			},
			Replicas: &replicaset,
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
			},
			Template: getPodTemplateSpecWithVolumes(deploymentName),
		},
	}
}

// GetDaemonSet provides daemonset for testing
func GetDaemonSet(namespace string, daemonsetName string) *appsv1.DaemonSet {
	return &appsv1.DaemonSet{
		ObjectMeta: getObjectMeta(namespace, daemonsetName, false, false, false, false, map[string]string{}),
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -487,7 +587,7 @@ func GetDaemonSet(namespace string, daemonsetName string) *appsv1.DaemonSet {

func GetDaemonSetWithEnvVars(namespace string, daemonSetName string) *appsv1.DaemonSet {
	return &appsv1.DaemonSet{
		ObjectMeta: getObjectMeta(namespace, daemonSetName, true, false, false, false, map[string]string{}),
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -503,7 +603,7 @@ func GetDaemonSetWithEnvVars(namespace string, daemonSetName string) *appsv1.DaemonSet {

// GetStatefulSet provides statefulset for testing
func GetStatefulSet(namespace string, statefulsetName string) *appsv1.StatefulSet {
	return &appsv1.StatefulSet{
		ObjectMeta: getObjectMeta(namespace, statefulsetName, false, false, false, false, map[string]string{}),
		Spec: appsv1.StatefulSetSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
@@ -519,7 +619,7 @@ func GetStatefulSet(namespace string, statefulsetName string) *appsv1.StatefulSet {

// GetStatefulSetWithEnvVar provides statefulset for testing
func GetStatefulSetWithEnvVar(namespace string, statefulsetName string) *appsv1.StatefulSet {
	return &appsv1.StatefulSet{
		ObjectMeta: getObjectMeta(namespace, statefulsetName, true, false, false, false, map[string]string{}),
		Spec: appsv1.StatefulSetSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},

@@ -544,6 +644,42 @@ func GetConfigmap(namespace string, configmapName string, testData string) *v1.ConfigMap {
	}
}

func GetSecretProviderClass(namespace string, secretProviderClassName string, data string) *csiv1.SecretProviderClass {
	return &csiv1.SecretProviderClass{
		ObjectMeta: metav1.ObjectMeta{
			Name:      secretProviderClassName,
			Namespace: namespace,
		},
		Spec: csiv1.SecretProviderClassSpec{
			Provider: "Test",
			Parameters: map[string]string{
				"parameter1": data,
			},
		},
	}
}

func GetSecretProviderClassPodStatus(namespace string, secretProviderClassPodStatusName string, data string) *csiv1.SecretProviderClassPodStatus {
	return &csiv1.SecretProviderClassPodStatus{
		ObjectMeta: metav1.ObjectMeta{
			Name:      secretProviderClassPodStatusName,
			Namespace: namespace,
		},
		Status: csiv1.SecretProviderClassPodStatusStatus{
			PodName:                 "test123",
			SecretProviderClassName: secretProviderClassPodStatusName,
			TargetPath:              "/var/lib/kubelet/d8771ddf-935a-4199-a20b-f35f71c1d9e7/volumes/kubernetes.io~csi/secrets-store-inline/mount",
			Mounted:                 true,
			Objects: []csiv1.SecretProviderClassObject{
				{
					ID:      "parameter1",
					Version: data,
				},
			},
		},
	}
}

// GetConfigmapWithUpdatedLabel provides configmap for testing
func GetConfigmapWithUpdatedLabel(namespace string, configmapName string, testLabel string, testData string) *v1.ConfigMap {
	return &v1.ConfigMap{
@@ -568,6 +704,64 @@ func GetSecret(namespace string, secretName string, data string) *v1.Secret {
	}
}

func GetCronJob(namespace string, cronJobName string) *batchv1.CronJob {
	return &batchv1.CronJob{
		ObjectMeta: getObjectMeta(namespace, cronJobName, false, false, false, false, map[string]string{}),
		Spec: batchv1.CronJobSpec{
			Schedule: "*/5 * * * *", // Run every 5 minutes
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Selector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"secondLabel": "temp"},
					},
					Template: getPodTemplateSpecWithVolumes(cronJobName),
				},
			},
		},
	}
}

func GetJob(namespace string, jobName string) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: getObjectMeta(namespace, jobName, false, false, false, false, map[string]string{}),
		Spec: batchv1.JobSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"secondLabel": "temp"},
			},
			Template: getPodTemplateSpecWithVolumes(jobName),
		},
	}
}
|
||||
|
||||
func GetCronJobWithEnvVar(namespace string, cronJobName string) *batchv1.CronJob {
|
||||
return &batchv1.CronJob{
|
||||
ObjectMeta: getObjectMeta(namespace, cronJobName, true, false, false, false, map[string]string{}),
|
||||
Spec: batchv1.CronJobSpec{
|
||||
Schedule: "*/5 * * * *", // Run every 5 minutes
|
||||
JobTemplate: batchv1.JobTemplateSpec{
|
||||
Spec: batchv1.JobSpec{
|
||||
Selector: &metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{"secondLabel": "temp"},
|
||||
},
|
||||
Template: getPodTemplateSpecWithEnvVars(cronJobName),
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func GetJobWithEnvVar(namespace string, jobName string) *batchv1.Job {
|
||||
return &batchv1.Job{
|
||||
ObjectMeta: getObjectMeta(namespace, jobName, true, false, false, false, map[string]string{}),
|
||||
Spec: batchv1.JobSpec{
|
||||
Selector: &metav1.LabelSelector{
|
||||
MatchLabels: map[string]string{"secondLabel": "temp"},
|
||||
},
|
||||
Template: getPodTemplateSpecWithEnvVars(jobName),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// GetSecretWithUpdatedLabel provides secret for testing
|
||||
func GetSecretWithUpdatedLabel(namespace string, secretName string, label string, data string) *v1.Secret {
|
||||
return &v1.Secret{
|
||||
@@ -605,7 +799,7 @@ func GetResourceSHAFromAnnotation(podAnnotations map[string]string) string {
|
||||
return ""
|
||||
}
|
||||
|
||||
var last util.ReloadSource
|
||||
var last common.ReloadSource
|
||||
bytes := []byte(annotationJson)
|
||||
err := json.Unmarshal(bytes, &last)
|
||||
if err != nil {
|
||||
@@ -615,19 +809,26 @@ func GetResourceSHAFromAnnotation(podAnnotations map[string]string) string {
|
||||
return last.Hash
|
||||
}
|
||||
|
||||
// ConvertResourceToSHA generates SHA from secret or configmap data
|
||||
// ConvertResourceToSHA generates SHA from secret, configmap or secretproviderclasspodstatus data
|
||||
func ConvertResourceToSHA(resourceType string, namespace string, resourceName string, data string) string {
|
||||
values := []string{}
|
||||
if resourceType == SecretResourceType {
|
||||
switch resourceType {
|
||||
case SecretResourceType:
|
||||
secret := GetSecret(namespace, resourceName, data)
|
||||
for k, v := range secret.Data {
|
||||
values = append(values, k+"="+string(v[:]))
|
||||
}
|
||||
} else if resourceType == ConfigmapResourceType {
|
||||
case ConfigmapResourceType:
|
||||
configmap := GetConfigmap(namespace, resourceName, data)
|
||||
for k, v := range configmap.Data {
|
||||
values = append(values, k+"="+v)
|
||||
}
|
||||
case SecretProviderClassPodStatusResourceType:
|
||||
secretproviderclasspodstatus := GetSecretProviderClassPodStatus(namespace, resourceName, data)
|
||||
for _, v := range secretproviderclasspodstatus.Status.Objects {
|
||||
values = append(values, v.ID+"="+v.Version)
|
||||
}
|
||||
values = append(values, "SecretProviderClassName="+secretproviderclasspodstatus.Status.SecretProviderClassName)
|
||||
}
|
||||
sort.Strings(values)
|
||||
return crypto.GenerateSHA(strings.Join(values, ";"))
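
Note: to make the hashing contract concrete, this sketch reproduces the canonical string that ConvertResourceToSHA feeds into crypto.GenerateSHA — sorted "key=value" pairs joined with ";". The helper name and package are illustrative.

package example

import (
    "sort"
    "strings"
)

// joinForSHA builds the canonical pre-hash form: sorted "key=value"
// pairs joined with ";". Two resources with identical data always
// produce the same string, hence the same SHA.
func joinForSHA(kv map[string]string) string {
    values := []string{}
    for k, v := range kv {
        values = append(values, k+"="+v)
    }
    sort.Strings(values)
    return strings.Join(values, ";")
}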

@@ -642,6 +843,25 @@ func CreateConfigMap(client kubernetes.Interface, namespace string, configmapNam
    return configmapClient, err
}

// CreateSecretProviderClass creates a SecretProviderClass in given namespace and returns the SecretProviderClassInterface
func CreateSecretProviderClass(client csiclient.Interface, namespace string, secretProviderClassName string, data string) (csiclient_v1.SecretProviderClassInterface, error) {
    logrus.Infof("Creating SecretProviderClass")
    secretProviderClassClient := client.SecretsstoreV1().SecretProviderClasses(namespace)
    _, err := secretProviderClassClient.Create(context.TODO(), GetSecretProviderClass(namespace, secretProviderClassName, data), metav1.CreateOptions{})
    time.Sleep(3 * time.Second)
    return secretProviderClassClient, err
}

// CreateSecretProviderClassPodStatus creates a SecretProviderClassPodStatus in given namespace and returns the SecretProviderClassPodStatusInterface
func CreateSecretProviderClassPodStatus(client csiclient.Interface, namespace string, secretProviderClassPodStatusName string, data string) (csiclient_v1.SecretProviderClassPodStatusInterface, error) {
    logrus.Infof("Creating SecretProviderClassPodStatus")
    secretProviderClassPodStatusClient := client.SecretsstoreV1().SecretProviderClassPodStatuses(namespace)
    secretProviderClassPodStatus := GetSecretProviderClassPodStatus(namespace, secretProviderClassPodStatusName, data)
    _, err := secretProviderClassPodStatusClient.Create(context.TODO(), secretProviderClassPodStatus, metav1.CreateOptions{})
    time.Sleep(3 * time.Second)
    return secretProviderClassPodStatusClient, err
}

// CreateSecret creates a secret in given namespace and returns the SecretInterface
func CreateSecret(client kubernetes.Interface, namespace string, secretName string, data string) (core_v1.SecretInterface, error) {
    logrus.Infof("Creating secret")

@@ -666,6 +886,26 @@ func CreateDeployment(client kubernetes.Interface, deploymentName string, namesp
    return deployment, err
}

// CreateDeploymentWithAnnotations creates a deployment with additional annotations in given namespace and returns the Deployment
func CreateDeploymentWithAnnotations(client kubernetes.Interface, deploymentName string, namespace string, additionalAnnotations map[string]string, volumeMount bool) (*appsv1.Deployment, error) {
    logrus.Infof("Creating Deployment")
    deploymentClient := client.AppsV1().Deployments(namespace)
    var deploymentObj *appsv1.Deployment
    if volumeMount {
        deploymentObj = GetDeployment(namespace, deploymentName)
    } else {
        deploymentObj = GetDeploymentWithEnvVars(namespace, deploymentName)
    }

    for annotationKey, annotationValue := range additionalAnnotations {
        deploymentObj.Annotations[annotationKey] = annotationValue
    }

    deployment, err := deploymentClient.Create(context.TODO(), deploymentObj, metav1.CreateOptions{})
    time.Sleep(3 * time.Second)
    return deployment, err
}

// CreateDeploymentConfig creates a deploymentConfig in given namespace and returns the DeploymentConfig
func CreateDeploymentConfig(client appsclient.Interface, deploymentName string, namespace string, volumeMount bool) (*openshiftv1.DeploymentConfig, error) {
    logrus.Infof("Creating DeploymentConfig")

@@ -729,6 +969,25 @@ func CreateDeploymentWithEnvVarSourceAndAnnotations(client kubernetes.Interface,
    return deployment, err
}

// CreateDeploymentWithTypedAutoAnnotation creates a deployment in given namespace and returns the Deployment with typed auto annotation
func CreateDeploymentWithTypedAutoAnnotation(client kubernetes.Interface, deploymentName string, namespace string, resourceType string) (*appsv1.Deployment, error) {
    logrus.Infof("Creating Deployment")
    deploymentClient := client.AppsV1().Deployments(namespace)
    deploymentObj := GetDeploymentWithTypedAutoAnnotation(namespace, deploymentName, resourceType)
    deployment, err := deploymentClient.Create(context.TODO(), deploymentObj, metav1.CreateOptions{})
    time.Sleep(3 * time.Second)
    return deployment, err
}

// CreateDeploymentWithExcludeAnnotation creates a deployment in given namespace and returns the Deployment with exclude annotation
func CreateDeploymentWithExcludeAnnotation(client kubernetes.Interface, deploymentName string, namespace string, resourceType string) (*appsv1.Deployment, error) {
    logrus.Infof("Creating Deployment")
    deploymentClient := client.AppsV1().Deployments(namespace)
    deploymentObj := GetDeploymentWithExcludeAnnotation(namespace, deploymentName, resourceType)
    deployment, err := deploymentClient.Create(context.TODO(), deploymentObj, metav1.CreateOptions{})
    return deployment, err
}

// CreateDaemonSet creates a daemonset in given namespace and returns the DaemonSet
func CreateDaemonSet(client kubernetes.Interface, daemonsetName string, namespace string, volumeMount bool) (*appsv1.DaemonSet, error) {
    logrus.Infof("Creating DaemonSet")

@@ -759,6 +1018,36 @@ func CreateStatefulSet(client kubernetes.Interface, statefulsetName string, name
    return statefulset, err
}

// CreateCronJob creates a cronjob in given namespace and returns the CronJob
func CreateCronJob(client kubernetes.Interface, cronJobName string, namespace string, volumeMount bool) (*batchv1.CronJob, error) {
    logrus.Infof("Creating CronJob")
    cronJobClient := client.BatchV1().CronJobs(namespace)
    var cronJobObj *batchv1.CronJob
    if volumeMount {
        cronJobObj = GetCronJob(namespace, cronJobName)
    } else {
        cronJobObj = GetCronJobWithEnvVar(namespace, cronJobName)
    }
    cronJob, err := cronJobClient.Create(context.TODO(), cronJobObj, metav1.CreateOptions{})
    time.Sleep(3 * time.Second)
    return cronJob, err
}

// CreateJob creates a job in given namespace and returns the Job
func CreateJob(client kubernetes.Interface, jobName string, namespace string, volumeMount bool) (*batchv1.Job, error) {
    logrus.Infof("Creating Job")
    jobClient := client.BatchV1().Jobs(namespace)
    var jobObj *batchv1.Job
    if volumeMount {
        jobObj = GetJob(namespace, jobName)
    } else {
        jobObj = GetJobWithEnvVar(namespace, jobName)
    }
    job, err := jobClient.Create(context.TODO(), jobObj, metav1.CreateOptions{})
    time.Sleep(3 * time.Second)
    return job, err
}
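
Note: a minimal sketch of driving these helpers from a test with client-go's fake clientset, which satisfies kubernetes.Interface; the testutil import path is an assumption.

package example

import (
    "k8s.io/client-go/kubernetes/fake"

    testutil "github.com/stakater/Reloader/internal/pkg/testutil" // assumed path
)

func exampleCreateJob() error {
    // The fake clientset lets the helper run without a real cluster.
    client := fake.NewSimpleClientset()
    // volumeMount=true selects the volume-based fixture (GetJob).
    _, err := testutil.CreateJob(client, "test-job", "test-namespace", true)
    return err
}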

// DeleteDeployment deletes a deployment in given namespace and returns the error if any
func DeleteDeployment(client kubernetes.Interface, namespace string, deploymentName string) error {
    logrus.Infof("Deleting Deployment")

@@ -791,6 +1080,22 @@ func DeleteStatefulSet(client kubernetes.Interface, namespace string, statefulse
    return statefulsetError
}

// DeleteCronJob deletes a cronJob in given namespace and returns the error if any
func DeleteCronJob(client kubernetes.Interface, namespace string, cronJobName string) error {
    logrus.Infof("Deleting CronJob %s", cronJobName)
    cronJobError := client.BatchV1().CronJobs(namespace).Delete(context.TODO(), cronJobName, metav1.DeleteOptions{})
    time.Sleep(3 * time.Second)
    return cronJobError
}

// DeleteJob deletes a job in given namespace and returns the error if any
func DeleteJob(client kubernetes.Interface, namespace string, jobName string) error {
    logrus.Infof("Deleting Job %s", jobName)
    jobError := client.BatchV1().Jobs(namespace).Delete(context.TODO(), jobName, metav1.DeleteOptions{})
    time.Sleep(3 * time.Second)
    return jobError
}

// UpdateConfigMap updates a configmap in given namespace and returns the error if any
func UpdateConfigMap(configmapClient core_v1.ConfigMapInterface, namespace string, configmapName string, label string, data string) error {
    logrus.Infof("Updating configmap %q.\n", configmapName)

@@ -819,6 +1124,27 @@ func UpdateSecret(secretClient core_v1.SecretInterface, namespace string, secret
    return updateErr
}

// UpdateSecretProviderClassPodStatus updates a secretproviderclasspodstatus in given namespace and returns the error if any
func UpdateSecretProviderClassPodStatus(spcpsClient csiclient_v1.SecretProviderClassPodStatusInterface, namespace string, spcpsName string, label string, data string) error {
    logrus.Infof("Updating secretproviderclasspodstatus %q.\n", spcpsName)
    updatedStatus := GetSecretProviderClassPodStatus(namespace, spcpsName, data).Status
    secretproviderclasspodstatus, err := spcpsClient.Get(context.TODO(), spcpsName, metav1.GetOptions{})
    if err != nil {
        return err
    }
    secretproviderclasspodstatus.Status = updatedStatus
    if label != "" {
        labels := secretproviderclasspodstatus.Labels
        if labels == nil {
            labels = make(map[string]string)
        }
        labels["firstLabel"] = label
    }
    _, updateErr := spcpsClient.Update(context.TODO(), secretproviderclasspodstatus, metav1.UpdateOptions{})
    time.Sleep(3 * time.Second)
    return updateErr
}

// DeleteConfigMap deletes a configmap in given namespace and returns the error if any
func DeleteConfigMap(client kubernetes.Interface, namespace string, configmapName string) error {
    logrus.Infof("Deleting configmap %q.\n", configmapName)

@@ -835,6 +1161,22 @@ func DeleteSecret(client kubernetes.Interface, namespace string, secretName stri
    return err
}

// DeleteSecretProviderClass deletes a secretproviderclass in given namespace and returns the error if any
func DeleteSecretProviderClass(client csiclient.Interface, namespace string, secretProviderClassName string) error {
    logrus.Infof("Deleting secretproviderclass %q.\n", secretProviderClassName)
    err := client.SecretsstoreV1().SecretProviderClasses(namespace).Delete(context.TODO(), secretProviderClassName, metav1.DeleteOptions{})
    time.Sleep(3 * time.Second)
    return err
}

// DeleteSecretProviderClassPodStatus deletes a secretproviderclasspodstatus in given namespace and returns the error if any
func DeleteSecretProviderClassPodStatus(client csiclient.Interface, namespace string, secretProviderClassPodStatusName string) error {
    logrus.Infof("Deleting secretproviderclasspodstatus %q.\n", secretProviderClassPodStatusName)
    err := client.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).Delete(context.TODO(), secretProviderClassPodStatusName, metav1.DeleteOptions{})
    time.Sleep(3 * time.Second)
    return err
}

// RandSeq generates a random sequence
func RandSeq(n int) string {
    b := make([]rune, n)

@@ -845,7 +1187,7 @@ func RandSeq(n int) string {
}

// VerifyResourceEnvVarUpdate verifies whether the rolling upgrade happened or not
-func VerifyResourceEnvVarUpdate(clients kube.Clients, config util.Config, envVarPostfix string, upgradeFuncs callbacks.RollingUpgradeFuncs) bool {
+func VerifyResourceEnvVarUpdate(clients kube.Clients, config common.Config, envVarPostfix string, upgradeFuncs callbacks.RollingUpgradeFuncs) bool {
    items := upgradeFuncs.ItemsFunc(clients, config.Namespace)
    for _, i := range items {
        containers := upgradeFuncs.ContainersFunc(i)

@@ -858,9 +1200,11 @@ func VerifyResourceEnvVarUpdate(clients kube.Clients, config util.Config, envVar
        annotationValue := annotations[config.Annotation]
        searchAnnotationValue := annotations[options.AutoSearchAnnotation]
        reloaderEnabledValue := annotations[options.ReloaderAutoAnnotation]
+       typedAutoAnnotationEnabledValue := annotations[config.TypedAutoAnnotation]
        reloaderEnabled, err := strconv.ParseBool(reloaderEnabledValue)
+       typedAutoAnnotationEnabled, errTyped := strconv.ParseBool(typedAutoAnnotationEnabledValue)
        matches := false
-       if err == nil && reloaderEnabled {
+       if err == nil && reloaderEnabled || errTyped == nil && typedAutoAnnotationEnabled {
            matches = true
        } else if annotationValue != "" {
            values := strings.Split(annotationValue, ",")

@@ -888,8 +1232,57 @@ func VerifyResourceEnvVarUpdate(clients kube.Clients, config util.Config, envVar
    return false
}

// VerifyResourceEnvVarRemoved verifies whether the rolling upgrade happened or not and all env vars STAKATER_name_CONFIGMAP/SECRET are removed
func VerifyResourceEnvVarRemoved(clients kube.Clients, config common.Config, envVarPostfix string, upgradeFuncs callbacks.RollingUpgradeFuncs) bool {
    items := upgradeFuncs.ItemsFunc(clients, config.Namespace)
    for _, i := range items {
        containers := upgradeFuncs.ContainersFunc(i)
        accessor, err := meta.Accessor(i)
        if err != nil {
            return false
        }

        annotations := accessor.GetAnnotations()
        // match statefulsets with the correct annotation

        annotationValue := annotations[config.Annotation]
        searchAnnotationValue := annotations[options.AutoSearchAnnotation]
        reloaderEnabledValue := annotations[options.ReloaderAutoAnnotation]
        typedAutoAnnotationEnabledValue := annotations[config.TypedAutoAnnotation]
        reloaderEnabled, err := strconv.ParseBool(reloaderEnabledValue)
        typedAutoAnnotationEnabled, errTyped := strconv.ParseBool(typedAutoAnnotationEnabledValue)

        matches := false
        if err == nil && reloaderEnabled || errTyped == nil && typedAutoAnnotationEnabled {
            matches = true
        } else if annotationValue != "" {
            values := strings.Split(annotationValue, ",")
            for _, value := range values {
                value = strings.Trim(value, " ")
                if value == config.ResourceName {
                    matches = true
                    break
                }
            }
        } else if searchAnnotationValue == "true" {
            if config.ResourceAnnotations[options.SearchMatchAnnotation] == "true" {
                matches = true
            }
        }

        if matches {
            envName := constants.EnvVarPrefix + util.ConvertToEnvVarName(config.ResourceName) + "_" + envVarPostfix
            value := GetResourceSHAFromEnvVar(containers, envName)
            if value == "" {
                return true
            }
        }
    }
    return false
}
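
Note: the verifier above keys off a conventionally named environment variable. This sketch approximates that naming rule; the "STAKATER_" prefix and the exact character mapping done by util.ConvertToEnvVarName are assumptions for illustration.

package example

import "strings"

// reloaderEnvVarName approximates the rule used above: prefix + upper-cased
// resource name with non-alphanumerics mapped to "_" + "_" + postfix,
// e.g. "STAKATER_MY_CONFIG_CONFIGMAP" for resource "my-config".
func reloaderEnvVarName(resourceName, postfix string) string {
    name := strings.ToUpper(resourceName)
    name = strings.NewReplacer("-", "_", ".", "_").Replace(name)
    return "STAKATER_" + name + "_" + postfix
}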

// VerifyResourceAnnotationUpdate verifies whether the rolling upgrade happened or not
-func VerifyResourceAnnotationUpdate(clients kube.Clients, config util.Config, upgradeFuncs callbacks.RollingUpgradeFuncs) bool {
+func VerifyResourceAnnotationUpdate(clients kube.Clients, config common.Config, upgradeFuncs callbacks.RollingUpgradeFuncs) bool {
    items := upgradeFuncs.ItemsFunc(clients, config.Namespace)
    for _, i := range items {
        podAnnotations := upgradeFuncs.PodAnnotationsFunc(i)

@@ -902,9 +1295,11 @@ func VerifyResourceAnnotationUpdate(clients kube.Clients, config util.Config, up
        annotationValue := annotations[config.Annotation]
        searchAnnotationValue := annotations[options.AutoSearchAnnotation]
        reloaderEnabledValue := annotations[options.ReloaderAutoAnnotation]
+       typedAutoAnnotationEnabledValue := annotations[config.TypedAutoAnnotation]
        reloaderEnabled, _ := strconv.ParseBool(reloaderEnabledValue)
+       typedAutoAnnotationEnabled, _ := strconv.ParseBool(typedAutoAnnotationEnabledValue)
        matches := false
-       if reloaderEnabled || reloaderEnabledValue == "" && options.AutoReloadAll {
+       if reloaderEnabled || typedAutoAnnotationEnabled || reloaderEnabledValue == "" && typedAutoAnnotationEnabledValue == "" && options.AutoReloadAll {
            matches = true
        } else if annotationValue != "" {
            values := strings.Split(annotationValue, ",")

@@ -930,3 +1325,36 @@ func VerifyResourceAnnotationUpdate(clients kube.Clients, config util.Config, up
    }
    return false
}

func GetSHAfromEmptyData() string {
    // Use a special marker that represents "deleted" or "empty" state.
    // This ensures we have a distinct, deterministic hash for the delete strategy.
    // Note: We could use GenerateSHA("") which now returns a hash, but using a marker
    // makes the intent clearer and avoids potential confusion with actual empty data.
    return crypto.GenerateSHA("__RELOADER_EMPTY_DELETE_MARKER__")
}

// GetRollout provides rollout for testing
func GetRollout(namespace string, rolloutName string, annotations map[string]string) *argorolloutv1alpha1.Rollout {
    replicaset := int32(1)
    return &argorolloutv1alpha1.Rollout{
        ObjectMeta: getObjectMeta(namespace, rolloutName, false, false, false, false, annotations),
        Spec: argorolloutv1alpha1.RolloutSpec{
            Selector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"secondLabel": "temp"},
            },
            Replicas: &replicaset,
            Template: getPodTemplateSpecWithVolumes(rolloutName),
        },
    }
}

// CreateRollout creates a rollout in given namespace and returns the Rollout
func CreateRollout(client argorollout.Interface, rolloutName string, namespace string, annotations map[string]string) (*argorolloutv1alpha1.Rollout, error) {
    logrus.Infof("Creating Rollout")
    rolloutClient := client.ArgoprojV1alpha1().Rollouts(namespace)
    rolloutObj := GetRollout(namespace, rolloutName, annotations)
    rollout, err := rolloutClient.Create(context.TODO(), rolloutObj, metav1.CreateOptions{})
    time.Sleep(3 * time.Second)
    return rollout, err
}

@@ -1,41 +0,0 @@
package util

import (
    "github.com/stakater/Reloader/internal/pkg/constants"
    "github.com/stakater/Reloader/internal/pkg/options"
    v1 "k8s.io/api/core/v1"
)

// Config contains rolling upgrade configuration parameters
type Config struct {
    Namespace           string
    ResourceName        string
    ResourceAnnotations map[string]string
    Annotation          string
    SHAValue            string
    Type                string
}

// GetConfigmapConfig provides utility config for configmap
func GetConfigmapConfig(configmap *v1.ConfigMap) Config {
    return Config{
        Namespace:           configmap.Namespace,
        ResourceName:        configmap.Name,
        ResourceAnnotations: configmap.Annotations,
        Annotation:          options.ConfigmapUpdateOnChangeAnnotation,
        SHAValue:            GetSHAfromConfigmap(configmap),
        Type:                constants.ConfigmapEnvVarPostfix,
    }
}

// GetSecretConfig provides utility config for secret
func GetSecretConfig(secret *v1.Secret) Config {
    return Config{
        Namespace:           secret.Namespace,
        ResourceName:        secret.Name,
        ResourceAnnotations: secret.Annotations,
        Annotation:          options.SecretUpdateOnChangeAnnotation,
        SHAValue:            GetSHAfromSecret(secret.Data),
        Type:                constants.SecretEnvVarPostfix,
    }
}

@@ -3,11 +3,17 @@ package util
import (
    "bytes"
    "encoding/base64"
    "errors"
    "fmt"
    "sort"
    "strings"

    "github.com/spf13/cobra"
    "github.com/stakater/Reloader/internal/pkg/constants"
    "github.com/stakater/Reloader/internal/pkg/crypto"
    "github.com/stakater/Reloader/internal/pkg/options"
    v1 "k8s.io/api/core/v1"
    csiv1 "sigs.k8s.io/secrets-store-csi-driver/apis/v1"
)

// ConvertToEnvVarName converts the given text into a usable env var

@@ -52,9 +58,17 @@ func GetSHAfromSecret(data map[string][]byte) string {
    return crypto.GenerateSHA(strings.Join(values, ";"))
}

-type List []string
+func GetSHAfromSecretProviderClassPodStatus(data csiv1.SecretProviderClassPodStatusStatus) string {
+   values := []string{}
+   for _, v := range data.Objects {
+       values = append(values, v.ID+"="+v.Version)
+   }
+   values = append(values, "SecretProviderClassName="+data.SecretProviderClassName)
+   sort.Strings(values)
+   return crypto.GenerateSHA(strings.Join(values, ";"))
+}
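
Note: a small usage sketch of the new hash helper; the object ID and version are made up. The point is determinism — identical status objects always hash identically, which is how a changed mounted secret version is detected.

package example

import (
    csiv1 "sigs.k8s.io/secrets-store-csi-driver/apis/v1"

    "github.com/stakater/Reloader/internal/pkg/util"
)

func exampleSPCPSHash() string {
    status := csiv1.SecretProviderClassPodStatusStatus{
        SecretProviderClassName: "my-spc", // hypothetical name
        Objects: []csiv1.SecretProviderClassObject{
            {ID: "parameter1", Version: "v1"},
        },
    }
    // Same objects and class name -> same SHA, every time.
    return util.GetSHAfromSecretProviderClassPodStatus(status)
}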

+type Map map[string]string
+
+type List []string

func (l *List) Contains(s string) bool {
    for _, v := range *l {

@@ -64,3 +78,63 @@ func (l *List) Contains(s string) bool {
    }
    return false
}

func ConfigureReloaderFlags(cmd *cobra.Command) {
    cmd.PersistentFlags().BoolVar(&options.AutoReloadAll, "auto-reload-all", false, "Auto reload all resources")
    cmd.PersistentFlags().StringVar(&options.ConfigmapUpdateOnChangeAnnotation, "configmap-annotation", "configmap.reloader.stakater.com/reload", "annotation to detect changes in configmaps, specified by name")
    cmd.PersistentFlags().StringVar(&options.SecretUpdateOnChangeAnnotation, "secret-annotation", "secret.reloader.stakater.com/reload", "annotation to detect changes in secrets, specified by name")
    cmd.PersistentFlags().StringVar(&options.ReloaderAutoAnnotation, "auto-annotation", "reloader.stakater.com/auto", "annotation to detect changes in secrets/configmaps")
    cmd.PersistentFlags().StringVar(&options.ConfigmapReloaderAutoAnnotation, "configmap-auto-annotation", "configmap.reloader.stakater.com/auto", "annotation to detect changes in configmaps")
    cmd.PersistentFlags().StringVar(&options.SecretReloaderAutoAnnotation, "secret-auto-annotation", "secret.reloader.stakater.com/auto", "annotation to detect changes in secrets")
    cmd.PersistentFlags().StringVar(&options.AutoSearchAnnotation, "auto-search-annotation", "reloader.stakater.com/search", "annotation to detect changes in configmaps or secrets tagged with special match annotation")
    cmd.PersistentFlags().StringVar(&options.SearchMatchAnnotation, "search-match-annotation", "reloader.stakater.com/match", "annotation to mark secrets or configmaps to match the search")
    cmd.PersistentFlags().StringVar(&options.PauseDeploymentAnnotation, "pause-deployment-annotation", "deployment.reloader.stakater.com/pause-period", "annotation to define the time period to pause a deployment after a configmap/secret change has been detected")
    cmd.PersistentFlags().StringVar(&options.PauseDeploymentTimeAnnotation, "pause-deployment-time-annotation", "deployment.reloader.stakater.com/paused-at", "annotation to indicate when a deployment was paused by Reloader")
    cmd.PersistentFlags().StringVar(&options.LogFormat, "log-format", "", "Log format to use (empty string for text, or JSON)")
    cmd.PersistentFlags().StringVar(&options.LogLevel, "log-level", "info", "Log level to use (trace, debug, info, warning, error, fatal and panic)")
    cmd.PersistentFlags().StringVar(&options.WebhookUrl, "webhook-url", "", "webhook to trigger instead of performing a reload")
    cmd.PersistentFlags().StringSliceVar(&options.ResourcesToIgnore, "resources-to-ignore", options.ResourcesToIgnore, "list of resources to ignore (valid options 'configMaps' or 'secrets')")
    cmd.PersistentFlags().StringSliceVar(&options.WorkloadTypesToIgnore, "ignored-workload-types", options.WorkloadTypesToIgnore, "list of workload types to ignore (valid options: 'jobs', 'cronjobs', or both)")
    cmd.PersistentFlags().StringSliceVar(&options.NamespacesToIgnore, "namespaces-to-ignore", options.NamespacesToIgnore, "list of namespaces to ignore")
    cmd.PersistentFlags().StringSliceVar(&options.NamespaceSelectors, "namespace-selector", options.NamespaceSelectors, "list of key:value labels to filter on for namespaces")
    cmd.PersistentFlags().StringSliceVar(&options.ResourceSelectors, "resource-label-selector", options.ResourceSelectors, "list of key:value labels to filter on for configmaps and secrets")
    cmd.PersistentFlags().StringVar(&options.IsArgoRollouts, "is-Argo-Rollouts", "false", "Add support for argo rollouts")
    cmd.PersistentFlags().StringVar(&options.ReloadStrategy, constants.ReloadStrategyFlag, constants.EnvVarsReloadStrategy, "Specifies the desired reload strategy")
    cmd.PersistentFlags().StringVar(&options.ReloadOnCreate, "reload-on-create", "false", "Add support to watch create events")
    cmd.PersistentFlags().StringVar(&options.ReloadOnDelete, "reload-on-delete", "false", "Add support to watch delete events")
    cmd.PersistentFlags().BoolVar(&options.EnableHA, "enable-ha", false, "Adds support for running multiple replicas via leadership election")
    cmd.PersistentFlags().BoolVar(&options.SyncAfterRestart, "sync-after-restart", false, "Sync add events after reloader restarts")
    cmd.PersistentFlags().BoolVar(&options.EnablePProf, "enable-pprof", false, "Enable pprof for profiling")
    cmd.PersistentFlags().StringVar(&options.PProfAddr, "pprof-addr", ":6060", "Address to start pprof server on. Default is :6060")
    cmd.PersistentFlags().BoolVar(&options.EnableCSIIntegration, "enable-csi-integration", false, "Enables CSI integration. Default is false")
}
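
Note: a minimal sketch of wiring these registrations into a root command; the command name and empty Run body are illustrative, not the project's actual entrypoint.

package example

import (
    "github.com/spf13/cobra"

    "github.com/stakater/Reloader/internal/pkg/util"
)

func newRootCmd() *cobra.Command {
    cmd := &cobra.Command{
        Use: "reloader", // assumed command name
        Run: func(cmd *cobra.Command, args []string) {
            // start controllers here
        },
    }
    // Registers all persistent flags (annotations, selectors, strategies)
    // before Execute() parses os.Args.
    util.ConfigureReloaderFlags(cmd)
    return cmd
}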

func GetIgnoredResourcesList() (List, error) {

    ignoredResourcesList := options.ResourcesToIgnore // getStringSliceFromFlags(cmd, "resources-to-ignore")

    for _, v := range ignoredResourcesList {
        if v != "configMaps" && v != "secrets" {
            return nil, fmt.Errorf("'resources-to-ignore' only accepts 'configMaps' or 'secrets', not '%s'", v)
        }
    }

    if len(ignoredResourcesList) > 1 {
        return nil, errors.New("'resources-to-ignore' only accepts 'configMaps' or 'secrets', not both")
    }

    return ignoredResourcesList, nil
}

func GetIgnoredWorkloadTypesList() (List, error) {

    ignoredWorkloadTypesList := options.WorkloadTypesToIgnore

    for _, v := range ignoredWorkloadTypesList {
        if v != "jobs" && v != "cronjobs" {
            return nil, fmt.Errorf("'ignored-workload-types' accepts 'jobs', 'cronjobs', or both, not '%s'", v)
        }
    }

    return ignoredWorkloadTypesList, nil
}
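
Note: combined with List.Contains, the validated list is typically consulted like this; a sketch assuming the flag has already been parsed into options.WorkloadTypesToIgnore.

package example

import (
    "github.com/stakater/Reloader/internal/pkg/util"
)

func shouldSkipWorkload(kind string) bool {
    ignored, err := util.GetIgnoredWorkloadTypesList()
    if err != nil {
        return false // invalid flag values: fall back to processing everything
    }
    // Contains does exact, case-sensitive matching ("jobs", not "Jobs").
    return ignored.Contains(kind)
}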

@@ -3,6 +3,7 @@ package util
import (
    "testing"

+   "github.com/stakater/Reloader/internal/pkg/options"
    v1 "k8s.io/api/core/v1"
)

@@ -45,3 +46,141 @@ func TestGetHashFromConfigMap(t *testing.T) {
    }
}

func TestGetIgnoredWorkloadTypesList(t *testing.T) {
    // Save original state
    originalWorkloadTypes := options.WorkloadTypesToIgnore
    defer func() {
        options.WorkloadTypesToIgnore = originalWorkloadTypes
    }()

    tests := []struct {
        name          string
        workloadTypes []string
        expectError   bool
        expected      []string
    }{
        {
            name:          "Both jobs and cronjobs",
            workloadTypes: []string{"jobs", "cronjobs"},
            expectError:   false,
            expected:      []string{"jobs", "cronjobs"},
        },
        {
            name:          "Only jobs",
            workloadTypes: []string{"jobs"},
            expectError:   false,
            expected:      []string{"jobs"},
        },
        {
            name:          "Only cronjobs",
            workloadTypes: []string{"cronjobs"},
            expectError:   false,
            expected:      []string{"cronjobs"},
        },
        {
            name:          "Empty list",
            workloadTypes: []string{},
            expectError:   false,
            expected:      []string{},
        },
        {
            name:          "Invalid workload type",
            workloadTypes: []string{"invalid"},
            expectError:   true,
            expected:      nil,
        },
        {
            name:          "Mixed valid and invalid",
            workloadTypes: []string{"jobs", "invalid"},
            expectError:   true,
            expected:      nil,
        },
        {
            name:          "Duplicate values",
            workloadTypes: []string{"jobs", "jobs"},
            expectError:   false,
            expected:      []string{"jobs", "jobs"},
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Set the global option
            options.WorkloadTypesToIgnore = tt.workloadTypes

            result, err := GetIgnoredWorkloadTypesList()

            if tt.expectError && err == nil {
                t.Errorf("Expected error but got none")
            }

            if !tt.expectError && err != nil {
                t.Errorf("Expected no error but got: %v", err)
            }

            if !tt.expectError {
                if len(result) != len(tt.expected) {
                    t.Errorf("Expected %v, got %v", tt.expected, result)
                    return
                }

                for i, expected := range tt.expected {
                    if i >= len(result) || result[i] != expected {
                        t.Errorf("Expected %v, got %v", tt.expected, result)
                        break
                    }
                }
            }
        })
    }
}

func TestListContains(t *testing.T) {
    tests := []struct {
        name     string
        list     List
        item     string
        expected bool
    }{
        {
            name:     "List contains item",
            list:     List{"jobs", "cronjobs"},
            item:     "jobs",
            expected: true,
        },
        {
            name:     "List does not contain item",
            list:     List{"jobs"},
            item:     "cronjobs",
            expected: false,
        },
        {
            name:     "Empty list",
            list:     List{},
            item:     "jobs",
            expected: false,
        },
        {
            name:     "Case sensitive matching",
            list:     List{"jobs", "cronjobs"},
            item:     "Jobs",
            expected: false,
        },
        {
            name:     "Multiple occurrences",
            list:     List{"jobs", "jobs", "cronjobs"},
            item:     "jobs",
            expected: true,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            result := tt.list.Contains(tt.item)
            if result != tt.expected {
                t.Errorf("Expected %v, got %v", tt.expected, result)
            }
        })
    }
}

376 pkg/common/common.go (new file)
@@ -0,0 +1,376 @@
package common

import (
    "context"
    "os"
    "regexp"
    "strconv"
    "strings"

    "github.com/sirupsen/logrus"
    "github.com/stakater/Reloader/internal/pkg/constants"
    "github.com/stakater/Reloader/internal/pkg/options"
    "github.com/stakater/Reloader/internal/pkg/util"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/labels"
    "k8s.io/client-go/kubernetes"
)

type Map map[string]string

type ReloadCheckResult struct {
    ShouldReload bool
    AutoReload   bool
}

// ReloaderOptions contains all configurable options for the Reloader controller.
// These options control how Reloader behaves when watching for changes in ConfigMaps and Secrets.
type ReloaderOptions struct {
    // AutoReloadAll enables automatic reloading of all resources when their corresponding ConfigMaps/Secrets are updated
    AutoReloadAll bool `json:"autoReloadAll"`
    // ConfigmapUpdateOnChangeAnnotation is the annotation key used to detect changes in ConfigMaps specified by name
    ConfigmapUpdateOnChangeAnnotation string `json:"configmapUpdateOnChangeAnnotation"`
    // SecretUpdateOnChangeAnnotation is the annotation key used to detect changes in Secrets specified by name
    SecretUpdateOnChangeAnnotation string `json:"secretUpdateOnChangeAnnotation"`
    // SecretProviderClassUpdateOnChangeAnnotation is the annotation key used to detect changes in SecretProviderClasses specified by name
    SecretProviderClassUpdateOnChangeAnnotation string `json:"secretProviderClassUpdateOnChangeAnnotation"`
    // ReloaderAutoAnnotation is the annotation key used to detect changes in any referenced ConfigMaps or Secrets
    ReloaderAutoAnnotation string `json:"reloaderAutoAnnotation"`
    // IgnoreResourceAnnotation is the annotation key used to ignore resources from being watched
    IgnoreResourceAnnotation string `json:"ignoreResourceAnnotation"`
    // ConfigmapReloaderAutoAnnotation is the annotation key used to detect changes in ConfigMaps only
    ConfigmapReloaderAutoAnnotation string `json:"configmapReloaderAutoAnnotation"`
    // SecretReloaderAutoAnnotation is the annotation key used to detect changes in Secrets only
    SecretReloaderAutoAnnotation string `json:"secretReloaderAutoAnnotation"`
    // SecretProviderClassReloaderAutoAnnotation is the annotation key used to detect changes in SecretProviderClasses only
    SecretProviderClassReloaderAutoAnnotation string `json:"secretProviderClassReloaderAutoAnnotation"`
    // ConfigmapExcludeReloaderAnnotation is the annotation key containing comma-separated list of ConfigMaps to exclude from watching
    ConfigmapExcludeReloaderAnnotation string `json:"configmapExcludeReloaderAnnotation"`
    // SecretExcludeReloaderAnnotation is the annotation key containing comma-separated list of Secrets to exclude from watching
    SecretExcludeReloaderAnnotation string `json:"secretExcludeReloaderAnnotation"`
    // SecretProviderClassExcludeReloaderAnnotation is the annotation key containing comma-separated list of SecretProviderClasses to exclude from watching
    SecretProviderClassExcludeReloaderAnnotation string `json:"secretProviderClassExcludeReloaderAnnotation"`
    // AutoSearchAnnotation is the annotation key used to detect changes in ConfigMaps/Secrets tagged with SearchMatchAnnotation
    AutoSearchAnnotation string `json:"autoSearchAnnotation"`
    // SearchMatchAnnotation is the annotation key used to tag ConfigMaps/Secrets to be found by AutoSearchAnnotation
    SearchMatchAnnotation string `json:"searchMatchAnnotation"`
    // RolloutStrategyAnnotation is the annotation key used to define the rollout update strategy for workloads
    RolloutStrategyAnnotation string `json:"rolloutStrategyAnnotation"`
    // PauseDeploymentAnnotation is the annotation key used to define the time period to pause a deployment after a configmap/secret change has been detected
    PauseDeploymentAnnotation string `json:"pauseDeploymentAnnotation"`
    // PauseDeploymentTimeAnnotation is the annotation key used to indicate when a deployment was paused by Reloader
    PauseDeploymentTimeAnnotation string `json:"pauseDeploymentTimeAnnotation"`

    // LogFormat specifies the log format to use (json, or empty string for default text format)
    LogFormat string `json:"logFormat"`
    // LogLevel specifies the log level to use (trace, debug, info, warning, error, fatal, panic)
    LogLevel string `json:"logLevel"`
    // IsArgoRollouts indicates whether support for Argo Rollouts is enabled
    IsArgoRollouts bool `json:"isArgoRollouts"`
    // ReloadStrategy specifies the strategy used to trigger resource reloads (env-vars or annotations)
    ReloadStrategy string `json:"reloadStrategy"`
    // ReloadOnCreate indicates whether to trigger reloads when ConfigMaps/Secrets are created
    ReloadOnCreate bool `json:"reloadOnCreate"`
    // ReloadOnDelete indicates whether to trigger reloads when ConfigMaps/Secrets are deleted
    ReloadOnDelete bool `json:"reloadOnDelete"`
    // SyncAfterRestart indicates whether to sync add events after Reloader restarts (only works when ReloadOnCreate is true)
    SyncAfterRestart bool `json:"syncAfterRestart"`
    // EnableHA indicates whether High Availability mode is enabled with leader election
    EnableHA bool `json:"enableHA"`
    // EnableCSIIntegration indicates whether CSI integration is enabled to watch SecretProviderClassPodStatus
    EnableCSIIntegration bool `json:"enableCSIIntegration"`
    // WebhookUrl is the URL to send webhook notifications to instead of performing reloads
    WebhookUrl string `json:"webhookUrl"`
    // ResourcesToIgnore is a list of resource types to ignore (e.g., "configmaps" or "secrets")
    ResourcesToIgnore []string `json:"resourcesToIgnore"`
    // WorkloadTypesToIgnore is a list of workload types to ignore (e.g., "jobs" or "cronjobs")
    WorkloadTypesToIgnore []string `json:"workloadTypesToIgnore"`
    // NamespaceSelectors is a list of label selectors to filter namespaces to watch
    NamespaceSelectors []string `json:"namespaceSelectors"`
    // ResourceSelectors is a list of label selectors to filter ConfigMaps and Secrets to watch
    ResourceSelectors []string `json:"resourceSelectors"`
    // NamespacesToIgnore is a list of namespace names to ignore when watching for changes
    NamespacesToIgnore []string `json:"namespacesToIgnore"`
    // EnablePProf enables pprof for profiling
    EnablePProf bool `json:"enablePProf"`
    // PProfAddr is the address to start pprof server on
    PProfAddr string `json:"pprofAddr"`
}
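
Note: because every field carries a JSON tag, the options snapshot can be marshalled directly — presumably how it ends up inside the meta-info ConfigMap via MetaInfo.ToConfigMap (defined elsewhere in this change). A sketch with illustrative values:

package example

import (
    "encoding/json"
    "fmt"

    "github.com/stakater/Reloader/pkg/common"
)

func exampleOptionsJSON() {
    opts := common.ReloaderOptions{
        AutoReloadAll:  true,
        ReloadStrategy: "env-vars", // illustrative value
    }
    raw, _ := json.MarshalIndent(opts, "", "  ")
    fmt.Println(string(raw)) // {"autoReloadAll": true, ...}
}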

var CommandLineOptions *ReloaderOptions

func PublishMetaInfoConfigmap(clientset kubernetes.Interface) {
    namespace := os.Getenv("RELOADER_NAMESPACE")
    if namespace == "" {
        logrus.Warn("RELOADER_NAMESPACE is not set, skipping meta info configmap creation")
        return
    }

    metaInfo := &MetaInfo{
        BuildInfo:       *NewBuildInfo(),
        ReloaderOptions: *GetCommandLineOptions(),
        DeploymentInfo: metav1.ObjectMeta{
            Name:      os.Getenv("RELOADER_DEPLOYMENT_NAME"),
            Namespace: namespace,
        },
    }

    configMap := metaInfo.ToConfigMap()

    if _, err := clientset.CoreV1().ConfigMaps(namespace).Get(context.Background(), configMap.Name, metav1.GetOptions{}); err == nil {
        logrus.Info("Meta info configmap already exists, updating it")
        _, err = clientset.CoreV1().ConfigMaps(namespace).Update(context.Background(), configMap, metav1.UpdateOptions{})
        if err != nil {
            logrus.Warn("Failed to update existing meta info configmap: ", err)
        }
        return
    }

    _, err := clientset.CoreV1().ConfigMaps(namespace).Create(context.Background(), configMap, metav1.CreateOptions{})
    if err != nil {
        logrus.Warn("Failed to create meta info configmap: ", err)
    }
}
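
Note: a sketch of exercising the publisher in a test with client-go's fake clientset; the env var values are placeholders. Without RELOADER_NAMESPACE set, the function warns and returns early.

package example

import (
    "os"

    "k8s.io/client-go/kubernetes/fake"

    "github.com/stakater/Reloader/pkg/common"
)

func examplePublishMetaInfo() {
    // The publisher reads both variables from the environment.
    os.Setenv("RELOADER_NAMESPACE", "reloader-system")
    os.Setenv("RELOADER_DEPLOYMENT_NAME", "reloader")

    client := fake.NewSimpleClientset()
    common.PublishMetaInfoConfigmap(client)
}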

func GetNamespaceLabelSelector(slice []string) (string, error) {
    for i, kv := range slice {
        // Legacy support for ":" as a delimiter and "*" for wildcard.
        if strings.Contains(kv, ":") {
            split := strings.Split(kv, ":")
            if split[1] == "*" {
                slice[i] = split[0]
            } else {
                slice[i] = split[0] + "=" + split[1]
            }
        }
        // Convert wildcard to valid apimachinery operator
        if strings.Contains(kv, "=") {
            split := strings.Split(kv, "=")
            if split[1] == "*" {
                slice[i] = split[0]
            }
        }
    }

    namespaceLabelSelector := strings.Join(slice[:], ",")
    _, err := labels.Parse(namespaceLabelSelector)
    if err != nil {
        logrus.Fatal(err)
    }

    return namespaceLabelSelector, nil
}
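
Note: to make the legacy normalization concrete — "env:prod" becomes "env=prod", and a "*" wildcard collapses to a bare key, which apimachinery treats as an "exists" requirement. A runnable sketch:

package example

import (
    "fmt"

    "github.com/stakater/Reloader/pkg/common"
)

func exampleSelector() {
    // Legacy ":" form and "*" wildcards are folded into one
    // apimachinery-compatible selector string.
    sel, _ := common.GetNamespaceLabelSelector([]string{"env:prod", "team:*"})
    fmt.Println(sel) // "env=prod,team"
}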

func GetResourceLabelSelector(slice []string) (string, error) {
    for i, kv := range slice {
        // Legacy support for ":" as a delimiter and "*" for wildcard.
        if strings.Contains(kv, ":") {
            split := strings.Split(kv, ":")
            if split[1] == "*" {
                slice[i] = split[0]
            } else {
                slice[i] = split[0] + "=" + split[1]
            }
        }
        // Convert wildcard to valid apimachinery operator
        if strings.Contains(kv, "=") {
            split := strings.Split(kv, "=")
            if split[1] == "*" {
                slice[i] = split[0]
            }
        }
    }

    resourceLabelSelector := strings.Join(slice[:], ",")
    _, err := labels.Parse(resourceLabelSelector)
    if err != nil {
        logrus.Fatal(err)
    }

    return resourceLabelSelector, nil
}

// ShouldReload checks if a resource should be reloaded based on its annotations and the provided options.
func ShouldReload(config Config, resourceType string, annotations Map, podAnnotations Map, options *ReloaderOptions) ReloadCheckResult {

    // Check if this workload type should be ignored
    if len(options.WorkloadTypesToIgnore) > 0 {
        ignoredWorkloadTypes, err := util.GetIgnoredWorkloadTypesList()
        if err != nil {
            logrus.Errorf("Failed to parse ignored workload types: %v", err)
        } else {
            // Map Kubernetes resource types to CLI-friendly names for comparison
            var resourceToCheck string
            switch resourceType {
            case "Job":
                resourceToCheck = "jobs"
            case "CronJob":
                resourceToCheck = "cronjobs"
            default:
                resourceToCheck = resourceType // For other types, use as-is
            }

            // Check if current resource type should be ignored
            if ignoredWorkloadTypes.Contains(resourceToCheck) {
                return ReloadCheckResult{
                    ShouldReload: false,
                }
            }
        }
    }

    ignoreResourceAnnotatonValue := config.ResourceAnnotations[options.IgnoreResourceAnnotation]
    if ignoreResourceAnnotatonValue == "true" {
        return ReloadCheckResult{
            ShouldReload: false,
        }
    }

    annotationValue, found := annotations[config.Annotation]
    searchAnnotationValue, foundSearchAnn := annotations[options.AutoSearchAnnotation]
    reloaderEnabledValue, foundAuto := annotations[options.ReloaderAutoAnnotation]
    typedAutoAnnotationEnabledValue, foundTypedAuto := annotations[config.TypedAutoAnnotation]
    excludeConfigmapAnnotationValue, foundExcludeConfigmap := annotations[options.ConfigmapExcludeReloaderAnnotation]
    excludeSecretAnnotationValue, foundExcludeSecret := annotations[options.SecretExcludeReloaderAnnotation]
    excludeSecretProviderClassProviderAnnotationValue, foundExcludeSecretProviderClass := annotations[options.SecretProviderClassExcludeReloaderAnnotation]

    if !found && !foundAuto && !foundTypedAuto && !foundSearchAnn {
        annotations = podAnnotations
        annotationValue = annotations[config.Annotation]
        searchAnnotationValue = annotations[options.AutoSearchAnnotation]
        reloaderEnabledValue = annotations[options.ReloaderAutoAnnotation]
        typedAutoAnnotationEnabledValue = annotations[config.TypedAutoAnnotation]
    }

    isResourceExcluded := false

    switch config.Type {
    case constants.ConfigmapEnvVarPostfix:
        if foundExcludeConfigmap {
            isResourceExcluded = checkIfResourceIsExcluded(config.ResourceName, excludeConfigmapAnnotationValue)
        }
    case constants.SecretEnvVarPostfix:
        if foundExcludeSecret {
            isResourceExcluded = checkIfResourceIsExcluded(config.ResourceName, excludeSecretAnnotationValue)
        }

    case constants.SecretProviderClassEnvVarPostfix:
        if foundExcludeSecretProviderClass {
            isResourceExcluded = checkIfResourceIsExcluded(config.ResourceName, excludeSecretProviderClassProviderAnnotationValue)
        }
    }

    if isResourceExcluded {
        return ReloadCheckResult{
            ShouldReload: false,
        }
    }

    values := strings.Split(annotationValue, ",")
    for _, value := range values {
        value = strings.TrimSpace(value)
        re := regexp.MustCompile("^" + value + "$")
        if re.Match([]byte(config.ResourceName)) {
            return ReloadCheckResult{
                ShouldReload: true,
                AutoReload:   false,
            }
        }
    }

    if searchAnnotationValue == "true" {
        matchAnnotationValue := config.ResourceAnnotations[options.SearchMatchAnnotation]
        if matchAnnotationValue == "true" {
            return ReloadCheckResult{
                ShouldReload: true,
                AutoReload:   true,
            }
        }
    }

    reloaderEnabled, _ := strconv.ParseBool(reloaderEnabledValue)
    typedAutoAnnotationEnabled, _ := strconv.ParseBool(typedAutoAnnotationEnabledValue)
    if reloaderEnabled || typedAutoAnnotationEnabled || reloaderEnabledValue == "" && typedAutoAnnotationEnabledValue == "" && options.AutoReloadAll {
        return ReloadCheckResult{
            ShouldReload: true,
            AutoReload:   true,
        }
    }

    return ReloadCheckResult{
        ShouldReload: false,
    }
}
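
Note: a usage sketch of the decision helper; the Config fields and annotation mirror the test fixtures in this change, and the printed result is what the explicit-name branch above returns.

package example

import (
    "fmt"

    "github.com/stakater/Reloader/pkg/common"
)

func exampleShouldReload() {
    config := common.Config{
        ResourceName: "test-config",
        Annotation:   "configmap.reloader.stakater.com/reload",
    }
    annotations := common.Map{
        // Workload opted in to reload when "test-config" changes.
        "configmap.reloader.stakater.com/reload": "test-config",
    }
    opts := &common.ReloaderOptions{}

    result := common.ShouldReload(config, "Deployment", annotations, common.Map{}, opts)
    fmt.Println(result.ShouldReload, result.AutoReload) // true false
}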

func checkIfResourceIsExcluded(resourceName, excludedResources string) bool {
    if excludedResources == "" {
        return false
    }

    excludedResourcesList := strings.Split(excludedResources, ",")
    for _, excludedResource := range excludedResourcesList {
        if strings.TrimSpace(excludedResource) == resourceName {
            return true
        }
    }

    return false
}

func init() {
    GetCommandLineOptions()
}

func GetCommandLineOptions() *ReloaderOptions {
    if CommandLineOptions == nil {
        CommandLineOptions = &ReloaderOptions{}
    }

    CommandLineOptions.AutoReloadAll = options.AutoReloadAll
    CommandLineOptions.ConfigmapUpdateOnChangeAnnotation = options.ConfigmapUpdateOnChangeAnnotation
    CommandLineOptions.SecretUpdateOnChangeAnnotation = options.SecretUpdateOnChangeAnnotation
    CommandLineOptions.SecretProviderClassUpdateOnChangeAnnotation = options.SecretProviderClassUpdateOnChangeAnnotation
    CommandLineOptions.ReloaderAutoAnnotation = options.ReloaderAutoAnnotation
    CommandLineOptions.IgnoreResourceAnnotation = options.IgnoreResourceAnnotation
    CommandLineOptions.ConfigmapReloaderAutoAnnotation = options.ConfigmapReloaderAutoAnnotation
    CommandLineOptions.SecretReloaderAutoAnnotation = options.SecretReloaderAutoAnnotation
    CommandLineOptions.SecretProviderClassReloaderAutoAnnotation = options.SecretProviderClassReloaderAutoAnnotation
    CommandLineOptions.ConfigmapExcludeReloaderAnnotation = options.ConfigmapExcludeReloaderAnnotation
    CommandLineOptions.SecretExcludeReloaderAnnotation = options.SecretExcludeReloaderAnnotation
    CommandLineOptions.SecretProviderClassExcludeReloaderAnnotation = options.SecretProviderClassExcludeReloaderAnnotation
    CommandLineOptions.AutoSearchAnnotation = options.AutoSearchAnnotation
    CommandLineOptions.SearchMatchAnnotation = options.SearchMatchAnnotation
    CommandLineOptions.RolloutStrategyAnnotation = options.RolloutStrategyAnnotation
    CommandLineOptions.PauseDeploymentAnnotation = options.PauseDeploymentAnnotation
    CommandLineOptions.PauseDeploymentTimeAnnotation = options.PauseDeploymentTimeAnnotation
    CommandLineOptions.LogFormat = options.LogFormat
    CommandLineOptions.LogLevel = options.LogLevel
    CommandLineOptions.ReloadStrategy = options.ReloadStrategy
    CommandLineOptions.SyncAfterRestart = options.SyncAfterRestart
    CommandLineOptions.EnableHA = options.EnableHA
    CommandLineOptions.EnableCSIIntegration = options.EnableCSIIntegration
    CommandLineOptions.WebhookUrl = options.WebhookUrl
    CommandLineOptions.ResourcesToIgnore = options.ResourcesToIgnore
    CommandLineOptions.WorkloadTypesToIgnore = options.WorkloadTypesToIgnore
    CommandLineOptions.NamespaceSelectors = options.NamespaceSelectors
    CommandLineOptions.ResourceSelectors = options.ResourceSelectors
    CommandLineOptions.NamespacesToIgnore = options.NamespacesToIgnore
    CommandLineOptions.IsArgoRollouts = parseBool(options.IsArgoRollouts)
    CommandLineOptions.ReloadOnCreate = parseBool(options.ReloadOnCreate)
    CommandLineOptions.ReloadOnDelete = parseBool(options.ReloadOnDelete)
    CommandLineOptions.EnablePProf = options.EnablePProf
    CommandLineOptions.PProfAddr = options.PProfAddr

    return CommandLineOptions
}

func parseBool(value string) bool {
    if value == "" {
        return false
    }
    result, err := strconv.ParseBool(value)
    if err != nil {
        return false // Default to false if parsing fails
    }
    return result
}

224 pkg/common/common_test.go (new file)
@@ -0,0 +1,224 @@
package common

import (
    "testing"

    "github.com/stakater/Reloader/internal/pkg/options"
)

func TestShouldReload_IgnoredWorkloadTypes(t *testing.T) {
    // Save original state
    originalWorkloadTypes := options.WorkloadTypesToIgnore
    defer func() {
        options.WorkloadTypesToIgnore = originalWorkloadTypes
    }()

    tests := []struct {
        name                 string
        ignoredWorkloadTypes []string
        resourceType         string
        shouldReload         bool
        description          string
    }{
        {
            name:                 "Jobs ignored - Job should not reload",
            ignoredWorkloadTypes: []string{"jobs"},
            resourceType:         "Job",
            shouldReload:         false,
            description:          "When jobs are ignored, Job resources should not be reloaded",
        },
        {
            name:                 "Jobs ignored - CronJob should reload",
            ignoredWorkloadTypes: []string{"jobs"},
            resourceType:         "CronJob",
            shouldReload:         true,
            description:          "When jobs are ignored, CronJob resources should still be processed",
        },
        {
            name:                 "CronJobs ignored - CronJob should not reload",
            ignoredWorkloadTypes: []string{"cronjobs"},
            resourceType:         "CronJob",
            shouldReload:         false,
            description:          "When cronjobs are ignored, CronJob resources should not be reloaded",
        },
        {
            name:                 "CronJobs ignored - Job should reload",
            ignoredWorkloadTypes: []string{"cronjobs"},
            resourceType:         "Job",
            shouldReload:         true,
            description:          "When cronjobs are ignored, Job resources should still be processed",
        },
        {
            name:                 "Both ignored - Job should not reload",
            ignoredWorkloadTypes: []string{"jobs", "cronjobs"},
            resourceType:         "Job",
            shouldReload:         false,
            description:          "When both are ignored, Job resources should not be reloaded",
        },
        {
            name:                 "Both ignored - CronJob should not reload",
            ignoredWorkloadTypes: []string{"jobs", "cronjobs"},
            resourceType:         "CronJob",
            shouldReload:         false,
            description:          "When both are ignored, CronJob resources should not be reloaded",
        },
        {
            name:                 "Both ignored - Deployment should reload",
            ignoredWorkloadTypes: []string{"jobs", "cronjobs"},
            resourceType:         "Deployment",
            shouldReload:         true,
            description:          "When both are ignored, other workload types should still be processed",
        },
        {
            name:                 "None ignored - Job should reload",
            ignoredWorkloadTypes: []string{},
            resourceType:         "Job",
            shouldReload:         true,
            description:          "When nothing is ignored, all workload types should be processed",
        },
        {
            name:                 "None ignored - CronJob should reload",
            ignoredWorkloadTypes: []string{},
            resourceType:         "CronJob",
            shouldReload:         true,
            description:          "When nothing is ignored, all workload types should be processed",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Set the ignored workload types
            options.WorkloadTypesToIgnore = tt.ignoredWorkloadTypes

            // Create minimal test config and options
            config := Config{
                ResourceName: "test-resource",
                Annotation:   "configmap.reloader.stakater.com/reload",
            }

            annotations := Map{
                "configmap.reloader.stakater.com/reload": "test-config",
            }

            // Create ReloaderOptions with the ignored workload types
            opts := &ReloaderOptions{
                WorkloadTypesToIgnore:  tt.ignoredWorkloadTypes,
                AutoReloadAll:          true, // Enable auto-reload to simplify test
                ReloaderAutoAnnotation: "reloader.stakater.com/auto",
            }

            // Call ShouldReload
            result := ShouldReload(config, tt.resourceType, annotations, Map{}, opts)

            // Check the result
            if result.ShouldReload != tt.shouldReload {
                t.Errorf("For resource type %s with ignored types %v, expected ShouldReload=%v, got=%v",
                    tt.resourceType, tt.ignoredWorkloadTypes, tt.shouldReload, result.ShouldReload)
            }

            t.Logf("✓ %s", tt.description)
        })
    }
}

func TestShouldReload_IgnoredWorkloadTypes_ValidationError(t *testing.T) {
    // Save original state
    originalWorkloadTypes := options.WorkloadTypesToIgnore
    defer func() {
        options.WorkloadTypesToIgnore = originalWorkloadTypes
    }()

    // Test with invalid workload type - should still continue processing
    options.WorkloadTypesToIgnore = []string{"invalid"}

    config := Config{
        ResourceName: "test-resource",
        Annotation:   "configmap.reloader.stakater.com/reload",
    }

    annotations := Map{
        "configmap.reloader.stakater.com/reload": "test-config",
    }

    opts := &ReloaderOptions{
        WorkloadTypesToIgnore:  []string{"invalid"},
        AutoReloadAll:          true, // Enable auto-reload to simplify test
        ReloaderAutoAnnotation: "reloader.stakater.com/auto",
    }

    // Should not panic and should continue with normal processing
    result := ShouldReload(config, "Job", annotations, Map{}, opts)

    // Since validation failed, it should continue with normal processing (should reload)
    if !result.ShouldReload {
        t.Errorf("Expected ShouldReload=true when validation fails, got=%v", result.ShouldReload)
    }
}

// Test that validates the fix for issue #996
func TestShouldReload_IssueRBACPermissionFixed(t *testing.T) {
    // Save original state
    originalWorkloadTypes := options.WorkloadTypesToIgnore
    defer func() {
        options.WorkloadTypesToIgnore = originalWorkloadTypes
    }()

    tests := []struct {
        name                 string
        ignoredWorkloadTypes []string
        resourceType         string
        description          string
    }{
        {
            name:                 "Issue #996 - ignoreJobs prevents Job processing",
            ignoredWorkloadTypes: []string{"jobs"},
            resourceType:         "Job",
            description:          "Job resources are skipped entirely, preventing RBAC permission errors",
        },
        {
            name:                 "Issue #996 - ignoreCronJobs prevents CronJob processing",
            ignoredWorkloadTypes: []string{"cronjobs"},
            resourceType:         "CronJob",
            description:          "CronJob resources are skipped entirely, preventing RBAC permission errors",
        },
        {
            name:                 "Issue #996 - both ignored prevent both types",
            ignoredWorkloadTypes: []string{"jobs", "cronjobs"},
            resourceType:         "Job",
            description:          "Job resources are skipped entirely when both types are ignored",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Set the ignored workload types
            options.WorkloadTypesToIgnore = tt.ignoredWorkloadTypes

            config := Config{
                ResourceName: "test-resource",
                Annotation:   "configmap.reloader.stakater.com/reload",
            }

            annotations := Map{
                "configmap.reloader.stakater.com/reload": "test-config",
            }

            opts := &ReloaderOptions{
                WorkloadTypesToIgnore: tt.ignoredWorkloadTypes,
                AutoReloadAll:         true, // Enable auto-reload to simplify test
|
||||
ReloaderAutoAnnotation: "reloader.stakater.com/auto",
|
||||
}
|
||||
|
||||
// Call ShouldReload
|
||||
result := ShouldReload(config, tt.resourceType, annotations, Map{}, opts)
|
||||
|
||||
// Should not reload when workload type is ignored
|
||||
if result.ShouldReload {
|
||||
t.Errorf("Expected ShouldReload=false for ignored workload type %s, got=%v",
|
||||
tt.resourceType, result.ShouldReload)
|
||||
}
|
||||
|
||||
t.Logf("✓ %s", tt.description)
|
||||
})
|
||||
}
|
||||
}
pkg/common/config.go (new file, 62 lines)
@@ -0,0 +1,62 @@
package common

import (
	"github.com/stakater/Reloader/internal/pkg/constants"
	"github.com/stakater/Reloader/internal/pkg/options"
	"github.com/stakater/Reloader/internal/pkg/util"
	v1 "k8s.io/api/core/v1"
	csiv1 "sigs.k8s.io/secrets-store-csi-driver/apis/v1"
)

// Config contains rolling upgrade configuration parameters
type Config struct {
	Namespace           string
	ResourceName        string
	ResourceAnnotations map[string]string
	Annotation          string
	TypedAutoAnnotation string
	SHAValue            string
	Type                string
	Labels              map[string]string
}

// GetConfigmapConfig provides utility config for configmap
func GetConfigmapConfig(configmap *v1.ConfigMap) Config {
	return Config{
		Namespace:           configmap.Namespace,
		ResourceName:        configmap.Name,
		ResourceAnnotations: configmap.Annotations,
		Annotation:          options.ConfigmapUpdateOnChangeAnnotation,
		TypedAutoAnnotation: options.ConfigmapReloaderAutoAnnotation,
		SHAValue:            util.GetSHAfromConfigmap(configmap),
		Type:                constants.ConfigmapEnvVarPostfix,
		Labels:              configmap.Labels,
	}
}

// GetSecretConfig provides utility config for secret
func GetSecretConfig(secret *v1.Secret) Config {
	return Config{
		Namespace:           secret.Namespace,
		ResourceName:        secret.Name,
		ResourceAnnotations: secret.Annotations,
		Annotation:          options.SecretUpdateOnChangeAnnotation,
		TypedAutoAnnotation: options.SecretReloaderAutoAnnotation,
		SHAValue:            util.GetSHAfromSecret(secret.Data),
		Type:                constants.SecretEnvVarPostfix,
		Labels:              secret.Labels,
	}
}

func GetSecretProviderClassPodStatusConfig(podStatus *csiv1.SecretProviderClassPodStatus) Config {
	// As csi injects SecretProviderClass, we will create config for it instead of SecretProviderClassPodStatus
	// ResourceAnnotations will be retrieved during PerformAction call
	return Config{
		Namespace:           podStatus.Namespace,
		ResourceName:        podStatus.Status.SecretProviderClassName,
		Annotation:          options.SecretProviderClassUpdateOnChangeAnnotation,
		TypedAutoAnnotation: options.SecretProviderClassReloaderAutoAnnotation,
		SHAValue:            util.GetSHAfromSecretProviderClassPodStatus(podStatus.Status),
		Type:                constants.SecretProviderClassEnvVarPostfix,
	}
}
pkg/common/metainfo.go (new file, 134 lines)
@@ -0,0 +1,134 @@
package common

import (
	"encoding/json"
	"fmt"
	"runtime"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Version, Commit, and BuildDate are set during the build process
// using the -X linker flag to inject these values into the binary.
// They provide metadata about the build version, commit hash, build date, and whether there are
// uncommitted changes in the source code at the time of build.
// This information is useful for debugging and tracking the specific build of the Reloader binary.
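// A typical build invocation would look like the following (an assumed
// example; the exact flags used by the project's build pipeline may differ):
//
//	go build -ldflags "-X github.com/stakater/Reloader/pkg/common.Version=v1.2.3 \
//	  -X github.com/stakater/Reloader/pkg/common.Commit=abc1234 \
//	  -X github.com/stakater/Reloader/pkg/common.BuildDate=2025-01-01T00:00:00Z"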
var Version = "dev"
var Commit = "unknown"
var BuildDate = "unknown"
var Edition = "oss"

const (
	MetaInfoConfigmapName       = "reloader-meta-info"
	MetaInfoConfigmapLabelKey   = "reloader.stakater.com/meta-info"
	MetaInfoConfigmapLabelValue = "reloader"
)

// MetaInfo contains comprehensive metadata about the Reloader instance.
// This includes build information, configuration options, and deployment details.
type MetaInfo struct {
	// BuildInfo contains information about the build version, commit, and compilation details
	BuildInfo BuildInfo `json:"buildInfo"`
	// ReloaderOptions contains all the configuration options and flags used by this Reloader instance
	ReloaderOptions ReloaderOptions `json:"reloaderOptions"`
	// DeploymentInfo contains metadata about the Kubernetes deployment of this Reloader instance
	DeploymentInfo metav1.ObjectMeta `json:"deploymentInfo"`
}

// BuildInfo contains information about the build and version of the Reloader binary.
// This includes Go version, release version, commit details, and build timestamp.
type BuildInfo struct {
	// GoVersion is the version of Go used to compile the binary
	GoVersion string `json:"goVersion"`
	// ReleaseVersion is the version tag or branch of the Reloader release
	ReleaseVersion string `json:"releaseVersion"`
	// CommitHash is the Git commit hash of the source code used to build this binary
	CommitHash string `json:"commitHash"`
	// CommitTime is the timestamp of the Git commit used to build this binary
	CommitTime time.Time `json:"commitTime"`

	// Edition indicates the edition of Reloader (e.g., OSS, Enterprise)
	Edition string `json:"edition"`
}

func NewBuildInfo() *BuildInfo {
	metaInfo := &BuildInfo{
		GoVersion:      runtime.Version(),
		ReleaseVersion: Version,
		CommitHash:     Commit,
		CommitTime:     ParseUTCTime(BuildDate),
		Edition:        Edition,
	}

	return metaInfo
}

func (m *MetaInfo) ToConfigMap() *v1.ConfigMap {
	return &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      MetaInfoConfigmapName,
			Namespace: m.DeploymentInfo.Namespace,
			Labels: map[string]string{
				MetaInfoConfigmapLabelKey: MetaInfoConfigmapLabelValue,
			},
		},
		Data: map[string]string{
			"buildInfo":       toJson(m.BuildInfo),
			"reloaderOptions": toJson(m.ReloaderOptions),
			"deploymentInfo":  toJson(m.DeploymentInfo),
		},
	}
}

func NewMetaInfo(configmap *v1.ConfigMap) (*MetaInfo, error) {
	var buildInfo BuildInfo
	if val, ok := configmap.Data["buildInfo"]; ok {
		err := json.Unmarshal([]byte(val), &buildInfo)
		if err != nil {
			return nil, fmt.Errorf("failed to unmarshal buildInfo: %w", err)
		}
	}

	var reloaderOptions ReloaderOptions
	if val, ok := configmap.Data["reloaderOptions"]; ok {
		err := json.Unmarshal([]byte(val), &reloaderOptions)
		if err != nil {
			return nil, fmt.Errorf("failed to unmarshal reloaderOptions: %w", err)
		}
	}

	var deploymentInfo metav1.ObjectMeta
	if val, ok := configmap.Data["deploymentInfo"]; ok {
		err := json.Unmarshal([]byte(val), &deploymentInfo)
		if err != nil {
			return nil, fmt.Errorf("failed to unmarshal deploymentInfo: %w", err)
		}
	}

	return &MetaInfo{
		BuildInfo:       buildInfo,
		ReloaderOptions: reloaderOptions,
		DeploymentInfo:  deploymentInfo,
	}, nil
}

func toJson(data interface{}) string {
	jsonData, err := json.Marshal(data)
	if err != nil {
		return ""
	}
	return string(jsonData)
}

func ParseUTCTime(value string) time.Time {
	if value == "" {
		return time.Time{} // Return zero time if value is empty
	}
	t, err := time.Parse(time.RFC3339, value)
	if err != nil {
		return time.Time{} // Return zero time if parsing fails
	}
	return t
}
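
A brief usage sketch of the API above (illustrative, not part of the file): MetaInfo round-trips through the `reloader-meta-info` ConfigMap via `ToConfigMap` and `NewMetaInfo`.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/stakater/Reloader/pkg/common"
)

func main() {
	// Build the metadata, serialize it into the reloader-meta-info ConfigMap,
	// then parse it back. ReloaderOptions is left at its zero value here.
	info := &common.MetaInfo{
		BuildInfo:      *common.NewBuildInfo(),
		DeploymentInfo: metav1.ObjectMeta{Namespace: "reloader"},
	}
	cm := info.ToConfigMap()

	restored, err := common.NewMetaInfo(cm)
	if err != nil {
		panic(err)
	}
	fmt.Println(restored.BuildInfo.ReleaseVersion) // "dev" unless overridden at build time
}
```
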
@@ -1,4 +1,4 @@
-package util
+package common
 
 import "time"

@@ -11,6 +11,7 @@ import (
	"github.com/sirupsen/logrus"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	csiclient "sigs.k8s.io/secrets-store-csi-driver/pkg/client/clientset/versioned"
)

// Clients struct exposes interfaces for kubernetes as well as openshift if available
@@ -18,11 +19,14 @@ type Clients struct {
	KubernetesClient    kubernetes.Interface
	OpenshiftAppsClient appsclient.Interface
	ArgoRolloutClient   argorollout.Interface
	CSIClient           csiclient.Interface
}

var (
	// IsOpenshift is true if environment is Openshift, it is false if environment is Kubernetes
	IsOpenshift = isOpenshift()
	// IsCSIInstalled is true if environment has CSI provider installed, otherwise false
	IsCSIInstalled = isCSIInstalled()
)

// GetClients returns a `Clients` object containing both openshift and kubernetes clients with an openshift identifier
@@ -48,10 +52,20 @@ func GetClients() Clients {
		logrus.Warnf("Unable to create ArgoRollout client error = %v", err)
	}

	var csiClient *csiclient.Clientset

	if IsCSIInstalled {
		csiClient, err = GetCSIClient()
		if err != nil {
			logrus.Warnf("Unable to create CSI client error = %v", err)
		}
	}

	return Clients{
		KubernetesClient:    client,
		OpenshiftAppsClient: appsClient,
		ArgoRolloutClient:   rolloutClient,
		CSIClient:           csiClient,
	}
}

@@ -63,6 +77,28 @@ func GetArgoRolloutClient() (*argorollout.Clientset, error) {
	return argorollout.NewForConfig(config)
}

func isCSIInstalled() bool {
	client, err := GetKubernetesClient()
	if err != nil {
		logrus.Fatalf("Unable to create Kubernetes client error = %v", err)
	}
	_, err = client.RESTClient().Get().AbsPath("/apis/secrets-store.csi.x-k8s.io/v1").Do(context.TODO()).Raw()
	if err == nil {
		logrus.Info("CSI provider is installed")
		return true
	}
	logrus.Info("CSI provider is not installed")
	return false
}

func GetCSIClient() (*csiclient.Clientset, error) {
	config, err := getConfig()
	if err != nil {
		return nil, err
	}
	return csiclient.NewForConfig(config)
}

func isOpenshift() bool {
	client, err := GetKubernetesClient()
	if err != nil {

@@ -3,11 +3,13 @@ package kube
 import (
 	v1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/runtime"
 	csiv1 "sigs.k8s.io/secrets-store-csi-driver/apis/v1"
 )

 // ResourceMap are resources from where changes are going to be detected
 var ResourceMap = map[string]runtime.Object{
-	"configMaps": &v1.ConfigMap{},
-	"secrets":    &v1.Secret{},
-	"namespaces": &v1.Namespace{},
+	"configmaps":                      &v1.ConfigMap{},
+	"secrets":                         &v1.Secret{},
+	"namespaces":                      &v1.Namespace{},
+	"secretproviderclasspodstatuses": &csiv1.SecretProviderClassPodStatus{},
 }
@@ -1,6 +1,38 @@
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
-    "config:base"
+    "config:recommended"
  ],
  "labels": [
    "dependencies"
  ],
  "rebaseWhen": "never",
  "vulnerabilityAlerts": {
    "enabled": true,
    "labels": ["security"]
  },

  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": [
        ".vale.ini"
      ],
      "matchStrings": [
        "https:\/\/github\\.com\/(?<depName>.*)\/releases\/download\/(?<currentValue>.*)\/.*\\.zip"
      ],
      "datasourceTemplate": "github-releases"
    },
    {
      "customType": "regex",
      "description": "Update Helm Chart values file",
      "fileMatch": [
        "values\\.yaml$"
      ],
      "matchStrings": [
        "image:\\s*name: (?<depName>[a-zA-Z0-9\\.\\/]*)\\s*tag: (?<currentValue>[a-zA-Z0-9\\.\\/]*)"
      ],
      "datasourceTemplate": "docker"
    }
  ]
}
sonar-project.properties (new file, 8 lines)
@@ -0,0 +1,8 @@
sonar.projectKey=Reloader
sonar.sources=.
sonar.exclusions=**/*_test.go
sonar.language=go

sonar.tests=.
sonar.test.inclusions=**/*_test.go
sonar.analysisCache.enabled=false
test/loadtest/README.md (new file, 544 lines)
@@ -0,0 +1,544 @@
# Reloader Load Test Framework

This framework provides A/B comparison testing between two Reloader container images.

## Overview

The load test framework:
1. Creates a local kind cluster (1 control-plane + 6 worker nodes)
2. Deploys Prometheus for metrics collection
3. Loads the provided Reloader container images into the cluster
4. Runs standardized test scenarios (S1-S13)
5. Collects metrics via Prometheus scraping
6. Generates comparison reports with pass/fail criteria

## Prerequisites

- Docker or Podman
- kind (Kubernetes in Docker)
- kubectl
- Go 1.22+

## Building

```bash
cd test/loadtest
go build -o loadtest ./cmd/loadtest
```

## Quick Start

```bash
# Compare two published images (e.g., different versions)
./loadtest run \
  --old-image=stakater/reloader:v1.0.0 \
  --new-image=stakater/reloader:v1.1.0

# Run a specific scenario
./loadtest run \
  --old-image=stakater/reloader:v1.0.0 \
  --new-image=stakater/reloader:v1.1.0 \
  --scenario=S2 \
  --duration=120

# Test only a single image (no comparison)
./loadtest run --new-image=myregistry/reloader:dev

# Use local images built with docker/podman
./loadtest run \
  --old-image=localhost/reloader:baseline \
  --new-image=localhost/reloader:feature-branch

# Skip cluster creation (use existing kind cluster)
./loadtest run \
  --old-image=stakater/reloader:v1.0.0 \
  --new-image=stakater/reloader:v1.1.0 \
  --skip-cluster

# Run all scenarios in parallel on 4 clusters (faster execution)
./loadtest run \
  --new-image=localhost/reloader:dev \
  --parallelism=4

# Run all 13 scenarios in parallel (one cluster per scenario)
./loadtest run \
  --new-image=localhost/reloader:dev \
  --parallelism=13

# Generate report from existing results
./loadtest report --scenario=S2 --results-dir=./results
```

## Command Line Options

### Run Command

| Option | Description | Default |
|--------|-------------|---------|
| `--old-image=IMAGE` | Container image for "old" version | - |
| `--new-image=IMAGE` | Container image for "new" version | - |
| `--scenario=ID` | Test scenario: S1-S13 or "all" | all |
| `--duration=SECONDS` | Test duration in seconds | 60 |
| `--parallelism=N` | Run N scenarios in parallel on N kind clusters | 1 |
| `--skip-cluster` | Skip kind cluster creation (use existing, only for parallelism=1) | false |
| `--results-dir=DIR` | Directory for results | ./results |

**Note:** At least one of `--old-image` or `--new-image` is required. Provide both for A/B comparison.

### Report Command

| Option | Description | Default |
|--------|-------------|---------|
| `--scenario=ID` | Scenario to report on (required) | - |
| `--results-dir=DIR` | Directory containing results | ./results |
| `--output=FILE` | Output file (default: stdout) | - |

## Test Scenarios

| ID | Name | Description |
|-----|-----------------------|-------------------------------------------------|
| S1 | Burst Updates | Many ConfigMap/Secret updates in quick succession |
| S2 | Fan-Out | One ConfigMap used by many (50) workloads |
| S3 | High Cardinality | Many CMs/Secrets across many namespaces |
| S4 | No-Op Updates | Updates that don't change data (annotation only) |
| S5 | Workload Churn | Deployments created/deleted rapidly |
| S6 | Controller Restart | Restart controller pod under load |
| S7 | API Pressure | Many concurrent update requests |
| S8 | Large Objects | ConfigMaps > 100KB |
| S9 | Multi-Workload Types | Tests all workload types (Deploy, STS, DS) |
| S10 | Secrets + Mixed | Secrets and mixed ConfigMap+Secret workloads |
| S11 | Annotation Strategy | Tests `--reload-strategy=annotations` |
| S12 | Pause & Resume | Tests pause-period during rapid updates |
| S13 | Complex References | Init containers, valueFrom, projected volumes |
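
To make the scenario mechanics concrete, here is a minimal client-go sketch of what an S1-style burst boils down to: repeatedly patching a ConfigMap so each update changes its data. The namespace and ConfigMap name here are illustrative, not the framework's actual identifiers.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Burst: 100 back-to-back updates, each changing the ConfigMap's data,
	// so each one should be detected and eventually trigger a reload.
	for i := 0; i < 100; i++ {
		patch := []byte(fmt.Sprintf(`{"data":{"key":"value-%d"}}`, i))
		_, err := client.CoreV1().ConfigMaps("reloader-test").Patch(
			context.TODO(), "test-config", types.StrategicMergePatchType,
			patch, metav1.PatchOptions{})
		if err != nil {
			panic(err)
		}
	}
}
```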

## Metrics Reference

This section explains each metric collected during load tests, what it measures, and what different values might indicate.

### Counter Metrics (Totals)

#### `reconcile_total`
**What it measures:** The total number of reconciliation loops executed by the controller.

**What it indicates:**
- **Higher in new vs old:** The new controller-runtime implementation may batch events differently. This is often expected behavior, not a problem.
- **Lower in new vs old:** Better event batching/deduplication. Controller-runtime's work queue naturally deduplicates events (see the sketch below).
- **Expected behavior:** The new implementation typically has *fewer* reconciles due to intelligent event batching.
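
The deduplication the bullets above refer to can be seen with the standard client-go work queue. A tiny illustrative sketch (not the framework's code):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// A work queue coalesces identical keys added while still pending, which
	// is why several rapid events on one object can yield a single reconcile.
	q := workqueue.New()
	q.Add("default/my-configmap")
	q.Add("default/my-configmap") // duplicate while pending: coalesced

	fmt.Println(q.Len()) // 1
}
```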

#### `action_total`
**What it measures:** The total number of reload actions triggered (rolling restarts of Deployments/StatefulSets/DaemonSets).

**What it indicates:**
- **Should match expected value:** Both implementations should trigger the same number of reloads for the same workload.
- **Lower than expected:** Some updates were missed - potential bug or race condition.
- **Higher than expected:** Duplicate reloads triggered - inefficiency but not data loss.

#### `reload_executed_total`
**What it measures:** Reload operations executed, labeled by `success=true/false`.

**What it indicates:**
- **`success=true` count:** Number of workloads successfully restarted.
- **`success=false` count:** Failed restart attempts (API errors, permission issues).
- **Should match `action_total`:** If significantly lower, reloads are failing.

#### `workloads_scanned_total`
**What it measures:** Number of workloads (Deployments, etc.) scanned when checking for ConfigMap/Secret references.

**What it indicates:**
- **High count:** Controller is scanning many workloads per reconcile.
- **Expected behavior:** Should roughly match the number of workloads × number of reconciles.
- **Optimization signal:** If very high, namespace filtering or label selectors could help.

#### `workloads_matched_total`
**What it measures:** Number of workloads that matched (reference the changed ConfigMap/Secret).

**What it indicates:**
- **Should match `reload_executed_total`:** Every matched workload should be reloaded.
- **Higher than reloads:** Some matched workloads weren't reloaded (potential issue).

#### `errors_total`
**What it measures:** Total errors encountered, labeled by error type.

**What it indicates:**
- **Should be 0:** Any errors indicate problems.
- **Common causes:** API server timeouts, RBAC issues, resource conflicts.
- **Critical metric:** Non-zero errors in production should be investigated.
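
These counters are read back from Prometheus after each run. Below is a minimal sketch of querying one of them over the test window via the Prometheus HTTP API, assuming the bundled Prometheus is reachable on `localhost:9091` (the port referenced under Troubleshooting). `increase()` is used so a controller restart mid-test does not reset the total.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// increase() over the test window tolerates counter resets on pod restart.
	q := url.QueryEscape(`increase(reload_executed_total{success="true"}[5m])`)
	resp, err := http.Get("http://localhost:9091/api/v1/query?query=" + q)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode only the fields we need from the standard query response shape.
	var result struct {
		Data struct {
			Result []struct {
				Value [2]interface{} `json:"value"` // [timestamp, "value"]
			} `json:"result"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	for _, r := range result.Data.Result {
		fmt.Println("reloads executed:", r.Value[1])
	}
}
```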

### API Efficiency Metrics (REST Client)

These metrics track Kubernetes API server calls made by Reloader. Lower values indicate more efficient operation with less API server load.

#### `rest_client_requests_total`
**What it measures:** Total number of HTTP requests made to the Kubernetes API server.

**What it indicates:**
- **Lower is better:** Fewer API calls means less load on the API server.
- **High count:** May indicate inefficient caching or excessive reconciles.
- **Comparison use:** Shows overall API efficiency between implementations.

#### `rest_client_requests_get`
**What it measures:** Number of GET requests (fetching individual resources or listings).

**What it indicates:**
- **Includes:** Fetching ConfigMaps, Secrets, Deployments, etc.
- **Higher count:** More frequent resource fetching, possibly due to cache misses.
- **Expected behavior:** Controller-runtime's caching should reduce GET requests compared to direct API calls.

#### `rest_client_requests_patch`
**What it measures:** Number of PATCH requests (partial updates to resources).

**What it indicates:**
- **Used for:** Rolling restart annotations on workloads.
- **Should correlate with:** `reload_executed_total` - each reload typically requires one PATCH.
- **Lower is better:** Fewer patches means more efficient batching or deduplication.

#### `rest_client_requests_put`
**What it measures:** Number of PUT requests (full resource updates).

**What it indicates:**
- **Used for:** Full object replacements (less common than PATCH).
- **Should be low:** Most updates use PATCH for efficiency.
- **High count:** May indicate suboptimal update strategy.

#### `rest_client_requests_errors`
**What it measures:** Number of failed API requests (4xx/5xx responses).

**What it indicates:**
- **Should be 0:** Errors indicate API server issues or permission problems.
- **Common causes:** Rate limiting, RBAC issues, resource conflicts, network issues.
- **Non-zero:** Investigate API server logs and Reloader permissions.

### Latency Metrics (Percentiles)

All latency metrics are reported in **seconds**. The report shows p50 (median), p95, and p99 percentiles.

#### `reconcile_duration (s)`
**What it measures:** Time spent inside each reconcile loop, from start to finish.

**What it indicates:**
- **p50 (median):** Typical reconcile time. Should be < 100ms for good performance.
- **p95:** 95th percentile - only 5% of reconciles take longer than this.
- **p99:** 99th percentile - indicates worst-case performance.

**Interpreting differences:**
- **New higher than old:** Controller-runtime reconciles may do more work per loop but run fewer times. Check `reconcile_total` - if it's lower, this is expected.
- **Minor differences (< 0.5s absolute):** Not significant for sub-second values.

#### `action_latency (s)`
**What it measures:** End-to-end time from ConfigMap/Secret change detection to workload restart triggered.

**What it indicates:**
- **This is the user-facing latency:** How long users wait for their config changes to take effect.
- **p50 < 1s:** Excellent - most changes apply within a second.
- **p95 < 5s:** Good - even under load, changes apply quickly.
- **p99 > 10s:** May need investigation - some changes take too long.

**What affects this:**
- API server responsiveness
- Number of workloads to scan
- Concurrent updates competing for resources

### Understanding the Report

#### Report Columns

```
Metric                   Old     New     Expected  Old✓  New✓  Status
------                   ---     ---     --------  ----  ----  ------
action_total             100.00  100.00  100       ✓     ✓     pass
action_latency_p95 (s)   0.15    0.04    -         -     -     pass
```

- **Old/New:** Measured values from each implementation
- **Expected:** Known expected value (for throughput metrics)
- **Old✓/New✓:** Whether the value is within 15% of expected (✓ = yes, ✗ = no, - = no expected value)
- **Status:** pass/fail based on comparison thresholds

#### Pass/Fail Logic

| Metric Type | Pass Condition |
|-------------|----------------|
| Throughput (action_total, reload_executed_total) | New value within 15% of expected |
| Latency (p50, p95, p99) | New not more than threshold% worse than old, OR absolute difference < minimum threshold |
| Errors | New ≤ Old (ideally both 0) |
| API Efficiency (rest_client_requests_*) | New ≤ Old (lower is better), or New not more than 50% higher |

#### Latency Thresholds

Latency comparisons use both percentage AND absolute thresholds to avoid false failures:

| Metric | Max % Worse | Min Absolute Diff |
|--------|-------------|-------------------|
| p50 | 100% | 0.5s |
| p95 | 100% | 1.0s |
| p99 | 100% | 1.0s |

**Example:** If old p50 = 0.01s and new p50 = 0.08s:
- Percentage difference: +700% (would fail % check)
- Absolute difference: 0.07s (< 0.5s threshold)
- **Result: PASS** (both values are fast enough that the difference doesn't matter)
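
In code, the rule above amounts to something like the following sketch (illustrative, not the framework's actual implementation):

```go
package main

import "fmt"

// latencyPass applies the dual-threshold rule: a regression passes if the
// absolute difference is too small to matter, or if the percentage increase
// stays within the allowed bound. All values are in seconds.
func latencyPass(oldVal, newVal, maxWorsePct, minAbsDiff float64) bool {
	if newVal <= oldVal {
		return true // equal or better always passes
	}
	if newVal-oldVal < minAbsDiff {
		return true // below the minimum meaningful difference
	}
	if oldVal == 0 {
		return false // any meaningful regression from zero fails the % check
	}
	return (newVal-oldVal)/oldVal*100 <= maxWorsePct
}

func main() {
	// The p50 example above: +700% but only 0.07s slower, so it passes.
	fmt.Println(latencyPass(0.01, 0.08, 100, 0.5)) // true
}
```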

### Resource Consumption Metrics

These metrics track CPU, memory, and Go runtime resource usage. Lower values generally indicate more efficient operation.

#### Memory Metrics

| Metric | Description | Unit |
|--------|-------------|------|
| `memory_rss_mb_avg` | Average RSS (resident set size) memory | MB |
| `memory_rss_mb_max` | Peak RSS memory during test | MB |
| `memory_heap_mb_avg` | Average Go heap allocation | MB |
| `memory_heap_mb_max` | Peak Go heap allocation | MB |

**What to watch for:**
- **High RSS:** May indicate memory leaks or inefficient caching
- **High heap:** Many objects being created (check GC metrics)
- **Growing over time:** Potential memory leak

#### CPU Metrics

| Metric | Description | Unit |
|--------|-------------|------|
| `cpu_cores_avg` | Average CPU usage rate | cores |
| `cpu_cores_max` | Peak CPU usage rate | cores |

**What to watch for:**
- **High CPU:** Inefficient algorithms or excessive reconciles
- **Spiky max:** May indicate burst handling issues

#### Go Runtime Metrics

| Metric | Description | Unit |
|--------|-------------|------|
| `goroutines_avg` | Average goroutine count | count |
| `goroutines_max` | Peak goroutine count | count |
| `gc_pause_p99_ms` | 99th percentile GC pause time | ms |

**What to watch for:**
- **High goroutines:** Potential goroutine leak or unbounded concurrency
- **High GC pause:** Large heap or allocation pressure

### Scenario-Specific Expectations

| Scenario | Key Metrics to Watch | Expected Behavior |
|----------|---------------------|-------------------|
| S1 (Burst) | action_latency_p99, cpu_cores_max, goroutines_max | Should handle bursts without queue backup |
| S2 (Fan-Out) | reconcile_total, workloads_matched, memory_rss_mb_max | One CM change → 50 workload reloads |
| S3 (High Cardinality) | reconcile_duration, memory_heap_mb_avg | Many namespaces shouldn't increase memory |
| S4 (No-Op) | action_total = 0, cpu_cores_avg should be low | Minimal resource usage for no-op |
| S5 (Churn) | errors_total, goroutines_avg | Graceful handling, no goroutine leak |
| S6 (Restart) | All metrics captured | Metrics survive controller restart |
| S7 (API Pressure) | errors_total, cpu_cores_max, goroutines_max | No errors under concurrent load |
| S8 (Large Objects) | memory_rss_mb_max, gc_pause_p99_ms | Large ConfigMaps don't cause OOM or GC issues |
| S9 (Multi-Workload) | reload_executed_total per type | All workload types (Deploy, STS, DS) reload |
| S10 (Secrets) | reload_executed_total, workloads_matched | Both Secrets and ConfigMaps trigger reloads |
| S11 (Annotation) | workload annotations present | Deployments get `last-reloaded-from` annotation |
| S12 (Pause) | reload_executed_total << updates | Pause-period reduces reload frequency |
| S13 (Complex) | reload_executed_total | All reference types trigger reloads |

### Troubleshooting

#### New implementation shows 0 for all metrics
- Check if Prometheus is scraping the new Reloader pod
- Verify pod annotations: `prometheus.io/scrape: "true"`
- Check Prometheus targets: `http://localhost:9091/targets`

#### Metrics don't match expected values
- Verify test ran to completion (check logs)
- Ensure Prometheus scraped final metrics (18s wait after test)
- Check for pod restarts during test (metrics reset on restart - handled by `increase()`)

#### High latency in new implementation
- Check Reloader pod resource limits
- Look for API server throttling in logs
- Compare `reconcile_total` - fewer reconciles with higher duration may be normal

#### REST client errors are non-zero
- **Common causes:**
  - Optional CRD schemes registered but CRDs not installed (e.g., Argo Rollouts, OpenShift DeploymentConfig)
  - API server rate limiting under high load
  - RBAC permissions missing for certain resource types
- **Argo Rollouts errors:** If you see ~4 errors per test, ensure `--enable-argo-rollouts=false` if not using Argo Rollouts
- **OpenShift errors:** Similarly, ensure DeploymentConfig support is disabled on non-OpenShift clusters

#### REST client requests much higher in new implementation
- Check if caching is working correctly
- Look for excessive re-queuing in controller logs
- Compare `reconcile_total` - more reconciles naturally means more API calls

## Report Format

The report generator produces a comparison table with units and expected value indicators:

```
================================================================================
RELOADER A/B COMPARISON REPORT
================================================================================

Scenario: S2
Generated: 2026-01-03 14:30:00
Status: PASS
Summary: All metrics within acceptable thresholds

Test: S2: Fan-out test - 1 CM update triggers 50 deployment reloads

--------------------------------------------------------------------------------
METRIC COMPARISONS
--------------------------------------------------------------------------------
(Old✓/New✓ = meets expected value within 15%)

Metric                        Old     New     Expected  Old✓  New✓  Status
------                        ---     ---     --------  ----  ----  ------
reconcile_total               50.00   25.00   -         -     -     pass
reconcile_duration_p50 (s)    0.01    0.05    -         -     -     pass
reconcile_duration_p95 (s)    0.02    0.15    -         -     -     pass
action_total                  50.00   50.00   50        ✓     ✓     pass
action_latency_p50 (s)        0.05    0.03    -         -     -     pass
action_latency_p95 (s)        0.12    0.08    -         -     -     pass
errors_total                  0.00    0.00    -         -     -     pass
reload_executed_total         50.00   50.00   50        ✓     ✓     pass
workloads_scanned_total       50.00   50.00   50        ✓     ✓     pass
workloads_matched_total       50.00   50.00   50        ✓     ✓     pass
rest_client_requests_total    850     720     -         -     -     pass
rest_client_requests_get      500     420     -         -     -     pass
rest_client_requests_patch    300     250     -         -     -     pass
rest_client_requests_errors   0       0       -         -     -     pass
```

Reports are saved to `results/<scenario>/report.txt` after each test.

## Directory Structure

```
test/loadtest/
├── cmd/
│   └── loadtest/             # Unified CLI (run + report)
│       └── main.go
├── internal/
│   ├── cluster/              # Kind cluster management
│   │   └── kind.go
│   ├── prometheus/           # Prometheus deployment & querying
│   │   └── prometheus.go
│   ├── reloader/             # Reloader deployment
│   │   └── deploy.go
│   └── scenarios/            # Test scenario implementations
│       └── scenarios.go
├── manifests/
│   └── prometheus.yaml       # Prometheus deployment manifest
├── results/                  # Generated after tests
│   └── <scenario>/
│       ├── old/              # Old version data
│       │   ├── *.json        # Prometheus metric snapshots
│       │   └── reloader.log  # Reloader pod logs
│       ├── new/              # New version data
│       │   ├── *.json        # Prometheus metric snapshots
│       │   └── reloader.log  # Reloader pod logs
│       ├── expected.json     # Expected values from test
│       └── report.txt        # Comparison report
├── go.mod
├── go.sum
└── README.md
```

## Building Local Images for Testing

If you want to test local code changes:

```bash
# Build the new Reloader image from current source
docker build -t localhost/reloader:dev -f Dockerfile .

# Build from a different branch/commit
git checkout feature-branch
docker build -t localhost/reloader:feature -f Dockerfile .

# Then run comparison
./loadtest run \
  --old-image=stakater/reloader:v1.0.0 \
  --new-image=localhost/reloader:feature
```

## Interpreting Results

### PASS
All metrics are within acceptable thresholds. The new implementation is comparable or better than the old one.

### FAIL
One or more metrics exceeded thresholds. Review the specific metrics:
- **Latency degradation**: p95/p99 latencies are significantly higher
- **Missed reloads**: `reload_executed_total` differs significantly
- **Errors increased**: `errors_total` is higher in new version

### Investigation

If tests fail, check:
1. Pod logs: `kubectl logs -n reloader-new deployment/reloader` (or check `results/<scenario>/new/reloader.log`)
2. Resource usage: `kubectl top pods -n reloader-new`
3. Events: `kubectl get events -n reloader-test`

## Parallel Execution

The `--parallelism` option enables running scenarios on multiple kind clusters simultaneously, significantly reducing total test time.

### How It Works

1. **Multiple Clusters**: Creates N kind clusters named `reloader-loadtest-0`, `reloader-loadtest-1`, etc.
2. **Separate Prometheus**: Each cluster gets its own Prometheus instance with a unique port (9091, 9092, etc.)
3. **Worker Pool**: Scenarios are distributed to workers via a channel, with each worker running on its own cluster (see the sketch after this list)
4. **Independent Execution**: Each scenario runs in complete isolation with no resource contention
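
A minimal sketch of that worker-pool pattern (illustrative; `runScenario` stands in for the real per-cluster test execution):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	scenarios := []string{"S1", "S2", "S3", "S4", "S5", "S6", "S7",
		"S8", "S9", "S10", "S11", "S12", "S13"}
	parallelism := 4

	// One worker per kind cluster; all workers pull scenario IDs from a
	// shared channel until it drains.
	work := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < parallelism; i++ {
		wg.Add(1)
		go func(cluster string) {
			defer wg.Done()
			for id := range work {
				runScenario(cluster, id)
			}
		}(fmt.Sprintf("reloader-loadtest-%d", i))
	}

	for _, id := range scenarios {
		work <- id
	}
	close(work)
	wg.Wait()
}

func runScenario(cluster, id string) {
	fmt.Printf("running %s on %s\n", id, cluster)
}
```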

### Usage

```bash
# Run 4 scenarios at a time (creates 4 clusters)
./loadtest run --new-image=my-image:tag --parallelism=4

# Run all 13 scenarios in parallel (creates 13 clusters)
./loadtest run --new-image=my-image:tag --parallelism=13 --scenario=all
```

### Resource Requirements

Parallel execution requires significant system resources:

| Parallelism | Clusters | Est. Memory | Est. CPU |
|-------------|----------|-------------|----------|
| 1 (default) | 1 | ~4GB | 2-4 cores |
| 4 | 4 | ~16GB | 8-16 cores |
| 13 | 13 | ~52GB | 26-52 cores |

### Notes

- The `--skip-cluster` option is not supported with parallelism > 1
- Each worker loads images independently, so initial setup takes longer
- All results are written to the same `--results-dir` with per-scenario subdirectories
- If a cluster setup fails, remaining workers continue with available clusters
- Parallelism automatically reduces to match scenario count if set higher

## CI Integration

### GitHub Actions

Load tests can be triggered on pull requests by commenting `/loadtest`:

```
/loadtest
```

This will:
1. Build a container image from the PR branch
2. Run all load test scenarios against it
3. Post results as a PR comment
4. Upload detailed results as artifacts

### Make Target

Run load tests locally or in CI:

```bash
# From repository root
make loadtest
```

This builds the container image and runs all scenarios with a 60-second duration.
test/loadtest/cmd/loadtest/main.go (new file, 7 lines)
@@ -0,0 +1,7 @@
package main

import "github.com/stakater/Reloader/test/loadtest/internal/cmd"

func main() {
	cmd.Execute()
}
Some files were not shown because too many files have changed in this diff.