Compare commits


65 Commits

Author SHA1 Message Date
github-actions[bot]
95d04c370d Fix: vela top cannot switch the theme (#5756)
Signed-off-by: howieyuen <howieyuen@outlook.com>
(cherry picked from commit cb1c33bed1)

Co-authored-by: howieyuen <howieyuen@outlook.com>
2023-03-28 15:38:55 +08:00
github-actions[bot]
a583f66b0d [Backport release-1.7] Fix: gateway message is wrong (#5751)
Co-authored-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
2023-03-28 09:30:29 +08:00
github-actions[bot]
7ae1aff648 [Backport release-1.7] Fix: fix vela-minimal helm chart unrecognised options (#5729)
* fix(vela-minimal): fix unrecognised options

Signed-off-by: florent.madiot.e <florent.madiot.e@thalesdigital.io>
(cherry picked from commit c9394a18a6)

* Fix(vela-minimal): make reviewable

Signed-off-by: florent.madiot.e <florent.madiot.e@thalesdigital.io>
(cherry picked from commit f4f7f96cc8)

---------

Co-authored-by: florent.madiot.e <florent.madiot.e@thalesdigital.io>
2023-03-24 11:29:34 +08:00
github-actions[bot]
0ff40d75e5 Fix: use logs to show errs instead of return in adopt all (#5712)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit f0e3c33be2)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-03-21 13:40:54 +08:00
github-actions[bot]
3d410fed5f Fix: system crd validation hook should not always use the default vela system (#5711)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit b4b0f99f41)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-03-21 10:18:28 +08:00
Tianxin Dong
f8285df49d Feat: add adopt all command (#5690) (#5696) (#5697)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-03-20 13:40:04 +08:00
Tianxin Dong
e4cd1ffd1d Feat: add adopt all command (#5690) (#5696)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-03-17 15:51:39 +08:00
Tianxin Dong
592f8b8e8f Fix: stores workflow status in revision if it is restarted (#5604) (#5673)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-03-14 22:14:41 +08:00
github-actions[bot]
94c215c361 [Backport release-1.7] Fix: add addon registry (#5660)
* Fix: add addon registry

Signed-off-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
(cherry picked from commit 74d3ad2c88)

* Fix: modify edit errors

Signed-off-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
(cherry picked from commit 6ae828cb04)

---------

Co-authored-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
2023-03-13 14:15:13 +08:00
github-actions[bot]
1723a1795d Fix: make read-only object not found error more clear (#5619)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 7bc97f812b)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-03-06 13:20:20 +08:00
github-actions[bot]
0861498ab8 Fix: replication example componentdefinition miss workload field for webhook validation (#5618)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit c8709a2c13)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-03-06 13:18:49 +08:00
github-actions[bot]
52c9a8f0a3 [Backport release-1.7] Fix: length of name should be less than 32 (#5599)
* Fix: length of name should be less than 32

Signed-off-by: caiqi <caiqi_yewu@cmss.chinamobile.com>
(cherry picked from commit c06347c923)

* Fix: length of name should be less than 32

Signed-off-by: caiqi <caiqi_yewu@cmss.chinamobile.com>
(cherry picked from commit b18bca6f92)

---------

Co-authored-by: caiqi <caiqi_yewu@cmss.chinamobile.com>
2023-03-03 16:56:38 +08:00
github-actions[bot]
c7f337fd35 [Backport release-1.7] Feat: update version of terraform-controller to v0.7.10. (#5591)
Co-authored-by: raradhakrishnan <raradhakrishnan@guidewire.com>
2023-03-01 11:05:22 +08:00
github-actions[bot]
9dca76e0de [Backport release-1.7] Fix: the array type cannot be converted to interface type (#5588)
* Fix: the array type cannot be converted to interface type

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 16f70d7335)

* Fix: the array type cannot be converted to interface type

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 0d0f51e3c5)

* Fix: the array type cannot be converted to interface type

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 07a52595d3)

* Fix: the array type cannot be converted to interface type

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 19175578dc)

---------

Co-authored-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
2023-03-01 09:49:19 +08:00
github-actions[bot]
348f540a82 Fix: The resource topology fails to display the pods under the job (#5566) (#5573)
Signed-off-by: hanzhaoyang <hanzhaoyang@jd.com>
(cherry picked from commit c0327918fb)

Co-authored-by: hanzhaoyang <hanzhaoyang@jd.com>
2023-02-27 16:29:38 +08:00
github-actions[bot]
1c2df10299 Fix: swagger DateType (#5570)
Signed-off-by: yueyongyue <yueyongyue@sina.cn>
(cherry picked from commit a4a4d64729)

Co-authored-by: yueyongyue <yueyongyue@sina.cn>
2023-02-27 09:53:52 +08:00
github-actions[bot]
918ed9727b [Backport release-1.7] Fix: delete the secret of the cluster (#5517)
* Fix: delete the secret of the cluster

Signed-off-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
(cherry picked from commit ec466ab67e)

* Fix: add test

Signed-off-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
(cherry picked from commit 63c44fdc39)

* Fix: modify error

Signed-off-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
(cherry picked from commit e580597367)

* Fix: solve check-diff

Signed-off-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
(cherry picked from commit c2316e10e8)

* Fix: modify test

Signed-off-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
(cherry picked from commit afca7e6027)

---------

Co-authored-by: suwanliang_yewu <suwanliang_yewu@cmss.chinamobile.com>
2023-02-16 14:04:41 +08:00
github-actions[bot]
fa78cb7632 [Backport release-1.7] Fix: removes default parameter name for terraform provider (#5516)
* removes default name for terraform provider

Signed-off-by: afzalbin64 <afzal442@gmail.com>
(cherry picked from commit 92e7bf8263)

* fixes minor typos

Signed-off-by: afzalbin64 <afzal442@gmail.com>

undo the changes to typo

fixes minor typos in go_test files

updates config_test to support name as required param

(cherry picked from commit 51c2650a95)

---------

Co-authored-by: afzalbin64 <afzal442@gmail.com>
2023-02-16 14:02:10 +08:00
github-actions[bot]
f3cdbcf203 [Backport release-1.7] Feat: The vela-apiserver supports displaying chart values stored in the OCI registry (#5509)
* support helm chart values

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

rebase

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

no lint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix lint error

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

add test and deprecated API

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix url bug

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix tests panic

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix tests

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit fc1d8c248d)

* fix golint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 6c462b7850)

* return values.yaml

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 70d8cc5d52)

* fix test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit ef4574a188)

* fix return values

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit c1488eee77)

* add multiple values yaml in

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 04436188ff)

* add old interface back

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit ae0ac9f50f)

* fix golint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 00957744c0)

---------

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2023-02-15 14:00:16 +08:00
github-actions[bot]
7bd2cf4dbc [Backport release-1.7] Fix: read-only definition in cue spec (#5508)
* Fix: ref-object and take-over definition in cue spec

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit 23723af9af)

* rollback ref-objects

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit 1af2c3934f)

---------

Co-authored-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
2023-02-15 13:59:48 +08:00
github-actions[bot]
969babdd9e Fix: apply-terraform-provider and container-image definition (#5486)
Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit 978feccf91)

Co-authored-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
2023-02-13 11:21:52 +08:00
github-actions[bot]
94c46a179b Fix: use correct helm value when setting apprev compression (#5478)
Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit 17c36206bc)

Co-authored-by: Charlie Chiang <charlie_c_0129@outlook.com>
2023-02-10 19:18:14 +08:00
Tianxin Dong
bec288c6b4 Chore: update cue version to v0.5.0-beta.5 to fix certain bugs in the old version (#5475)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-02-10 19:17:51 +08:00
github-actions[bot]
9a44be9788 Fix: pod view invalid cue for special pod output (#5474)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit ece0396425)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-02-10 13:07:21 +08:00
github-actions[bot]
9c7f4b7e03 Feat: enhance expose trait and adopt (#5473)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit a05e4c8643)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-02-10 11:30:20 +08:00
github-actions[bot]
539f1ed02b Fix: expose trait load balance cue check bug (#5461)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 9dccfcea1e)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-02-08 23:25:11 +08:00
github-actions[bot]
9e95122387 Chore: fix krew release with new binary format for .exe (#5462)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit a4ea6ccb04)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2023-02-08 23:24:37 +08:00
github-actions[bot]
f49f11dd72 Fix: suppress klog logs output for vela top (#5452)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 55662b645a)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-02-08 16:26:44 +08:00
github-actions[bot]
191d9038f1 [Backport release-1.7] Fix: use get before create or update (#5450)
* Fix: use get before create or update

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 93c1881663)

* Fix: ignore resource not found error when manage privileges for target

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit dd4864c6d1)

---------

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-02-08 14:50:38 +08:00
github-actions[bot]
e10e43b6c8 [Backport release-1.7] Fix: simplify notification parameters (#5446)
* Fix: simplify notification parameters

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit a9a8a18529)

* remove close

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit 123c3f51b2)

* format

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit e2eee58a4b)

---------

Co-authored-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
2023-02-08 14:05:13 +08:00
github-actions[bot]
b9cc523267 [Backport release-1.7] Fix: Add confirmation prompt for vela adopt --apply with existing app name (#5419)
* Add confirmation prompt for vela adopt --apply with existing app name

Signed-off-by: Karanjot Singh <drquark@duck.com>
(cherry picked from commit 5278e1b350)

* Added changes according to the review

Signed-off-by: Karanjot Singh <drquark@duck.com>
(cherry picked from commit 0a437e336f)

* Fixed Userinput and used loadremoteApplication

Signed-off-by: Karanjot Singh <drquark@duck.com>

minor fixes

Signed-off-by: Karanjot Singh <drquark@duck.com>

used loadRemoteApplication

Signed-off-by: Karanjot Singh <drquark@duck.com>

Minor Fix

Signed-off-by: Karanjot Singh <drquark@duck.com>

Minor Fix

Signed-off-by: Karanjot Singh <drquark@duck.com>

Minor Fix

Signed-off-by: Karanjot Singh <drquark@duck.com>

Minor Fix

Signed-off-by: Karanjot Singh <drquark@duck.com>

Minor Fix

Signed-off-by: Karanjot Singh <drquark@duck.com>

Minor fix

Signed-off-by: Karanjot Singh <drquark@duck.com>

Minor fix

Signed-off-by: Karanjot Singh <drquark@duck.com>
(cherry picked from commit 4e92eaf73e)

* Used f.Client().Get method

Signed-off-by: Karanjot Singh <drquark@duck.com>

minor fix

Signed-off-by: Karanjot Singh <drquark@duck.com>

minor fix

Signed-off-by: Karanjot Singh <drquark@duck.com>
(cherry picked from commit 96a2ae8fb7)

* Changed bool to False

Signed-off-by: Karanjot Singh <drquark@duck.com>
(cherry picked from commit 17a1131b90)

---------

Co-authored-by: Karanjot Singh <drquark@duck.com>
2023-02-03 18:03:42 +08:00
github-actions[bot]
7f1743ef58 [Backport release-1.7] Fix: sync project from app crd to velaux (#5410)
* Fix: sync project from app crd to velaux

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 3c613d7358)

* Fix: sync project from app crd to velaux

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 2492e3725c)

* Fix: sync project from app crd to velaux

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit db3c7ea0a5)

* Fix: sync project from app crd to velaux

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 1a364e0737)

* Fix: sync project from app crd to velaux

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 363f56ac2a)

* Fix: sync project from app crd to velaux

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 279517c142)

---------

Co-authored-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
2023-02-02 15:31:25 +08:00
github-actions[bot]
dd39c38cf1 fix bugs of specified addonName (#5407)
Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 3a9df79b3a)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2023-02-01 16:44:30 +08:00
github-actions[bot]
0851454c6f Fix: ignore validation webhook for ref-objects typed component (#5406)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 0fb1ab497b)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-02-01 16:10:16 +08:00
github-actions[bot]
cd3577db53 Fix: skip last-applied-configuration error for threewaymergepatch (#5405)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 6bf79461d4)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-02-01 16:00:44 +08:00
github-actions[bot]
d578adfe6e Fix: longer releaser timeout (#5400)
Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit 4ca1acdc22)

Co-authored-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
2023-02-01 14:01:07 +08:00
github-actions[bot]
4b88cd201e [Backport release-1.7] Fix: replace homemade release script with goreleaser (#5398)
* Replace homemade release script with goreleaser

refactor release.yml

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit 00ee1513a1)

* wrap files in a directory

Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
(cherry picked from commit ba4ed6ad0f)

---------

Co-authored-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
2023-01-31 20:55:06 +08:00
github-actions[bot]
492e7d7f0d [Backport release-1.7] Fix: use the namespace specified in the resource if -n is not s… (#5396)
* fix #5368, use the namespace specified in the resource if -n is not specified

Signed-off-by: Basuotian <basuoluomiu@gmail.com>
(cherry picked from commit 47c9b4457b)

* add default namespace for the case missing namespace in resourceRef

Signed-off-by: Basuotian <basuoluomiu@gmail.com>
(cherry picked from commit 3ae0657796)

* add test case

Signed-off-by: Basuotian <basuoluomiu@gmail.com>
(cherry picked from commit 4ba5eb88eb)

---------

Co-authored-by: Basuotian <basuoluomiu@gmail.com>
2023-01-31 20:16:18 +08:00
Jianbo Sun
0742ca9ee5 Fix: align config create to be managed by apps with Dispatch function (#5384) (#5394)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2023-01-31 18:04:36 +08:00
github-actions[bot]
1cf4ae3fc3 [Backport release-1.7] Feat: add the updating the application trigger API (#5395)
* Feat: add the updating the application trigger API

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 05bc42bc50)

* Fix: change the test case

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit d6f3e0e90b)

* Fix: imported more than once

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 37f7af0beb)

---------

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2023-01-31 17:14:03 +08:00
qiaozp
444d315143 Fix: rework on apiserver e2e test coverage (#5390)
Signed-off-by: Qiaozp <qiaozhongpei.qzp@alibaba-inc.com>
2023-01-31 15:04:24 +08:00
github-actions[bot]
2e3a89f6b1 Chore: update workflow version to fix panic (#5386)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 4ae1fe044e)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-31 10:57:40 +08:00
github-actions[bot]
d94293ac59 Fix: failed to create the record when rollbacking the application (#5381)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 57d85ecb7c)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2023-01-30 20:51:04 +08:00
github-actions[bot]
cc604cfadb Feat: upgrade cluster gateway to 1.7.0 (#5356)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 3245ea148c)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-01-19 11:21:38 +08:00
github-actions[bot]
8ef2513b2e fix: fix --cluster when addon enable (#5343)
Signed-off-by: zhaowei.wang <zhaowei.wang@metabit-trading.com>
(cherry picked from commit 021ca69cfd)

Co-authored-by: zhaowei.wang <zhaowei.wang@metabit-trading.com>
2023-01-13 17:09:26 +08:00
github-actions[bot]
657e3b1bde Fix: optimize skip reconcile and expose error if the traits patch an invalid workload like terraform (#5342)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 3730291eff)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-13 17:08:52 +08:00
github-actions[bot]
004e2a814d Feat: upgrade the workflow version to v0.4.0 (#5337)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit e0b8c9e4c9)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-13 16:57:03 +08:00
github-actions[bot]
6dcba5ef11 [Backport release-1.7] Fix: conflict while using gc policy and shared-resource policy concurrently (#5333)
* Fix: conflict while using gc policy and shared-resource policy concurrently

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit f8239be21e)

* Fix: github ci

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 72e54e5f90)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-01-13 15:55:05 +08:00
github-actions[bot]
fd39804dc9 Fix: maintain compatibility with old project data (#5331)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit fef55b9b1b)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2023-01-13 13:59:35 +08:00
github-actions[bot]
ce57fbb752 Feat: need one Trait to set Rollout strategy of Workload (#5327)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
(cherry picked from commit 5d00b2ac73)

Co-authored-by: StevenLeiZhang <zhangleiic@163.com>
2023-01-12 17:19:17 +08:00
github-actions[bot]
80b2b2c2d3 velaql support indexing into exported array field (#5323)
Signed-off-by: hnd4r7 <307365651@qq.com>
(cherry picked from commit 441d8f0a66)

Co-authored-by: hnd4r7 <307365651@qq.com>
2023-01-12 10:13:57 +08:00
github-actions[bot]
49d97e5c2b Fix typo in the long cli description of vela system command (#5322)
Signed-off-by: Girish Ramnani <girishramnani95@gmail.com>
(cherry picked from commit 273b91c41f)

Co-authored-by: Girish <girishramnani95@gmail.com>
2023-01-12 10:10:15 +08:00
github-actions[bot]
1ee5915546 Fix: the developer user can't load the definition (#5319)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 78c9a1a370)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2023-01-11 18:28:36 +08:00
github-actions[bot]
3f6b38cc7f small optimize for addon (#5318)
Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

name the component

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 3a020041d9)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2023-01-11 16:33:16 +08:00
Jianbo Sun
c2c7ab91f9 Feat: vela dry-run render results should be affected by override policy and deploy workflowstep (#4815) (#5314)
* [Feature] vela dry-run render results should be affected by override policy and deploy workflowstep

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

* multiple input files support; policy,workflow support; new flag: merge orphan policy or workflow

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

* add more tests

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

* fix comment issues

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

* add tests

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

* fix e2e

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

* fix tests

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

Signed-off-by: cezhang <c1zhang.dev@gmail.com>
Co-authored-by: cezhang <c1zhang.dev@gmail.com>
2023-01-11 15:39:45 +08:00
github-actions[bot]
866ffb9689 Chore: vela delete doc (#5315)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit ce79ab1691)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2023-01-11 15:08:08 +08:00
github-actions[bot]
61afb366ee Fix: fix vela debug cli to find id for step (#5313)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 2f77353cbe)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2023-01-11 14:57:02 +08:00
github-actions[bot]
a2a8d73a58 Fix: don't return err if subresource type is not found when listing application resources (#5312)
Signed-off-by: hnd4r7 <307365651@qq.com>
(cherry picked from commit 58a150d1d4)

Co-authored-by: hnd4r7 <307365651@qq.com>
2023-01-11 14:56:22 +08:00
github-actions[bot]
fc3b428788 fix: errorMsg when uninstall vela (#5310)
Signed-off-by: bitliu <bitliu@tencent.com>
(cherry picked from commit 771e0b5429)

Co-authored-by: bitliu <bitliu@tencent.com>
2023-01-11 14:49:14 +08:00
github-actions[bot]
500dc52b34 [Backport release-1.7] Fix: create a config with the same name reported an incorrect error (#5307)
* Fix: create a config with the same name reported an incorrect error

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 3c1759896b)

* Fix: create a config with the same name reported an incorrect error

Signed-off-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
(cherry picked from commit 0aa456642b)

Co-authored-by: wuzhongjian <wuzhongjian_yewu@cmss.chinamobile.com>
2023-01-11 14:25:13 +08:00
github-actions[bot]
789aa38476 [Backport release-1.7] Feat: enhance the application synchronizer (#5305)
* Feat: enhance the application synchronizer

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit e5d112b04a)

* Fix: e2e test case

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 940ffb30f5)

* Fix: the unit test case

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit c3e896fbc1)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2023-01-11 14:24:25 +08:00
github-actions[bot]
17adf35717 Fix: Index structure map[string]string,Mongo resulting in inconsistent results obtained by filtering non-string type by index. (#5303)
Signed-off-by: old.prince <di7zhang@gmail.com>
(cherry picked from commit a763d3da54)

Co-authored-by: old.prince <di7zhang@gmail.com>
2023-01-11 13:06:15 +08:00
github-actions[bot]
586f266798 [Backport release-1.7] Fix: more explicit error when addon package hasn't a metadata.yaml (#5302)
* more explicit error when addon package hasn't a metadata.yaml

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix checkdiff

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 6953fe872d)

* fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 7cacb50976)

* fix test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 4d35cd58f9)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2023-01-11 11:53:00 +08:00
github-actions[bot]
b8dcbe4964 Fix: Delete application fails if status.workflow.endTime not specified. (#5297)
Error Details:
E0106 08:12:02.807341       1 controller.go:317] controller/application "msg"="Reconciler error" "error"="Application.core.oam.dev \"test\" is invalid: status.workflow.endTime: Invalid value: \"null\": status.workflow.endTime in body must be of type string: \"null\"" "name"="test" "namespace"="test" "reconciler group"="core.oam.dev" "reconciler kind"="Application"

If the workflow is not completed, the endtime should be null, and the deletion of the application will fail

Signed-off-by: old.prince <di7zhang@gmail.com>
(cherry picked from commit 16b6a02018)

Co-authored-by: old.prince <di7zhang@gmail.com>
2023-01-10 15:16:26 +08:00
github-actions[bot]
778579c79b Test: let addon helper tests use local helm server (#5290)
Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit 44eaa1d004)

Co-authored-by: Charlie Chiang <charlie_c_0129@outlook.com>
2023-01-09 13:30:04 +08:00
1804 changed files with 185551 additions and 152919 deletions

.github/CODEOWNERS

@@ -1,35 +1,37 @@
# This file is a github code protect rule follow the codeowners https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-code-owners#example-of-a-codeowners-file
* @barnettZQG @wonderflow @leejanee @Somefive @jefree-cat @FogDong @wangyikewxgm @chivalryq @anoop2811 @briankane @jguionnet
design/ @barnettZQG @leejanee @wonderflow @Somefive @jefree-cat @FogDong @anoop2811 @briankane @jguionnet
* @barnettZQG @wonderflow @leejanee @Somefive @jefree-cat @FogDong
design/ @barnettZQG @leejanee @wonderflow @Somefive @jefree-cat @FogDong
# Owner of Core Controllers
pkg/controller/core.oam.dev @Somefive @FogDong @barnettZQG @wonderflow @wangyikewxgm @chivalryq @anoop2811 @briankane @jguionnet
pkg/controller/core.oam.dev @Somefive @FogDong @barnettZQG @wonderflow
# Owner of Standard Controllers
pkg/controller/standard.oam.dev @wangyikewxgm @barnettZQG @wonderflow @Somefive @anoop2811 @FogDong @briankane @jguionnet
pkg/controller/standard.oam.dev @wangyikewxgm @barnettZQG @wonderflow
# Owner of CUE
pkg/cue @leejanee @FogDong @Somefive @anoop2811 @briankane @jguionnet
pkg/stdlib @leejanee @FogDong @Somefive @anoop2811 @briankane @jguionnet
pkg/cue @leejanee @FogDong @Somefive
pkg/stdlib @leejanee @FogDong @Somefive
# Owner of Workflow
pkg/workflow @leejanee @FogDong @Somefive @wangyikewxgm @chivalryq @anoop2811 @briankane @jguionnet
pkg/workflow @leejanee @FogDong @Somefive
# Owner of rollout
pkg/controller/common/rollout/ @wangyikewxgm @wonderflow
runtime/rollout @wangyikewxgm @wonderflow
# Owner of vela templates
vela-templates/ @Somefive @barnettZQG @wonderflow @FogDong @wangyikewxgm @chivalryq @anoop2811 @briankane @jguionnet
vela-templates/ @Somefive @barnettZQG @wonderflow @FogDong
# Owner of vela CLI
references/cli/ @Somefive @StevenLeiZhang @charlie0129 @wangyikewxgm @chivalryq @anoop2811 @FogDong @briankane @jguionnet
references/cli/ @Somefive @zzxwill @StevenLeiZhang @charlie0129 @chivalryq
# Owner of vela APIServer
pkg/apiserver/ @barnettZQG @yangsoon @FogDong
# Owner of vela addon framework
pkg/addon/ @wangyikewxgm @wonderflow @charlie0129 @anoop2811 @FogDong @briankane @jguionnet
pkg/addon/ @wangyikewxgm @wonderflow @charlie0129
# Owner of resource keeper and tracker
pkg/resourcekeeper @Somefive @FogDong @chivalryq @anoop2811 @briankane @jguionnet
pkg/resourcetracker @Somefive @FogDong @chivalryq @anoop2811 @briankane @jguionnet
.github/ @chivalryq @wonderflow @Somefive @FogDong @wangyikewxgm @anoop2811 @briankane @jguionnet
makefiles @chivalryq @wonderflow @Somefive @FogDong @wangyikewxgm @anoop2811 @briankane @jguionnet
go.* @chivalryq @wonderflow @Somefive @FogDong @wangyikewxgm @anoop2811 @briankane @jguionnet
pkg/resourcekeeper @Somefive @FogDong
pkg/resourcetracker @Somefive @FogDong

@@ -1,8 +1,6 @@
### Description of your changes
copilot:all
<!--
Briefly describe what this pull request does. We love pull requests that resolve an open KubeVela issue. If yours does, you

@@ -1,35 +0,0 @@
# Deploy Current Branch Action
This GitHub composite action builds a Docker image from the current branch commit and deploys it to a KubeVela cluster for development testing.
## What it does
- Generates a unique image tag from the latest commit hash
- Builds and loads the Docker image into a KinD cluster
- Applies KubeVela CRDs for upgrade safety
- Upgrades the KubeVela Helm release to use the local development image
- Verifies deployment status and the running image version
## Usage
```yaml
- name: Deploy Current Branch
uses: ./path/to/this/action
```
## Requirements
- Docker, Helm, kubectl, and KinD must be available in your runner environment
- Kubernetes cluster access
- `charts/vela-core/crds` directory with CRDs
- Valid Helm chart at `charts/vela-core`
## Steps performed
1. **Generate commit hash for image tag**
2. **Build & load Docker image into KinD**
3. **Pre-apply chart CRDs**
4. **Upgrade KubeVela using local image**
5. **Verify deployment and image version**
---

@@ -1,89 +0,0 @@
name: 'Deploy Current Branch'
description: 'Builds Docker image from current branch commit and deploys it to KubeVela cluster for development testing'
runs:
using: "composite"
steps:
# ========================================================================
# Git Commit Hash Generation
# Generate unique image tag from current branch's latest commit
# ========================================================================
- name: Get commit hash
id: commit_hash
shell: bash
run: |
COMMIT_HASH="git-$(git rev-parse --short HEAD)"
echo "Using commit hash: $COMMIT_HASH"
echo "COMMIT_HASH=$COMMIT_HASH" >> $GITHUB_ENV
# ========================================================================
# Docker Image Build and Cluster Loading
# Build development image from current code and load into KinD cluster
# ========================================================================
- name: Build and load Docker image
shell: bash
run: |
echo "Building development image: vela-core-test:${{ env.COMMIT_HASH }}"
mkdir -p $HOME/tmp/
docker build --no-cache \
-t vela-core-test:${{ env.COMMIT_HASH }} \
-f Dockerfile .
echo "Loading image into KinD cluster..."
TMPDIR=$HOME/tmp/ kind load docker-image vela-core-test:${{ env.COMMIT_HASH }}
# ========================================================================
# Custom Resource Definitions Application
# Pre-apply CRDs to ensure upgrade compatibility and prevent conflicts
# ========================================================================
- name: Pre-apply CRDs from target chart (upgrade-safe)
shell: bash
run: |
CRD_DIR="charts/vela-core/crds"
echo "Applying CRDs idempotently..."
kubectl apply -f "${CRD_DIR}"
# ========================================================================
# KubeVela Helm Chart Upgrade
# Upgrade existing installation to use locally built development image
# ========================================================================
- name: Upgrade KubeVela to development image
shell: bash
run: |
echo "Upgrading KubeVela to development version..."
helm upgrade kubevela ./charts/vela-core \
--namespace vela-system \
--set image.repository=vela-core-test \
--set image.tag=${{ env.COMMIT_HASH }} \
--set image.pullPolicy=IfNotPresent \
--timeout 5m \
--wait \
--debug
# ========================================================================
# Deployment Status Verification
# Verify successful upgrade and confirm correct image deployment
# ========================================================================
- name: Verify deployment status
shell: bash
run: |
echo "=== DEPLOYMENT VERIFICATION ==="
echo "Verifying upgrade to local development image..."
echo "--- Pod Status ---"
kubectl get pods -n vela-system
echo "--- Deployment Rollout ---"
kubectl rollout status deployment/kubevela-vela-core \
-n vela-system \
--timeout=300s
echo "--- Deployed Image Version ---"
kubectl get deployment kubevela-vela-core \
-n vela-system \
-o yaml | grep "image:" | head -1
echo "Deployment verification completed successfully!"

@@ -1,32 +0,0 @@
# Install Latest KubeVela Release Action
This GitHub composite action installs the latest stable KubeVela release from the official Helm repository and verifies its deployment status.
## What it does
- Discovers the latest stable KubeVela release tag from GitHub
- Adds and updates the official KubeVela Helm chart repository
- Installs KubeVela into the `vela-system` namespace (using Helm)
- Verifies pod status and deployment rollout for successful installation
## Usage
```yaml
- name: Install Latest KubeVela Release
uses: ./path/to/this/action
```
## Requirements
- Helm, kubectl, jq, and curl must be available in your runner environment
- Kubernetes cluster access
## Steps performed
1. **Release Tag Discovery:** Fetches latest stable tag (without `v` prefix)
2. **Helm Repo Setup:** Adds/updates KubeVela Helm chart repo
3. **Install KubeVela:** Installs latest release in the `vela-system` namespace
4. **Status Verification:** Checks pod status and rollout for readiness
---

@@ -1,68 +0,0 @@
name: 'Install Latest KubeVela Release'
description: 'Installs the latest stable KubeVela release from official Helm repository with status verification'
runs:
using: "composite"
steps:
# ========================================================================
# Latest Release Tag Discovery
# Fetch current stable release version from GitHub API
# ========================================================================
- name: Get latest KubeVela release tag (no v prefix)
id: get_latest_tag
shell: bash
run: |
TAG=$(curl -s https://api.github.com/repos/kubevela/kubevela/releases/latest | \
jq -r ".tag_name" | \
awk '{sub(/^v/, ""); print}')
echo "LATEST_TAG=$TAG" >> $GITHUB_ENV
echo "Discovered latest release: $TAG"
# ========================================================================
# Helm Repository Configuration
# Add and update official KubeVela chart repository
# ========================================================================
- name: Add KubeVela Helm repo
shell: bash
run: |
echo "Adding KubeVela Helm repository..."
helm repo add kubevela https://kubevela.github.io/charts
helm repo update
echo "Helm repository configuration completed"
# ========================================================================
# KubeVela Stable Release Installation
# Deploy latest stable version to vela-system namespace
# ========================================================================
- name: Install KubeVela ${{ env.LATEST_TAG }}
shell: bash
run: |
echo "Installing KubeVela version: ${{ env.LATEST_TAG }}"
helm install \
--create-namespace \
-n vela-system \
kubevela kubevela/vela-core \
--version ${{ env.LATEST_TAG }} \
--timeout 10m \
--wait
echo "KubeVela installation completed"
# ========================================================================
# Installation Status Verification
# Verify successful deployment and readiness of KubeVela components
# ========================================================================
- name: Post-install status
shell: bash
run: |
echo "=== INSTALLATION VERIFICATION ==="
echo "Verifying KubeVela deployment status..."
echo "--- Pod Status ---"
kubectl get pods -n vela-system
echo "--- Deployment Rollout ---"
kubectl rollout status deployment/kubevela-vela-core \
-n vela-system \
--timeout=300s
echo "KubeVela installation verification completed successfully!"

@@ -1,51 +0,0 @@
# Kubevela K8s Upgrade E2E Test Action
A comprehensive GitHub composite action for running KubeVela Kubernetes upgrade end-to-end (E2E) tests with complete environment setup, multiple test suites, and failure diagnostics.
> **Note**: This action requires the `GO_VERSION` environment variable to be set in your workflow.
## Quick Start
### Basic Usage
```yaml
name: E2E Tests
on: [push, pull_request]
jobs:
e2e-tests:
runs-on: ubuntu-latest
env:
GO_VERSION: '1.23.8'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run KubeVela E2E Tests
uses: ./.github/actions/upgrade-e2e-test
```
## Test Flow Diagram
```
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ Environment │ │ E2E Environment │ │ Test Execution │
│ Setup │───▶│ Preparation │───▶│ (3 Suites) │
│ │ │ │ │ │
│ • Install tools │ │ • Cleanup │ │ • API tests │
│ • Setup Go │ │ • Core setup │ │ • Addon tests │
│ • Dependencies │ │ • Helm tests │ │ • General tests │
│ • Build project │ │ │ │ │
└─────────────────┘ └──────────────────┘ └─────────────────┘
┌─────────────────┐
│ Diagnostics │
│ (On Failure) │
│ │
│ • Cluster logs │
│ • System events │
│ • Test artifacts│
└─────────────────┘
```

@@ -1,100 +0,0 @@
name: 'Kubevela K8s Upgrade e2e Test'
description: 'Runs Kubevela K8s upgrade e2e tests, uploads coverage, and collects diagnostics on failure.'
inputs:
codecov-token:
description: 'Codecov token for uploading coverage reports'
required: false
default: ''
codecov-enable:
description: 'Enable codecov coverage upload'
required: false
default: 'false'
runs:
using: "composite"
steps:
# ========================================================================
# Environment Setup
# ========================================================================
- name: Configure environment setup
uses: ./.github/actions/env-setup
with:
install-ginkgo: 'true'
install-setup-envtest: 'false'
install-kustomize: 'false'
- name: Build project
shell: bash
run: make
# ========================================================================
# E2E Test Environment Preparation
# ========================================================================
- name: Prepare e2e environment
shell: bash
run: |
echo "Preparing e2e test environment..."
make e2e-cleanup
make e2e-setup-core
echo "Running Helm tests..."
helm test -n vela-system kubevela --timeout 5m
# ========================================================================
# E2E Test Execution
# ========================================================================
- name: Run API e2e tests
shell: bash
run: |
echo "Running API e2e tests..."
make e2e-api-test
- name: Run addon e2e tests
shell: bash
run: |
echo "Running addon e2e tests..."
make e2e-addon-test
- name: Run general e2e tests
shell: bash
run: |
echo "Running general e2e tests..."
make e2e-test
- name: Upload coverage report
if: ${{ inputs.codecov-enable == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24
with:
token: ${{ inputs.codecov-token }}
files: ./coverage.txt
flags: core-unittests
name: codecov-umbrella
fail_ci_if_error: false
# ========================================================================
# Failure Diagnostics
# ========================================================================
- name: Collect failure diagnostics
if: failure()
shell: bash
run: |
echo "=== FAILURE DIAGNOSTICS ==="
echo "Collecting diagnostic information for debugging..."
echo "--- Cluster Status ---"
kubectl get nodes -o wide || true
kubectl get pods -A || true
echo "--- KubeVela System Logs ---"
kubectl logs -n vela-system -l app.kubernetes.io/name=vela-core --tail=100 || true
echo "--- Recent Events ---"
kubectl get events -A --sort-by='.lastTimestamp' --field-selector type!=Normal || true
echo "--- Helm Release Status ---"
helm list -A || true
helm status kubevela -n vela-system || true
echo "--- Test Artifacts ---"
find . -name "*.log" -type f -exec echo "=== {} ===" \; -exec cat {} \; || true

@@ -1,67 +0,0 @@
# Kubevela Test Environment Setup Action
A GitHub Actions composite action that sets up a complete testing environment for Kubevela projects with Go, Kubernetes tools, and the Ginkgo testing framework.
## Features
- 🛠️ **System Dependencies**: Installs essential build tools (make, gcc, jq, curl, etc.)
- ☸️ **Kubernetes Tools**: Sets up kubectl and Helm for cluster operations
- 🐹 **Go Environment**: Configurable Go version with module caching
- 📦 **Dependency Management**: Downloads and verifies Go module dependencies
- 🧪 **Testing Framework**: Installs Ginkgo v2 for BDD-style testing
## Usage
```yaml
- name: Setup Kubevela Test Environment
uses: ./path/to/this/action
with:
go-version: '1.23.8' # Optional: Go version (default: 1.23.8)
```
### Example Workflow
```yaml
name: Kubevela Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Test Environment
uses: ./path/to/this/action
with:
go-version: '1.21'
- name: Run Tests
run: |
ginkgo -r ./tests/e2e/
```
## Inputs
| Input | Description | Required | Default | Usage |
|-------|-------------|----------|---------|-------|
| `go-version` | Go version to install and use | No | `1.23.8` | Specify Go version for your project |
## What This Action Installs
### System Tools
- **make**: Build automation tool
- **gcc**: GNU Compiler Collection
- **jq**: JSON processor for shell scripts
- **ca-certificates**: SSL/TLS certificates
- **curl**: HTTP client for downloads
- **gnupg**: GNU Privacy Guard for security
### Kubernetes Ecosystem
- **kubectl**: Kubernetes command-line tool (latest stable)
- **helm**: Kubernetes package manager (latest stable)
### Go Development
- **Go Runtime**: Specified version with module caching enabled
- **Go Modules**: Downloaded and verified dependencies
- **Ginkgo v2.14.0**: BDD testing framework for Go

@@ -1,120 +0,0 @@
name: 'Kubevela Test Environment Setup'
description: 'Sets up complete testing environment for Kubevela with Go, Kubernetes tools, and testing frameworks.'
inputs:
go-version:
description: 'Go version to use for testing'
required: false
default: '1.23.8'
install-ginkgo:
description: 'Install Ginkgo testing framework'
required: false
default: 'true'
install-setup-envtest:
description: 'Install setup-envtest for integration testing'
required: false
default: 'false'
install-kustomize:
description: 'Install kustomize for manifest management'
required: false
default: 'false'
kustomize-version:
description: 'Kustomize version to install'
required: false
default: '4.5.4'
runs:
using: 'composite'
steps:
# ========================================================================
# Environment Setup
# ========================================================================
- name: Install system dependencies
shell: bash
run: |
# Update package manager and install essential tools
sudo apt-get update
sudo apt-get install -y \
make \
gcc \
jq \
ca-certificates \
curl \
gnupg
- name: Install kubectl and helm
shell: bash
run: |
# Detect architecture
ARCH=$(uname -m | sed 's/x86_64/amd64/' | sed 's/aarch64/arm64/')
# Install kubectl using a known stable version to avoid network issues
# The dl.k8s.io/release/stable.txt endpoint can return garbage due to CDN issues
KUBECTL_VERSION="v1.31.0"
echo "Installing kubectl version: $KUBECTL_VERSION for architecture: $ARCH"
curl -LO --retry 3 --fail "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/${ARCH}/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
# Install helm using the official script
echo "Installing Helm using official script..."
curl -fsSL --retry 3 -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
rm get_helm.sh
# Verify installations
echo "Verifying installations..."
kubectl version --client
helm version
- name: Setup Go environment
uses: actions/setup-go@0a12ed9d6a96ab950c8f026ed9f722fe0da7ef32
with:
go-version: ${{ inputs.go-version }}
cache: true
- name: Download Go dependencies
shell: bash
run: |
# Download and cache Go module dependencies
go mod download
go mod verify
- name: Install Ginkgo testing framework
if: ${{ inputs.install-ginkgo == 'true' }}
shell: bash
run: |
echo "Installing Ginkgo testing framework..."
go install github.com/onsi/ginkgo/v2/ginkgo@v2.14.0
echo "Ginkgo installed successfully"
- name: Install setup-envtest
if: ${{ inputs.install-setup-envtest == 'true' }}
shell: bash
run: |
echo "Installing setup-envtest for integration testing..."
mkdir -p ./bin
GOBIN=$(pwd)/bin go install sigs.k8s.io/controller-runtime/tools/setup-envtest@v0.0.0-20240522175850-2e9781e9fc60
echo "setup-envtest installed successfully at ./bin/setup-envtest"
ls -la ./bin/setup-envtest
# Download and cache the Kubernetes binaries for envtest
echo "Downloading Kubernetes binaries for envtest..."
KUBEBUILDER_ASSETS=$(./bin/setup-envtest use 1.31.0 --bin-dir ./bin -p path)
echo "Kubernetes binaries downloaded successfully"
echo "KUBEBUILDER_ASSETS=${KUBEBUILDER_ASSETS}"
# Export for subsequent steps
echo "KUBEBUILDER_ASSETS=${KUBEBUILDER_ASSETS}" >> $GITHUB_ENV
- name: Install kustomize
if: ${{ inputs.install-kustomize == 'true' }}
shell: bash
run: |
echo "Installing kustomize version ${{ inputs.kustomize-version }}..."
mkdir -p ./bin
curl -sS https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash -s ${{ inputs.kustomize-version }} $(pwd)/bin
echo "kustomize installed successfully at ./bin/kustomize"
./bin/kustomize version

@@ -1,35 +0,0 @@
# Kubevela K8s Upgrade Multicluster E2E Test Action
A comprehensive GitHub Actions composite action for running Kubevela Kubernetes upgrade multicluster end-to-end tests with automated coverage reporting and failure diagnostics.
## Usage
```yaml
name: Kubevela Multicluster E2E Tests
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
multicluster-e2e:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Run Multicluster E2E Tests
uses: ./.github/actions/multicluster-test
with:
codecov-enable: 'true'
codecov-token: ${{ secrets.CODECOV_TOKEN }}
```
## Inputs
| Input | Description | Required | Default | Type |
|-------|-------------|----------|---------|------|
| `codecov-token` | Codecov token for uploading coverage reports | No | `''` | string |
| `codecov-enable` | Enable codecov coverage upload | No | `'false'` | string |

@@ -1,80 +0,0 @@
name: 'Kubevela K8s Upgrade Multicluster E2E Test'
description: 'Runs Kubevela Kubernetes upgrade multicluster end-to-end tests, uploads coverage, and collects diagnostics on failure.'
author: 'viskumar_gwre'
inputs:
codecov-token:
description: 'Codecov token for uploading coverage reports'
required: false
default: ''
codecov-enable:
description: 'Enable codecov coverage upload'
required: false
default: 'false'
runs:
using: 'composite'
steps:
# ========================================================================
# Environment Setup
# ========================================================================
- name: Configure environment setup
uses: ./.github/actions/env-setup
with:
install-ginkgo: 'true'
install-setup-envtest: 'false'
install-kustomize: 'false'
# ========================================================================
# E2E Test Execution
# ========================================================================
- name: Prepare e2e test environment
shell: bash
run: |
# Build CLI tools and prepare test environment
echo "Building KubeVela CLI..."
make vela-cli
echo "Cleaning up previous test artifacts..."
make e2e-cleanup
echo "Setting up core authentication for e2e tests..."
make e2e-setup-core-auth
- name: Execute multicluster upgrade e2e tests
shell: bash
run: |
# Add built CLI to PATH and run multicluster tests
export PATH=$(pwd)/bin:$PATH
echo "Running e2e multicluster upgrade tests..."
make e2e-multicluster-test
- name: Upload coverage report
if: ${{ inputs.codecov-enable == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24
with:
token: ${{ inputs.codecov-token }}
files: /tmp/e2e-profile.out,/tmp/e2e_multicluster_test.out
flags: e2e-multicluster-test
name: codecov-umbrella
# ========================================================================
# Failure Diagnostics
# ========================================================================
- name: Collect failure diagnostics
if: failure()
shell: bash
run: |
echo "=== FAILURE DIAGNOSTICS ==="
echo "Collecting diagnostic information for debugging..."
echo "--- Cluster Status ---"
kubectl get nodes -o wide || true
kubectl get pods -A || true
echo "--- KubeVela System Logs ---"
kubectl logs -n vela-system -l app.kubernetes.io/name=vela-core --tail=100 || true
echo "--- Recent Events ---"
kubectl get events -A --sort-by='.lastTimestamp' --field-selector type!=Normal || true

@@ -1,78 +0,0 @@
# Setup Kind Cluster Action
A GitHub Action that sets up a Kubernetes testing environment using Kind (Kubernetes in Docker) for E2E testing.
## Inputs
| Input | Description | Required | Default |
|-------|-------------|----------|---------|
| `k8s-version` | Kubernetes version for the kind cluster | No | `v1.31.9` |
## Quick Start
```yaml
name: E2E Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: '1.21'
- name: Setup Kind Cluster
uses: ./.github/actions/setup-kind-cluster
with:
k8s-version: 'v1.31.9'
- name: Run tests
run: |
kubectl cluster-info
make test-e2e
```
## What it does
1. **Installs Kind CLI** - Downloads Kind v0.29.0 using Go
2. **Cleans up** - Removes any existing Kind clusters
3. **Creates cluster** - Spins up Kubernetes v1.31.9 cluster
4. **Sets up environment** - Configures KUBECONFIG for kubectl access
5. **Loads images** - Builds and loads Docker images using `make image-load`
## File Structure
Save as `.github/actions/setup-kind-cluster/action.yaml`:
```yaml
name: 'SetUp kind cluster'
description: 'Sets up complete testing environment for Kubevela with Go, Kubernetes tools, and Ginkgo framework for E2E testing.'
inputs:
k8s-version:
description: 'Kubernetes version for the kind cluster'
required: false
default: 'v1.31.9'
runs:
using: 'composite'
steps:
# ========================================================================
# Kind cluster Setup
# ========================================================================
- name: Setup KinD
run: |
go install sigs.k8s.io/kind@v0.29.0
kind delete cluster || true
kind create cluster --image=kindest/node:${{ inputs.k8s-version }}
shell: bash
- name: Load image
run: |
mkdir -p $HOME/tmp/
TMPDIR=$HOME/tmp/ make image-load
shell: bash
```

@@ -1,36 +0,0 @@
name: 'SetUp kind cluster'
description: 'Sets up a KinD (Kubernetes in Docker) cluster with configurable Kubernetes version and optional cluster naming for testing and development workflows.'
inputs:
k8s-version:
description: 'Kubernetes version for the kind cluster'
required: false
default: 'v1.31.9'
name:
description: 'Name of the kind cluster'
required: false
runs:
using: 'composite'
steps:
# ========================================================================
# Kind cluster Setup
# ========================================================================
- name: Setup KinD
run: |
go install sigs.k8s.io/kind@v0.29.0
if [ -n "${{ inputs.name }}" ]; then
kind delete cluster --name="${{ inputs.name }}" || true
kind create cluster --name="${{ inputs.name }}" --image=kindest/node:${{ inputs.k8s-version }}
kind export kubeconfig --internal --name="${{ inputs.name }}" --kubeconfig /tmp/${{ inputs.name }}.kubeconfig
else
kind delete cluster || true
kind create cluster --image=kindest/node:${{ inputs.k8s-version }}
fi
shell: bash
- name: Load image
run: |
if [ -z "${{ inputs.name }}" ]; then
mkdir -p $HOME/tmp/
TMPDIR=$HOME/tmp/ make image-load
fi
shell: bash

@@ -1,34 +0,0 @@
# Kubevela K8s Upgrade Unit Test Action
A comprehensive GitHub composite action for running KubeVela Kubernetes upgrade unit tests with coverage reporting and failure diagnostics.
## Inputs
| Input | Description | Required | Default |
|-------|-------------|----------|---------|
| `codecov-token` | Codecov token for uploading coverage reports | ❌ | `''` |
| `codecov-enable` | Enable Codecov coverage upload (`'true'` or `'false'`) | ❌ | `'false'` |
| `go-version` | Go version to use for testing | ❌ | `'1.23.8'` |
## Quick Start
### Basic Usage
```yaml
name: Unit Tests with Coverage
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run KubeVela Unit Tests
uses: viskumar_gwre/kubevela-k8s-upgrade-unit-test-action@v1
with:
codecov-enable: 'true'
codecov-token: ${{ secrets.CODECOV_TOKEN }}
go-version: '1.23.8'
```

@@ -1,71 +0,0 @@
name: 'Kubevela K8s Upgrade Unit Test'
description: 'Runs Kubevela K8s upgrade unit tests, uploads coverage, and collects diagnostics on failure.'
inputs:
codecov-token:
description: 'Codecov token for uploading coverage reports'
required: false
default: ''
codecov-enable:
description: 'Enable codecov coverage upload'
required: false
default: 'false'
runs:
using: "composite"
steps:
# ========================================================================
# Environment Setup
# ========================================================================
- name: Configure environment setup
uses: ./.github/actions/env-setup
with:
install-ginkgo: 'true'
install-setup-envtest: 'true'
install-kustomize: 'true'
# ========================================================================
# Unit Test Execution
# ========================================================================
- name: Run unit tests
shell: bash
run: |
echo "Running unit tests..."
make test
- name: Upload coverage report
if: ${{ inputs.codecov-enable == 'true' }}
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24
with:
token: ${{ inputs.codecov-token }}
files: ./coverage.txt
flags: core-unittests
name: codecov-umbrella
fail_ci_if_error: false
# ========================================================================
# Failure Diagnostics
# ========================================================================
- name: Collect failure diagnostics
if: failure()
shell: bash
run: |
echo "=== FAILURE DIAGNOSTICS ==="
echo "Collecting diagnostic information for debugging..."
echo "--- Go Environment ---"
go version || true
go env || true
echo "--- Cluster Status ---"
kubectl get nodes -o wide || true
kubectl get pods -A || true
echo "--- KubeVela System Logs ---"
kubectl logs -n vela-system -l app.kubernetes.io/name=vela-core --tail=100 || true
echo "--- Recent Events ---"
kubectl get events -A --sort-by='.lastTimestamp' --field-selector type!=Normal || true
echo "--- Test Artifacts ---"
find . -name "*.log" -o -name "*test*.xml" -o -name "coverage.*" | head -20 || true

@@ -11,7 +11,4 @@ wangyuan249
chivalryq
FogDong
leejanee
barnettZQG
anoop2811
briankane
jguionnet
barnettZQG

@@ -1,23 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "gomod" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "weekly"
commit-message:
prefix: "Chore: "
include: "scope"
ignore:
- dependency-name: k8s.io/*
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
commit-message:
prefix: "Chore: "
include: "scope"

.github/workflows/apiserver-test.yml

@@ -0,0 +1,198 @@
name: VelaUX APIServer Test
on:
push:
branches:
- master
- release-*
- apiserver
tags:
- v*
workflow_dispatch: { }
pull_request:
branches:
- master
- release-*
- apiserver
env:
# Common versions
GO_VERSION: '1.19'
permissions:
contents: read
jobs:
detect-noop:
runs-on: ubuntu-20.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
do_not_skip: '["workflow_dispatch", "schedule", "push"]'
continue-on-error: true
apiserver-unit-tests:
runs-on: ubuntu-20.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Set up Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
id: go
- name: Check out code into the Go module directory
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Cache Go Dependencies
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7
with:
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-pkg-
- name: Install ginkgo
run: |
sudo sed -i 's/azure\.//' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install -y golang-ginkgo-dev
- name: Start MongoDB
uses: supercharge/mongodb-github-action@538a4d2a1041920c47630172445cb688592d6e51 # 1.8.0
with:
mongodb-version: '5.0'
# TODO need update action version to resolve node 12 deprecated.
- name: install Kubebuilder
uses: RyanSiu1995/kubebuilder-action@ff52bff1bae252239223476e5ab0d71d6ba02343
with:
version: 3.1.0
kubebuilderOnly: false
kubernetesVersion: v1.21.2
- name: Run api server unit test
run: make unit-test-apiserver
- name: Upload coverage report
uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ./coverage.txt
flags: apiserver-unittests
name: codecov-umbrella
apiserver-e2e-tests:
runs-on: aliyun
needs: [ detect-noop ]
if: needs.detect-noop.outputs.noop != 'true'
strategy:
matrix:
k8s-version: ["v1.20","v1.24"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.k8s-version }}
cancel-in-progress: true
steps:
- name: Set up Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
id: go
- name: Check out code into the Go module directory
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Get dependencies
run: |
go get -v -t -d ./...
- name: Tear down K3d if exist
run: |
k3d cluster delete || true
k3d cluster delete worker || true
- name: Calculate K3d args
run: |
EGRESS_ARG=""
if [[ "${{ matrix.k8s-version }}" == v1.24 ]]; then
EGRESS_ARG="--k3s-arg --egress-selector-mode=disabled@server:0"
fi
echo "EGRESS_ARG=${EGRESS_ARG}" >> $GITHUB_ENV
- name: Setup K3d (Hub)
uses: nolar/setup-k3d-k3s@293b8e5822a20bc0d5bcdd4826f1a665e72aba96
with:
version: ${{ matrix.k8s-version }}
github-token: ${{ secrets.GITHUB_TOKEN }}
k3d-args: ${{ env.EGRESS_ARG }}
- name: Setup K3d (Worker)
uses: nolar/setup-k3d-k3s@293b8e5822a20bc0d5bcdd4826f1a665e72aba96
with:
version: ${{ matrix.k8s-version }}
github-token: ${{ secrets.GITHUB_TOKEN }}
k3d-name: worker
k3d-args: --kubeconfig-update-default=false --network=k3d-k3s-default ${{ env.EGRESS_ARG }}
- name: Kind Cluster (Worker)
run: |
internal_ip=$(docker network inspect k3d-k3s-default|jq ".[0].Containers"| jq -r '.[]| select(.Name=="k3d-worker-server-0")|.IPv4Address' | cut -d/ -f1)
k3d kubeconfig get worker > /tmp/worker.client.kubeconfig
cp /tmp/worker.client.kubeconfig /tmp/worker.kubeconfig
sed -i "s/0.0.0.0:[0-9]\+/$internal_ip:6443/" /tmp/worker.kubeconfig
- name: Load image to k3d cluster
run: make image-load
- name: Cleanup for e2e tests
run: |
make vela-cli
make e2e-cleanup
make e2e-setup-core
bin/vela addon enable fluxcd
bin/vela addon enable vela-workflow --override-definitions
timeout 600s bash -c -- 'while true; do kubectl get ns flux-system; if [ $? -eq 0 ] ; then break; else sleep 5; fi;done'
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=vela-core,app.kubernetes.io/instance=kubevela -n vela-system --timeout=600s
kubectl wait --for=condition=Ready pod -l app=source-controller -n flux-system --timeout=600s
kubectl wait --for=condition=Ready pod -l app=helm-controller -n flux-system --timeout=600s
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=vela-workflow -n vela-system --timeout=600s
- name: Run api server e2e test
run: |
export ALIYUN_ACCESS_KEY_ID=${{ secrets.ALIYUN_ACCESS_KEY_ID }}
export ALIYUN_ACCESS_KEY_SECRET=${{ secrets.ALIYUN_ACCESS_KEY_SECRET }}
export GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }}
make e2e-apiserver-test
- name: Stop kubevela, get profile
run: make end-e2e-core
- name: Upload coverage report
uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: /tmp/e2e-profile.out, /tmp/e2e_apiserver_test.out
flags: apiserver-e2etests
name: codecov-umbrella
- name: Clean e2e profile
run: rm /tmp/e2e-profile.out
- name: Cleanup image
if: ${{ always() }}
run: make image-cleanup

View File

@@ -10,19 +10,19 @@ permissions:
jobs:
# align with crossplane's choice https://github.com/crossplane/crossplane/blob/master/.github/workflows/backport.yml
open-pr:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
if: github.event.pull_request.merged
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
fetch-depth: 0
- name: Open Backport PR
uses: zeebe-io/backport-action@0193454f0c5947491d348f33a275c119f30eb736
uses: zeebe-io/backport-action@2ee900dc92632adf994f8e437b6d16840fd61f58
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}

View File

@@ -9,15 +9,29 @@ on:
permissions:
contents: read
env:
BUCKET: ${{ secrets.OSS_BUCKET }}
ENDPOINT: ${{ secrets.OSS_ENDPOINT }}
ACCESS_KEY: ${{ secrets.OSS_ACCESS_KEY }}
ACCESS_KEY_SECRET: ${{ secrets.OSS_ACCESS_KEY_SECRET }}
ARTIFACT_HUB_REPOSITORY_ID: ${{ secrets.ARTIFACT_HUB_REPOSITORY_ID }}
jobs:
publish-charts:
env:
HELM_CHARTS_DIR: charts
HELM_CHART: charts/vela-core
MINIMAL_HELM_CHART: charts/vela-minimal
LEGACY_HELM_CHART: legacy/charts/vela-core-legacy
VELA_ROLLOUT_HELM_CHART: runtime/rollout/charts
LOCAL_OSS_DIRECTORY: .oss
HELM_CHART_NAME: vela-core
runs-on: ubuntu-22.04
MINIMAL_HELM_CHART_NAME: vela-minimal
LEGACY_HELM_CHART_NAME: vela-core-legacy
VELA_ROLLOUT_HELM_CHART_NAME: vela-rollout
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Get git revision
id: vars
shell: bash
@@ -28,12 +42,19 @@ jobs:
with:
version: v3.4.0
- name: Setup node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
uses: actions/setup-node@8c91899e586c5b171469028077307d293428b516
with:
node-version: '14'
- name: Generate helm doc
run: |
make helm-doc-gen
- name: Prepare legacy chart
run: |
rsync -r $LEGACY_HELM_CHART $HELM_CHARTS_DIR
rsync -r $HELM_CHART/* $LEGACY_HELM_CHART --exclude=Chart.yaml --exclude=crds
- name: Prepare vela chart
run: |
rsync -r $VELA_ROLLOUT_HELM_CHART $HELM_CHARTS_DIR
- name: Get the version
id: get_version
run: |
@@ -44,24 +65,39 @@ jobs:
image_tag=${{ steps.get_version.outputs.VERSION }}
chart_version=${{ steps.get_version.outputs.VERSION }}
sed -i "s/latest/${image_tag}/g" $HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $MINIMAL_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $LEGACY_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $VELA_ROLLOUT_HELM_CHART/values.yaml
chart_smever=${chart_version#"v"}
sed -i "s/0.1.0/$chart_smever/g" $HELM_CHART/Chart.yaml
- uses: jnwng/github-app-installation-token-action@c54add4c02866dc41e106745ac6dcf5cdd6339e5 # v2
id: get_app_token
with:
appId: 340472
installationId: 38064967
privateKey: ${{ secrets.GH_KUBEVELA_APP_PRIVATE_KEY }}
- name: Sync Chart Repo
sed -i "s/0.1.0/$chart_smever/g" $MINIMAL_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_smever/g" $LEGACY_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_smever/g" $VELA_ROLLOUT_HELM_CHART/Chart.yaml
- name: Install ossutil
run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
- name: Configure Alibaba Cloud OSSUTIL
run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${ENDPOINT} -c .ossutilconfig
- name: sync cloud to local
run: ./ossutil --config-file .ossutilconfig sync oss://$BUCKET/core $LOCAL_OSS_DIRECTORY
- name: add artifacthub stuff to the repo
run: |
git config --global user.email "135009839+kubevela[bot]@users.noreply.github.com"
git config --global user.name "kubevela[bot]"
git clone https://x-access-token:${{ steps.get_app_token.outputs.token }}@github.com/kubevela/charts.git kubevela-charts
helm package $HELM_CHART --destination ./kubevela-charts/docs/
helm repo index --url https://kubevela.github.io/charts ./kubevela-charts/docs/
cd kubevela-charts/
git add docs/
chart_version=${{ steps.get_version.outputs.VERSION }}
git commit -m "update vela-core chart ${chart_version}"
git push https://x-access-token:${{ steps.get_app_token.outputs.token }}@github.com/kubevela/charts.git
rsync $HELM_CHART/README.md $LEGACY_HELM_CHART/README.md
rsync $HELM_CHART/README.md $VELA_ROLLOUT_HELM_CHART/README.md
sed -i "s/ARTIFACT_HUB_REPOSITORY_ID/$ARTIFACT_HUB_REPOSITORY_ID/g" hack/artifacthub/artifacthub-repo.yml
rsync hack/artifacthub/artifacthub-repo.yml $LOCAL_OSS_DIRECTORY
- name: Package helm charts
run: |
helm package $HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $MINIMAL_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $LEGACY_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $VELA_ROLLOUT_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm repo index --url https://$BUCKET.$ENDPOINT/core $LOCAL_OSS_DIRECTORY
- name: sync local to cloud
run: |
image_tag=${{ steps.get_version.outputs.VERSION }}
chart_semver=${image_tag#"v"}
./ossutil --config-file .ossutilconfig cp -f $LOCAL_OSS_DIRECTORY/index.yaml oss://$BUCKET/core/index.yaml
./ossutil --config-file .ossutilconfig cp -f $LOCAL_OSS_DIRECTORY/$HELM_CHART_NAME-${chart_semver}.tgz oss://$BUCKET/core/$HELM_CHART_NAME-${chart_semver}.tgz
./ossutil --config-file .ossutilconfig cp -f $LOCAL_OSS_DIRECTORY/$MINIMAL_HELM_CHART_NAME-${chart_semver}.tgz oss://$BUCKET/core/$MINIMAL_HELM_CHART_NAME-${chart_semver}.tgz
./ossutil --config-file .ossutilconfig cp -f $LOCAL_OSS_DIRECTORY/$LEGACY_HELM_CHART_NAME-${chart_semver}.tgz oss://$BUCKET/core/$LEGACY_HELM_CHART_NAME-${chart_semver}.tgz
./ossutil --config-file .ossutilconfig cp -f $LOCAL_OSS_DIRECTORY/$VELA_ROLLOUT_HELM_CHART_NAME-${chart_semver}.tgz oss://$BUCKET/core/$VELA_ROLLOUT_HELM_CHART_NAME-${chart_semver}.tgz

View File

@@ -10,7 +10,7 @@ permissions:
jobs:
analyze:
name: Analyze
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
permissions:
actions: read # for github/codeql-action/init to get workflow details
@@ -23,15 +23,15 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Initialize CodeQL
uses: github/codeql-action/init@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
uses: github/codeql-action/init@959cbb7472c4d4ad70cdfe6f4976053fe48ab394 # v2.1.37
with:
languages: ${{ matrix.language }}
- name: Autobuild
uses: github/codeql-action/autobuild@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
uses: github/codeql-action/autobuild@959cbb7472c4d4ad70cdfe6f4976053fe48ab394 # v2.1.37
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
uses: github/codeql-action/analyze@959cbb7472c4d4ad70cdfe6f4976053fe48ab394 # v2.1.37

View File

@@ -13,9 +13,9 @@ permissions:
jobs:
check:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
steps:
- uses: thehanimo/pr-title-checker@5652588c80c479af803eabfbdb5a3895a77c1388 # v1.4.1
- uses: thehanimo/pr-title-checker@v1.3.5
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
pass_on_octokit_error: true

View File

@@ -14,18 +14,18 @@ permissions:
jobs:
core-api-test:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
steps:
- name: Set up Go 1.23.8
uses: actions/setup-go@v5
- name: Set up Go 1.19
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
env:
GO_VERSION: '1.23.8'
GO_VERSION: '1.19'
with:
go-version: ${{ env.GO_VERSION }}
id: go
- name: Check out code into the Go module directory
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Get the version
id: get_version

View File

@@ -16,31 +16,32 @@ permissions:
env:
# Common versions
GO_VERSION: '1.23.8'
GO_VERSION: '1.19'
jobs:
definition-doc:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Setup KinD
uses: ./.github/actions/setup-kind-cluster
- name: Setup K3d
uses: nolar/setup-k3d-k3s@293b8e5822a20bc0d5bcdd4826f1a665e72aba96
with:
name: linter
version: v1.20
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Definition Doc generate check
run: |
go build -o docgen hack/docgen/def/gen.go
./docgen --type=comp --force-example-doc --path=./comp-def-check.md
./docgen --type=trait --force-example-doc --path=./trait-def-check.md
./docgen --type=wf --force-example-doc --path=./wf-def-check.md --def-dir=./vela-templates/definitions/internal/workflowstep/
./docgen --type=wf --force-example-doc --path=./wf-def-check.md
./docgen --type=policy --force-example-doc --path=./policy-def-check.md

View File

@@ -18,20 +18,20 @@ permissions:
env:
# Common versions
GO_VERSION: '1.23.8'
GO_VERSION: '1.19'
jobs:
detect-noop:
permissions:
actions: write
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
@@ -39,48 +39,94 @@ jobs:
continue-on-error: true
e2e-multi-cluster-tests:
runs-on: ubuntu-22.04
runs-on: aliyun
needs: [ detect-noop ]
if: needs.detect-noop.outputs.noop != 'true'
strategy:
matrix:
k8s-version: ["v1.31.9"]
k8s-version: ["v1.20","v1.24"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.k8s-version }}
cancel-in-progress: true
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Setup worker cluster kinD
uses: ./.github/actions/setup-kind-cluster
- name: Setup Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
name: worker
k8s-version: ${{ matrix.k8s-version }}
go-version: ${{ env.GO_VERSION }}
- name: Setup master cluster kinD
uses: ./.github/actions/setup-kind-cluster
with:
k8s-version: ${{ matrix.k8s-version }}
- name: Get dependencies
run: |
go get -v -t -d ./...
- name: Run upgrade multicluster tests
uses: ./.github/actions/multicluster-test
- name: Tear down K3d if exist
run: |
k3d cluster delete || true
k3d cluster delete worker || true
- name: Calculate K3d args
run: |
EGRESS_ARG=""
if [[ "${{ matrix.k8s-version }}" == v1.24 ]]; then
EGRESS_ARG="--k3s-arg --egress-selector-mode=disabled@server:0"
fi
echo "EGRESS_ARG=${EGRESS_ARG}" >> $GITHUB_ENV
- name: Setup K3d (Hub)
uses: nolar/setup-k3d-k3s@293b8e5822a20bc0d5bcdd4826f1a665e72aba96
with:
codecov-enable: true
codecov-token: ${{ secrets.CODECOV_TOKEN }}
version: ${{ matrix.k8s-version }}
github-token: ${{ secrets.GITHUB_TOKEN }}
k3d-args: ${{ env.EGRESS_ARG }}
- name: Setup K3d (Worker)
uses: nolar/setup-k3d-k3s@293b8e5822a20bc0d5bcdd4826f1a665e72aba96
with:
version: ${{ matrix.k8s-version }}
github-token: ${{ secrets.GITHUB_TOKEN }}
k3d-name: worker
k3d-args: --kubeconfig-update-default=false --network=k3d-k3s-default ${{ env.EGRESS_ARG }}
- name: Generating internal worker kubeconfig
run: |
internal_ip=$(docker network inspect k3d-k3s-default|jq ".[0].Containers"| jq -r '.[]| select(.Name=="k3d-worker-server-0")|.IPv4Address' | cut -d/ -f1)
k3d kubeconfig get worker > /tmp/worker.client.kubeconfig
cp /tmp/worker.client.kubeconfig /tmp/worker.kubeconfig
sed -i "s/0.0.0.0:[0-9]\+/$internal_ip:6443/" /tmp/worker.kubeconfig
- name: Load image to k3d cluster (hub and worker)
run: make image-load image-load-runtime-cluster
- name: Cleanup for e2e tests
run: |
make vela-cli
make e2e-cleanup
make e2e-setup-core-auth
make setup-runtime-e2e-cluster
- name: Run e2e multicluster tests
run: |
export PATH=$(pwd)/bin:$PATH
make e2e-multicluster-test
- name: Stop kubevela, get profile
run: make end-e2e-core
- name: Upload coverage report
uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: /tmp/e2e-profile.out,/tmp/e2e_multicluster_test.out
flags: e2e-multicluster-test
name: codecov-umbrella
- name: Clean e2e profile
run: |
if [ -f /tmp/e2e-profile.out ]; then
rm /tmp/e2e-profile.out
echo "E2E profile cleaned"
else
echo "E2E profile not found, skipping cleanup"
fi
run: rm /tmp/e2e-profile.out
- name: Cleanup image
if: ${{ always() }}
run: |
make image-cleanup
docker image prune -f --filter "until=24h"
run: make image-cleanup

.github/workflows/e2e-rollout-test.yml
View File

@@ -0,0 +1,116 @@
name: E2E Rollout Test
on:
push:
branches:
- master
- release-*
tags:
- v*
workflow_dispatch: {}
pull_request:
branches:
- master
- release-*
permissions:
contents: read
env:
# Common versions
GO_VERSION: '1.19'
jobs:
detect-noop:
permissions:
actions: write
runs-on: ubuntu-20.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
do_not_skip: '["workflow_dispatch", "schedule", "push"]'
continue-on-error: true
e2e-rollout-tests:
runs-on: aliyun
needs: [ detect-noop ]
if: needs.detect-noop.outputs.noop != 'true'
strategy:
matrix:
k8s-version: ["v1.20","v1.24"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.k8s-version }}
cancel-in-progress: true
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Setup Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
- name: Get dependencies
run: |
go get -v -t -d ./...
- name: Tear down K3d if exist
run: |
k3d cluster delete || true
k3d cluster delete worker || true
- name: Calculate K3d args
run: |
EGRESS_ARG=""
if [[ "${{ matrix.k8s-version }}" == v1.24 ]]; then
EGRESS_ARG="--k3s-arg --egress-selector-mode=disabled@server:0"
fi
echo "EGRESS_ARG=${EGRESS_ARG}" >> $GITHUB_ENV
- name: Setup K3d
uses: nolar/setup-k3d-k3s@293b8e5822a20bc0d5bcdd4826f1a665e72aba96
with:
version: ${{ matrix.k8s-version }}
github-token: ${{ secrets.GITHUB_TOKEN }}
k3d-args: ${{ env.EGRESS_ARG }}
- name: Load image to k3d cluster
run: make image-load image-load-runtime-cluster
- name: Prepare for e2e tests
run: |
make vela-cli
make e2e-cleanup
make e2e-setup-core
make setup-runtime-e2e-cluster
helm test -n vela-system kubevela --timeout 5m
- name: Run e2e tests
run: make e2e-rollout-test
- name: Stop kubevela, get profile
run: make end-e2e
- name: Upload coverage report
uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: /tmp/e2e-profile.out
flags: e2e-rollout-tests
name: codecov-umbrella
- name: Clean e2e profile
run: rm /tmp/e2e-profile.out
- name: Cleanup image
if: ${{ always() }}
run: make image-cleanup

View File

@@ -18,20 +18,20 @@ permissions:
env:
# Common versions
GO_VERSION: '1.23.8'
GO_VERSION: '1.19'
jobs:
detect-noop:
permissions:
actions: write
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
@@ -39,43 +39,85 @@ jobs:
continue-on-error: true
e2e-tests:
runs-on: ubuntu-22.04
runs-on: aliyun
needs: [ detect-noop ]
if: needs.detect-noop.outputs.noop != 'true'
strategy:
matrix:
k8s-version: ["v1.31"]
k8s-version: ["v1.20","v1.24"]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.k8s-version }}
cancel-in-progress: true
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Setup KinD
uses: ./.github/actions/setup-kind-cluster
# ========================================================================
# E2E Test Execution
# ========================================================================
- name: Run upgrade e2e tests
uses: ./.github/actions/e2e-test
- name: Setup Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
codecov-enable: true
codecov-token: ${{ secrets.CODECOV_TOKEN }}
go-version: ${{ env.GO_VERSION }}
- name: Get dependencies
run: |
go get -v -t -d ./...
- name: Tear down K3d if exist
run: |
k3d cluster delete || true
k3d cluster delete worker || true
- name: Calculate K3d args
run: |
EGRESS_ARG=""
if [[ "${{ matrix.k8s-version }}" == v1.24 ]]; then
EGRESS_ARG="--k3s-arg --egress-selector-mode=disabled@server:0"
fi
echo "EGRESS_ARG=${EGRESS_ARG}" >> $GITHUB_ENV
- name: Setup K3d
uses: nolar/setup-k3d-k3s@293b8e5822a20bc0d5bcdd4826f1a665e72aba96
with:
version: ${{ matrix.k8s-version }}
github-token: ${{ secrets.GITHUB_TOKEN }}
k3d-args: ${{ env.EGRESS_ARG }}
- name: Load image to k3d cluster
run: make image-load
- name: Run Make
run: make
- name: Prepare for e2e tests
run: |
make e2e-cleanup
make e2e-setup-core
helm test -n vela-system kubevela --timeout 5m
- name: Run api e2e tests
run: make e2e-api-test
- name: Run addons e2e tests
run: make e2e-addon-test
- name: Run e2e tests
run: make e2e-test
- name: Stop kubevela, get profile
run: make end-e2e
- name: Upload coverage report
uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: /tmp/e2e-profile.out
flags: e2etests
name: codecov-umbrella
- name: Clean e2e profile
run: |
if [ -f /tmp/e2e-profile.out ]; then
rm /tmp/e2e-profile.out
echo "E2E profile cleaned"
else
echo "E2E profile not found, skipping cleanup"
fi
run: rm /tmp/e2e-profile.out
- name: Cleanup image
if: ${{ always() }}
run: |
make image-cleanup
docker image prune -f --filter "until=24h"
run: make image-cleanup

View File

@@ -11,17 +11,18 @@ on:
- master
- release-*
permissions: # added using https://github.com/step-security/secure-workflows
permissions: # added using https://github.com/step-security/secure-workflows
contents: read
env:
# Common versions
GO_VERSION: "1.23.8"
GOLANGCI_VERSION: "v1.60.1"
GO_VERSION: '1.19'
GOLANGCI_VERSION: 'v1.49'
jobs:
detect-noop:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
permissions:
@@ -29,7 +30,7 @@ jobs:
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
@@ -37,18 +38,18 @@ jobs:
continue-on-error: true
staticcheck:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
@@ -59,21 +60,21 @@ jobs:
run: make check-license-header
lint:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
permissions:
contents: read # for actions/checkout to fetch code
pull-requests: read # for golangci/golangci-lint-action to fetch pull requests
contents: read # for actions/checkout to fetch code
pull-requests: read # for golangci/golangci-lint-action to fetch pull requests
steps:
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
@@ -82,73 +83,46 @@ jobs:
# version, but we prefer this action because it leaves 'annotations' (i.e.
# it comments on PRs to point out linter violations).
- name: Lint
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@07db5389c99593f11ad7b44463c2d4233066a9b1 # v3.3.0
with:
version: ${{ env.GOLANGCI_VERSION }}
check-diff:
runs-on: ubuntu-22.04
runs-on: aliyun
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Free Disk Space
run: |
echo "Disk space before cleanup:"
df -h
# Remove unnecessary software to free up disk space
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo docker image prune --all --force
echo "Disk space after cleanup:"
df -h
- name: Setup Env
uses: ./.github/actions/env-setup
- name: Setup node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
- name: Setup Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
node-version: "14"
go-version: ${{ env.GO_VERSION }}
- name: Setup kinD
uses: ./.github/actions/setup-kind-cluster
- name: Setup node
uses: actions/setup-node@8c91899e586c5b171469028077307d293428b516
with:
node-version: '14'
- name: Cache Go Dependencies
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7
with:
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-pkg-
- name: Run cross-build
run: make cross-build
- name: Free Disk Space After Cross-Build
run: |
echo "Disk space before cleanup:"
df -h
# Remove cross-build artifacts to free up space
# (make build will rebuild binaries for current platform)
rm -rf _bin
# Clean Go build cache and test cache
go clean -cache -testcache
# Remove Docker build cache
sudo docker builder prune --all --force || true
echo "Disk space after cleanup:"
df -h
- name: Check Diff
run: |
export PATH=$(pwd)/bin/:$PATH
make check-diff
- name: Cleanup binary
run: make build-cleanup
@@ -159,17 +133,17 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go Dependencies
uses: actions/cache@v4
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7
with:
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
@@ -185,40 +159,60 @@ jobs:
.\bin\vela.exe version
check-core-image-build:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Set up QEMU
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392
uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # v2.2.1
- name: Build Test for vela core
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # v3.2.0
with:
context: .
file: Dockerfile
platforms: linux/amd64,linux/arm64
check-cli-image-build:
runs-on: ubuntu-22.04
check-apiserver-image-build:
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Set up QEMU
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392
uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435
- name: Build Test for CLI
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83
uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # v2.2.1
- name: Build Test for apiserver
uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # v3.2.0
with:
context: .
file: Dockerfile.cli
file: Dockerfile.apiserver
platforms: linux/amd64,linux/arm64
check-cli-image-build:
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Checkout
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Set up QEMU
uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # v2.2.1
- name: Build Test for CLI
uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # v3.2.0
with:
context: .
file: Dockerfile.cli

View File

@@ -1,4 +1,4 @@
name: Run commands for issues and pull requests
name: Run commands when issues are labeled or comments added
on:
issues:
types: [labeled, opened]
@@ -7,55 +7,50 @@ on:
permissions:
contents: read
issues: write
jobs:
bot:
runs-on: ubuntu-22.04
permissions:
pull-requests: write
issues: write
runs-on: ubuntu-20.04
steps:
- name: Checkout Actions
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
repository: "oam-dev/kubevela-github-actions"
path: ./actions
ref: v0.4.2
- name: Setup Node.js
uses: actions/setup-node@1e60f620b9541d16bece96c5465dc8ee9832be0b
uses: actions/setup-node@8c91899e586c5b171469028077307d293428b516
with:
node-version: "14"
cache: "npm"
node-version: '14'
cache: 'npm'
cache-dependency-path: ./actions/package-lock.json
- name: Install Dependencies
run: npm ci --production --prefix ./actions
- name: Run Commands
uses: ./actions/commands
with:
token: ${{ secrets.GH_KUBEVELA_COMMAND_WORKFLOW }}
token: ${{secrets.VELA_BOT_TOKEN}}
configPath: issue-commands
backport:
runs-on: ubuntu-22.04
if: github.event.issue.pull_request && contains(github.event.comment.body, '/backport')
permissions:
contents: write
pull-requests: write
issues: write
pull-requests: write
steps:
- name: Extract Command
id: command
uses: xt0rted/slash-command-action@bf51f8f5f4ea3d58abc7eca58f77104182b23e88
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
repo-token: ${{ secrets.VELA_BOT_TOKEN }}
command: backport
reaction: "true"
reaction-type: "eyes"
allow-edits: "false"
permission-level: read
- name: Handle Command
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea
uses: actions/github-script@d556feaca394842dc55e4734bf3bb9f685482fa0
env:
VERSION: ${{ steps.command.outputs.command-arguments }}
with:
@@ -67,7 +62,7 @@ jobs:
label = "backport " + version
}
// Add our backport label.
github.rest.issues.addLabels({
github.issues.addLabels({
// Every pull request is an issue, but not every issue is a pull request.
issue_number: context.issue.number,
owner: context.repo.owner,
@@ -76,68 +71,11 @@ jobs:
})
console.log("Added '" + label + "' label.")
- name: Checkout
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
fetch-depth: 0
- name: Open Backport PR
uses: zeebe-io/backport-action@0193454f0c5947491d348f33a275c119f30eb736
uses: zeebe-io/backport-action@2ee900dc92632adf994f8e437b6d16840fd61f58
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}
retest:
runs-on: ubuntu-22.04
if: github.event.issue.pull_request && contains(github.event.comment.body, '/retest')
permissions:
actions: write
pull-requests: write
issues: write
steps:
- name: Retest the current pull request
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea
env:
PULL_REQUEST_ID: ${{ github.event.issue.number }}
COMMENT_ID: ${{ github.event.comment.id }}
COMMENT_BODY: ${{ github.event.comment.body }}
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const pull_request_id = process.env.PULL_REQUEST_ID
const comment_id = process.env.COMMENT_ID
const comment_body = process.env.COMMENT_BODY
console.log("retest pr: #" + pull_request_id + " comment: " + comment_body)
const {data: pr} = await github.rest.pulls.get({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: pull_request_id,
})
console.log("pr: " + JSON.stringify(pr))
const action = comment_body.split(" ")[0]
let workflow_ids = comment_body.split(" ").slice(1).filter(line => line.length > 0).map(line => line + ".yml")
if (workflow_ids.length == 0) workflow_ids = ["go.yml", "unit-test.yml", "e2e-test.yml", "e2e-multicluster-test.yml"]
for (let i = 0; i < workflow_ids.length; i++) {
const workflow_id = workflow_ids[i]
const {data: runs} = await github.rest.actions.listWorkflowRuns({
owner: context.repo.owner,
repo: context.repo.repo,
workflow_id: workflow_id,
head_sha: pr.head.sha,
})
console.log("runs for " + workflow_id + ": ", JSON.stringify(runs))
runs.workflow_runs.forEach((workflow_run) => {
if (workflow_run.status === "in_progress") return
let handler = github.rest.actions.reRunWorkflow
if (action === "/retest-failed") handler = github.rest.actions.reRunWorkflowFailedJobs
handler({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: workflow_run.id
})
})
}
github.rest.reactions.createForIssueComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: comment_id,
content: "eyes",
});

View File

@@ -9,17 +9,18 @@ on:
branches:
- master
- release-*
-
permissions:
contents: read
jobs:
license_check:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
name: Check for unapproved licenses
steps:
- uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
- uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Set up Ruby
uses: ruby/setup-ruby@a6e6f86333f0a2523ece813039b8b4be04560854 # v1.190.0
uses: ruby/setup-ruby@v1
with:
ruby-version: 2.6
- name: Install dependencies

View File

@@ -1,45 +1,27 @@
name: Registry
on:
push:
branches:
- master
tags:
- 'v*'
- "v*"
workflow_dispatch: {}
env:
ACCESS_KEY: ${{ secrets.OSS_ACCESS_KEY }}
ACCESS_KEY_SECRET: ${{ secrets.OSS_ACCESS_KEY_SECRET }}
permissions:
contents: read
jobs:
publish-vela-images:
name: Build and Push Vela Images
publish-core-images:
permissions:
packages: write
id-token: write
attestations: write
contents: write
runs-on: ubuntu-22.04
outputs:
vela_core_image: ${{ steps.meta-vela-core.outputs.image }}
vela_core_digest: ${{ steps.meta-vela-core.outputs.digest }}
vela_core_dockerhub_image: ${{ steps.meta-vela-core.outputs.dockerhub_image }}
vela_cli_image: ${{ steps.meta-vela-cli.outputs.image }}
vela_cli_digest: ${{ steps.meta-vela-cli.outputs.digest }}
vela_cli_dockerhub_image: ${{ steps.meta-vela-cli.outputs.dockerhub_image }}
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.1
- name: Install Crane
uses: imjasonh/setup-crane@00c9e93efa4e1138c9a7a5c594acd6c75a2fbf0c # v0.1
- name: Install Cosign
uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # main
with:
cosign-release: 'v2.5.0'
- name: Get the image version
- uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Get the version
id: get_version
run: |
VERSION=${GITHUB_REF#refs/tags/}
@@ -47,41 +29,40 @@ jobs:
VERSION=latest
fi
echo "VERSION=${VERSION}" >> $GITHUB_OUTPUT
- name: Get git revision
id: vars
shell: bash
run: |
echo "git_revision=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: Login to GHCR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- name: Login ghcr.io
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
- name: Login docker.io
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: docker.io
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Setup QEMU
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5 # v3.8.0
- name: Login Alibaba Cloud ACR
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: ${{ secrets.ACR_DOMAIN }}
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
- uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # v2.2.1
with:
driver-opts: image=moby/buildkit:master
- name: Build & Push Vela Core for Dockerhub, GHCR
uses: docker/build-push-action@471d1dc4e07e5cdedd4c2171150001c434f0b7a4 # v6.15.0
- uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # v3.2.0
name: Build & Pushing vela-core for Dockerhub, GHCR and ACR
with:
context: .
file: Dockerfile
labels: |
labels: |-
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
platforms: linux/amd64,linux/arm64
@@ -90,55 +71,17 @@ jobs:
GITVERSION=git-${{ steps.vars.outputs.git_revision }}
VERSION=${{ steps.get_version.outputs.VERSION }}
GOPROXY=https://proxy.golang.org
tags: |
tags: |-
docker.io/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository_owner }}/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
${{ secrets.ACR_DOMAIN }}/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
- name: Get Vela Core Image Digest
id: meta-vela-core
run: |
GHCR_IMAGE=ghcr.io/${{ github.repository_owner }}/oamdev/vela-core
DOCKER_IMAGE=docker.io/oamdev/vela-core
TAG=${{ steps.get_version.outputs.VERSION }}
DIGEST=$(crane digest $GHCR_IMAGE:$TAG)
echo "image=$GHCR_IMAGE" >> $GITHUB_OUTPUT
echo "dockerhub_image=$DOCKER_IMAGE" >> $GITHUB_OUTPUT
echo "digest=$DIGEST" >> $GITHUB_OUTPUT
- name: Generate SBOM for Vela Core Image
id: generate_vela_core_sbom
uses: anchore/sbom-action@v0.17.0
with:
image: ghcr.io/${{ github.repository_owner }}/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
registry-username: ${{ github.actor }}
registry-password: ${{ secrets.GITHUB_TOKEN }}
format: spdx-json
artifact-name: sbom-vela-core.spdx.json
output-file: ${{ github.workspace }}/sbom-vela-core.spdx.json
- name: Sign Vela Core Image and Attest SBOM
env:
COSIGN_EXPERIMENTAL: 'true'
run: |
echo "signing vela core images..."
cosign sign --yes ghcr.io/${{ github.repository_owner }}/oamdev/vela-core@${{ steps.meta-vela-core.outputs.digest }}
cosign sign --yes docker.io/oamdev/vela-core@${{ steps.meta-vela-core.outputs.digest }}
echo "attesting SBOM against the vela core image..."
cosign attest --yes --predicate ${{ github.workspace }}/sbom-vela-core.spdx.json --type spdx \
ghcr.io/${{ github.repository_owner }}/oamdev/vela-core@${{ steps.meta-vela-core.outputs.digest }}
cosign attest --yes --predicate ${{ github.workspace }}/sbom-vela-core.spdx.json --type spdx \
docker.io/oamdev/vela-core@${{ steps.meta-vela-core.outputs.digest }}
- name: Build & Push Vela CLI for Dockerhub, GHCR
uses: docker/build-push-action@471d1dc4e07e5cdedd4c2171150001c434f0b7a4 # v6.15.0
- uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # v3.2.0
name: Build & Pushing CLI for Dockerhub, GHCR and ACR
with:
context: .
file: Dockerfile.cli
labels: |
labels: |-
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
platforms: linux/amd64,linux/arm64
@@ -147,100 +90,106 @@ jobs:
GITVERSION=git-${{ steps.vars.outputs.git_revision }}
VERSION=${{ steps.get_version.outputs.VERSION }}
GOPROXY=https://proxy.golang.org
tags: |
tags: |-
docker.io/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository_owner }}/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }}
${{ secrets.ACR_DOMAIN }}/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }}
- name: Get Vela CLI Image Digest
id: meta-vela-cli
publish-addon-images:
permissions:
packages: write
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Get the version
id: get_version
run: |
GHCR_IMAGE=ghcr.io/${{ github.repository_owner }}/oamdev/vela-cli
DOCKER_IMAGE=docker.io/oamdev/vela-cli
TAG=${{ steps.get_version.outputs.VERSION }}
DIGEST=$(crane digest $GHCR_IMAGE:$TAG)
echo "image=$GHCR_IMAGE" >> $GITHUB_OUTPUT
echo "dockerhub_image=$DOCKER_IMAGE" >> $GITHUB_OUTPUT
echo "digest=$DIGEST" >> $GITHUB_OUTPUT
- name: Generate SBOM for Vela CLI Image
id: generate_sbom
uses: anchore/sbom-action@v0.17.0
VERSION=${GITHUB_REF#refs/tags/}
if [[ ${GITHUB_REF} == "refs/heads/master" ]]; then
VERSION=latest
fi
echo "VERSION=${VERSION}" >> $GITHUB_OUTPUT
- name: Get git revision
id: vars
shell: bash
run: |
echo "git_revision=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: Login ghcr.io
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
image: ghcr.io/${{ github.repository_owner }}/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }}
registry-username: ${{ github.actor }}
registry-password: ${{ secrets.GITHUB_TOKEN }}
format: spdx-json
artifact-name: sbom-vela-cli.spdx.json
output-file: ${{ github.workspace }}/sbom-vela-cli.spdx.json
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login docker.io
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: docker.io
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login Alibaba Cloud ACR
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # v2.1.0
with:
registry: ${{ secrets.ACR_DOMAIN }}
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # v2.1.0
- uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # v2.2.1
with:
driver-opts: image=moby/buildkit:master
- name: Sign Vela CLI Image and Attest SBOM
env:
COSIGN_EXPERIMENTAL: 'true'
run: |
echo "signing vela CLI images..."
cosign sign --yes ghcr.io/${{ github.repository_owner }}/oamdev/vela-cli@${{ steps.meta-vela-cli.outputs.digest }}
cosign sign --yes docker.io/oamdev/vela-cli@${{ steps.meta-vela-cli.outputs.digest }}
- uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # v3.2.0
name: Build & Pushing vela-apiserver for Dockerhub, GHCR and ACR
with:
context: .
file: Dockerfile.apiserver
labels: |-
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
build-args: |
GITVERSION=git-${{ steps.vars.outputs.git_revision }}
VERSION=${{ steps.get_version.outputs.VERSION }}
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository_owner }}/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
${{ secrets.ACR_DOMAIN }}/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
echo "attesting SBOM against the vela cli image..."
cosign attest --yes --predicate ${{ github.workspace }}/sbom-vela-cli.spdx.json --type spdx \
ghcr.io/${{ github.repository_owner }}/oamdev/vela-cli@${{ steps.meta-vela-cli.outputs.digest }}
- uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # v3.2.0
name: Build & Pushing runtime rollout Dockerhub, GHCR and ACR
with:
context: .
file: runtime/rollout/Dockerfile
labels: |-
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
build-args: |
GITVERSION=git-${{ steps.vars.outputs.git_revision }}
VERSION=${{ steps.get_version.outputs.VERSION }}
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository_owner }}/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
${{ secrets.ACR_DOMAIN }}/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
cosign attest --yes --predicate ${{ github.workspace }}/sbom-vela-cli.spdx.json --type spdx \
docker.io/oamdev/vela-cli@${{ steps.meta-vela-cli.outputs.digest }}
- name: Publish SBOMs as release artifacts
uses: anchore/sbom-action/publish-sbom@v0.17.0
provenance-ghcr:
name: Generate and Push Provenance to GCHR
needs: publish-vela-images
if: startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
include:
- name: 'Vela Core Image'
image: ${{ needs.publish-vela-images.outputs.vela_core_image }}
digest: ${{ needs.publish-vela-images.outputs.vela_core_digest }}
- name: 'Vela CLI Image'
image: ${{ needs.publish-vela-images.outputs.vela_cli_image }}
digest: ${{ needs.publish-vela-images.outputs.vela_cli_digest }}
permissions:
id-token: write
contents: write
actions: read
packages: write
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0 # has to be sem var
with:
image: ${{ matrix.image }}
digest: ${{ matrix.digest }}
registry-username: ${{ github.actor }}
secrets:
registry-password: ${{ secrets.GITHUB_TOKEN }}
provenance-dockerhub:
name: Generate and Push Provenance to DockerHub
needs: publish-vela-images
if: startsWith(github.ref, 'refs/tags/')
strategy:
matrix:
include:
- name: 'Vela Core Image'
image: ${{ needs.publish-vela-images.outputs.vela_core_dockerhub_image }}
digest: ${{ needs.publish-vela-images.outputs.vela_core_digest }}
- name: 'Vela CLI Image'
image: ${{ needs.publish-vela-images.outputs.vela_cli_dockerhub_image }}
digest: ${{ needs.publish-vela-images.outputs.vela_cli_digest }}
permissions:
id-token: write
contents: write
packages: write
actions: read
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0
with:
image: ${{ matrix.image }}
digest: ${{ matrix.digest }}
secrets:
registry-username: ${{ secrets.DOCKER_USERNAME }}
registry-password: ${{ secrets.DOCKER_PASSWORD }}
publish-capabilities:
env:
CAPABILITY_BUCKET: kubevela-registry
CAPABILITY_DIR: capabilities
CAPABILITY_ENDPOINT: oss-cn-beijing.aliyuncs.com
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Install ossutil
run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
- name: Configure Alibaba Cloud OSSUTIL
run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${CAPABILITY_ENDPOINT} -c .ossutilconfig
- name: sync capabilities bucket to local
run: ./ossutil --config-file .ossutilconfig sync oss://$CAPABILITY_BUCKET $CAPABILITY_DIR
- name: rsync all capabilites
run: rsync vela-templates/registry/auto-gen/* $CAPABILITY_DIR
- name: sync local to cloud
run: ./ossutil --config-file .ossutilconfig sync $CAPABILITY_DIR oss://$CAPABILITY_BUCKET -f

View File

@@ -4,15 +4,19 @@ on:
push:
tags:
- "v*"
workflow_dispatch: {}
workflow_dispatch: { }
env:
BUCKET: ${{ secrets.CLI_OSS_BUCKET }}
ENDPOINT: ${{ secrets.CLI_OSS_ENDPOINT }}
ACCESS_KEY: ${{ secrets.CLI_OSS_ACCESS_KEY }}
ACCESS_KEY_SECRET: ${{ secrets.CLI_OSS_ACCESS_KEY_SECRET }}
permissions:
contents: read
jobs:
goreleaser:
name: goreleaser
runs-on: ubuntu-22.04
build:
permissions:
contents: write
actions: read
@@ -22,83 +26,48 @@ jobs:
pull-requests: read
repository-projects: read
statuses: read
id-token: write
outputs:
hashes: ${{ steps.hash.outputs.hashes }}
runs-on: ubuntu-latest
name: goreleaser
steps:
- name: Check disk (before)
run: |
df -h
sudo du -sh /usr/local/lib/android /usr/share/dotnet /opt/ghc || true
- name: Free Disk Space (Ubuntu)
uses: insightsengineering/disk-space-reclaimer@v1
with:
# this might remove tools that are actually needed,
# if set to "true" but frees about 6 GB
tools-cache: false
# all of these default to true, but feel free to set to
# "false" if necessary for your workflow
android: true
dotnet: true
haskell: true
large-packages: true
swap-storage: true
docker-images: true
# Extra prune in case your job builds/pulls images
- name: Deep Docker prune
run: |
docker system prune -af || true
docker builder prune -af || true
- name: Checkout
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
fetch-depth: 0
- name: Get Git tags
run: git fetch --force --tags
- run: git fetch --force --tags
- name: Set up Go
uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: 1.23.8
go-version: 1.19
cache: true
- name: Install Cosign
uses: sigstore/cosign-installer@main
with:
cosign-release: "v2.5.0"
- name: Install syft
uses: anchore/sbom-action/download-syft@f325610c9f50a54015d37c8d16cb3b0e2c8f4de0 # v0.18.0
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@9c156ee8a17a598857849441385a2041ef570552 # v6.3.0
- uses: goreleaser/goreleaser-action@9754a253a8673b0ea869c2e863b4e975497efd0c # v4.1.1
with:
distribution: goreleaser
version: 1.14.1
args: release --rm-dist --timeout 60m
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate hashes
id: hash
if: startsWith(github.ref, 'refs/tags/')
# Since goreleaser haven't supported aliyun OSS, we need to upload the release manually
- name: Get version
run: echo "VELA_VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_ENV
- name: Install ossutil
run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
- name: Configure Alibaba Cloud OSSUTIL
run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${ENDPOINT}
- name: split files to be upload
run: mkdir -p ./dist/files_upload && mv ./dist/*.tar.gz ./dist/files_upload && mv ./dist/*.zip ./dist/files_upload
- name: sync local to cloud
run: ./ossutil --config-file .ossutilconfig sync ./dist/files_upload oss://$BUCKET/binary/vela/${{ env.VELA_VERSION }}
- name: sync the latest version file
if: ${{ !contains(env.VELA_VERSION,'alpha') && !contains(env.VELA_VERSION,'beta') }}
run: |
set -euo pipefail
HASHES=$(find dist -type f -exec sha256sum {} \; | base64 -w0)
echo "hashes=$HASHES" >> "$GITHUB_OUTPUT"
- name: Check disk (after)
run: df -h
LATEST_VERSION=$(curl -fsSl https://static.kubevela.net/binary/vela/latest_version)
verlte() {
[ "$1" = "`echo -e "$1\n$2" | sort -V | head -n1`" ]
}
verlte ${{ env.VELA_VERSION }} $LATEST_VERSION && echo "${{ env.VELA_VERSION }} <= $LATEST_VERSION, skip update" && exit 0
echo ${{ env.VELA_VERSION }} > ./latest_version
./ossutil --config-file .ossutilconfig cp -u ./latest_version oss://$BUCKET/binary/vela/latest_version
upload-plugin-homebrew:
name: upload-sha256sums
needs: goreleaser
runs-on: ubuntu-22.04
if: ${{ !contains(github.ref, 'alpha') && !contains(github.ref, 'beta') && !contains(github.ref, 'rc') }}
permissions:
contents: write
actions: read
@@ -108,22 +77,19 @@ jobs:
pull-requests: read
repository-projects: read
statuses: read
needs: build
runs-on: ubuntu-latest
name: upload-sha256sums
steps:
- name: Checkout
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Update kubectl plugin version in krew-index
uses: rajatjindal/krew-release-bot@df3eb197549e3568be8b4767eec31c5e8e8e6ad8 # v0.0.46
provenance-vela-bins:
name: generate provenance for binaries
needs: [goreleaser]
if: startsWith(github.ref, 'refs/tags/')
permissions:
id-token: write
contents: write
actions: read
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0 # has to be sem var
with:
base64-subjects: "${{ needs.goreleaser.outputs.hashes }}"
upload-assets: true
uses: rajatjindal/krew-release-bot@3320c0b546b5d2320613c46762bd3f73e2801bdc # v0.0.38
- name: Update Homebrew formula
uses: dawidd6/action-homebrew-bump-formula@02e79d9da43d79efa846d73695b6052cbbdbf48a # v3.8.3
with:
token: ${{ secrets.HOMEBREW_TOKEN }}
formula: kubevela
tag: ${{ github.ref }}
revision: ${{ github.sha }}
force: false

View File

@@ -12,7 +12,7 @@ permissions: read-all
jobs:
analysis:
name: Scorecards analysis
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
@@ -23,12 +23,12 @@ jobs:
steps:
- name: "Checkout code"
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@f49aabe0b5af0936a0987cfb85d86b75731b0186 # tag=v2.4.1
uses: ossf/scorecard-action@937ffa90d79c7d720498178154ad4c7ba1e4ad8c # tag=v2.1.0
with:
results_file: results.sarif
results_format: sarif
@@ -47,7 +47,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@134dcf33c0b9454c4b17a936843d7e21dccdc335 # v4.3.6
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # v3.1.1
with:
name: SARIF file
path: results.sarif
@@ -55,6 +55,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
uses: github/codeql-action/upload-sarif@959cbb7472c4d4ad70cdfe6f4976053fe48ab394 # v2.1.37
with:
sarif_file: results.sarif

View File

@@ -1,49 +0,0 @@
name: SDK Test
on:
push:
tags:
- v*
workflow_dispatch: {}
pull_request:
paths:
- "vela-templates/definitions/**"
- "pkg/definition/gen_sdk/**"
branches:
- master
- release-*
permissions:
contents: read
jobs:
sdk-tests:
runs-on: ubuntu-22.04
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- name: Setup Env
uses: ./.github/actions/env-setup
- name: Install Go tools
run: |
make goimports
make golangci
- name: Setup KinD
uses: ./.github/actions/setup-kind-cluster
with:
name: sdk-test
- name: Build CLI
run: make vela-cli
- name: Build SDK
run: bin/vela def gen-api -f vela-templates/definitions/internal/ -o ./kubevela-go-sdk --package=github.com/kubevela-contrib/kubevela-go-sdk --init
- name: Validate SDK
run: |
cd kubevela-go-sdk
go mod tidy
golangci-lint run --timeout 5m -e "exported:" -e "dot-imports" ./...


@@ -10,15 +10,20 @@ on:
permissions:
contents: read
env:
GO_VERSION: '1.19'
jobs:
sync-core-api:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
- name: Set up Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Env
uses: ./.github/actions/env-setup
- name: Check out code into the Go module directory
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Get the version
id: get_version


@@ -1,49 +0,0 @@
name: Sync SDK
on:
push:
paths:
- vela-templates/definitions/internal/**
- pkg/definition/gen_sdk/**
- .github/workflows/sync-sdk.yaml
tags:
- "v*"
branches:
- master
- release-*
permissions:
contents: read
jobs:
sync_sdk:
runs-on: ubuntu-22.04
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
- name: Env setup
uses: ./.github/actions/env-setup
- name: Install Go tools
run: |
make goimports
- name: Build CLI
run: make vela-cli
- name: Setup KinD
uses: ./.github/actions/setup-kind-cluster
with:
name: sync-sdk
- name: Get the version
id: get_version
run: echo "VERSION=${GITHUB_REF}" >> $GITHUB_OUTPUT
- name: Sync SDK to kubevela/kubevela-go-sdk
run: bash ./hack/sdk/sync.sh
env:
SSH_DEPLOY_KEY: ${{ secrets.GO_SDK_DEPLOY_KEY }}
VERSION: ${{ steps.get_version.outputs.VERSION }}
COMMIT_ID: ${{ github.sha }}

14 .github/workflows/timed-task.yml vendored Normal file

@@ -0,0 +1,14 @@
name: Timed Task
on:
schedule:
- cron: '* * * * *'
permissions:
contents: read
jobs:
clean-image:
runs-on: aliyun
steps:
- name: Cleanup image
run: docker image prune -f


@@ -10,24 +10,24 @@ permissions:
jobs:
images:
name: Image Scan
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
- name: Build Vela Core image from Dockerfile
run: |
docker build --build-arg GOPROXY=https://proxy.golang.org -t docker.io/oamdev/vela-core:${{ github.sha }} .
- name: Run Trivy vulnerability scanner for vela core
uses: aquasecurity/trivy-action@d9cd5b1c23aaf8cb31bb09141028215828364bbb # master
uses: aquasecurity/trivy-action@master
with:
image-ref: 'docker.io/oamdev/vela-core:${{ github.sha }}'
format: 'sarif'
output: 'trivy-results.sarif'
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@51f77329afa6477de8c49fc9c7046c15b9a4e79d # v3.29.5
uses: github/codeql-action/upload-sarif@v2
if: always()
with:
sarif_file: 'trivy-results.sarif'


@@ -5,7 +5,7 @@ on:
branches:
- master
- release-*
workflow_dispatch: {}
workflow_dispatch: { }
pull_request:
branches:
- master
@@ -14,17 +14,22 @@ on:
permissions:
contents: read
env:
# Common versions
GO_VERSION: '1.19'
jobs:
detect-noop:
permissions:
actions: write # for fkirc/skip-duplicate-actions to skip or stop workflow runs
runs-on: ubuntu-22.04
actions: write # for fkirc/skip-duplicate-actions to skip or stop workflow runs
runs-on: ubuntu-20.04
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.mdx", "**.png", "**.jpg"]'
@@ -32,24 +37,55 @@ jobs:
continue-on-error: true
unit-tests:
runs-on: ubuntu-22.04
runs-on: ubuntu-20.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Set up Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568
with:
go-version: ${{ env.GO_VERSION }}
- name: Check out code into the Go module directory
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
uses: actions/checkout@755da8c3cf115ac066823e79a1e1788f8940201b
with:
submodules: true
- name: Setup Env
uses: ./.github/actions/env-setup
- name: Setup KinD with Kubernetes
uses: ./.github/actions/setup-kind-cluster
- name: Run unit tests
uses: ./.github/actions/unit-test
- name: Cache Go Dependencies
uses: actions/cache@9b0c1fce7a93df8e3bb8926b0d6e9d89e92f20a7
with:
codecov-enable: true
codecov-token: ${{ secrets.CODECOV_TOKEN }}
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-pkg-
- name: Install ginkgo
run: |
sudo sed -i 's/azure\.//' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install -y golang-ginkgo-dev
- name: Setup K3d
uses: nolar/setup-k3d-k3s@293b8e5822a20bc0d5bcdd4826f1a665e72aba96
with:
version: v1.20
github-token: ${{ secrets.GITHUB_TOKEN }}
# TODO need update action version to resolve node 12 deprecated.
- name: install Kubebuilder
uses: RyanSiu1995/kubebuilder-action@ff52bff1bae252239223476e5ab0d71d6ba02343
with:
version: 3.1.0
kubebuilderOnly: false
kubernetesVersion: v1.21.2
- name: Run Make test
run: make test
- name: Upload coverage report
uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ./coverage.txt
flags: core-unittests
name: codecov-umbrella


@@ -1,98 +0,0 @@
# =============================================================================
# E2E Upgrade Multicluster Test Workflow
# =============================================================================
# This workflow performs end-to-end testing for KubeVela multicluster upgrades.
# It tests the upgrade path from the latest released version to the current
# development branch across multiple Kubernetes versions.
#
# Test Flow:
# 1. Install latest KubeVela release
# 2. Build and upgrade to current development version
# 3. Run multicluster e2e tests to verify functionality
# =============================================================================
name: E2E Upgrade Multicluster Test
# =============================================================================
# Trigger Configuration
# =============================================================================
on:
# Trigger on pull requests targeting main branches
pull_request:
branches:
- master
- release-*
# Allow manual workflow execution
workflow_dispatch: {}
# =============================================================================
# Security Configuration
# =============================================================================
permissions:
contents: read # Read-only access to repository contents
# =============================================================================
# Global Environment Variables
# =============================================================================
env:
GO_VERSION: '1.23.8' # Go version for building and testing
# =============================================================================
# Job Definitions
# =============================================================================
jobs:
upgrade-multicluster-tests:
name: Upgrade Multicluster Tests
runs-on: ubuntu-22.04
if: startsWith(github.head_ref, 'chore/upgrade-k8s-')
timeout-minutes: 60 # Prevent hanging jobs
# ==========================================================================
# Matrix Strategy - Test against multiple Kubernetes versions
# ==========================================================================
strategy:
fail-fast: false # Continue testing other versions if one fails
matrix:
k8s-version: ['v1.31.9']
# ==========================================================================
# Concurrency Control - Prevent overlapping runs
# ==========================================================================
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.k8s-version }}
cancel-in-progress: true
steps:
# ========================================================================
# Environment Setup
# ========================================================================
- name: Check out repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
# ========================================================================
# Kubernetes Cluster Setup
# ========================================================================
- name: Setup worker cluster kinD
uses: ./.github/actions/setup-kind-cluster
with:
name: worker
- name: Setup KinD master clusters for multicluster testing
uses: ./.github/actions/setup-kind-cluster
with:
k8s-version: ${{ matrix.k8s-version }}
- name: Deploy latest release
uses: ./.github/actions/deploy-latest-release
- name: Upgrade from current branch
uses: ./.github/actions/deploy-current-branch
- name: Run upgrade multicluster tests
uses: ./.github/actions/multicluster-test
with:
codecov-enable: false
codecov-token: ''


@@ -1,102 +0,0 @@
# =============================================================================
# Upgrade E2E Test Workflow
# =============================================================================
# This workflow performs comprehensive end-to-end testing for KubeVela upgrades.
# It validates the upgrade path from the latest stable release to the current
# development version by running multiple test suites including API, addon,
# and general e2e tests.
#
# Test Flow:
# 1. Install latest KubeVela release
# 2. Build and upgrade to current development version
# 3. Run comprehensive e2e test suites (API, addon, general)
# 4. Validate upgrade functionality and compatibility
# =============================================================================
name: Upgrade E2E Test
# =============================================================================
# Trigger Configuration
# =============================================================================
on:
# Trigger on pull requests targeting main branches
pull_request:
branches:
- master
- release-*
# Allow manual workflow execution
workflow_dispatch: {}
# =============================================================================
# Environment Variables
# =============================================================================
env:
GO_VERSION: '1.23.8'
# =============================================================================
# Security Configuration
# =============================================================================
permissions:
contents: read # Read-only access to repository contents
# =============================================================================
# Job Definitions
# =============================================================================
jobs:
upgrade-tests:
name: Upgrade E2E Tests
runs-on: ubuntu-22.04
if: startsWith(github.head_ref, 'chore/upgrade-k8s-')
timeout-minutes: 90 # Extended timeout for comprehensive e2e testing
# ==========================================================================
# Matrix Strategy - Test against multiple Kubernetes versions
# ==========================================================================
strategy:
fail-fast: false # Continue testing other versions if one fails
matrix:
k8s-version: ['v1.31.9']
# ==========================================================================
# Concurrency Control - Prevent overlapping runs
# ==========================================================================
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.k8s-version }}
cancel-in-progress: true
steps:
# ========================================================================
# Repository Setup
# ========================================================================
- name: Check out code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
# ========================================================================
# Kubernetes Cluster Setup
# ========================================================================
- name: Setup KinD with Kubernetes ${{ matrix.k8s-version }}
uses: ./.github/actions/setup-kind-cluster
with:
k8s-version: ${{ matrix.k8s-version }}
- name: Build vela CLI
run: make vela-cli
- name: Build kubectl-vela plugin
run: make kubectl-vela
- name: Install kustomize
run: make kustomize
- name: Deploy latest release
uses: ./.github/actions/deploy-latest-release
- name: Upgrade from current branch
uses: ./.github/actions/deploy-current-branch
# ========================================================================
# E2E Test Execution
# ========================================================================
- name: Run upgrade e2e tests
uses: ./.github/actions/e2e-test


@@ -1,83 +0,0 @@
# =============================================================================
# Upgrade Unit Test Workflow
# =============================================================================
# This workflow performs unit testing for KubeVela upgrades by:
# 1. Installing the latest stable KubeVela release
# 2. Building and upgrading to the current development version
# 3. Running unit tests to validate the upgrade functionality
# =============================================================================
name: Upgrade Unit Test
# =============================================================================
# Trigger Configuration
# =============================================================================
on:
# Trigger on pull requests targeting main and release branches
pull_request:
branches:
- master
- release-*
# Allow manual workflow execution
workflow_dispatch: {}
# =============================================================================
# Security Configuration
# =============================================================================
permissions:
contents: read # Read-only access to repository contents
# =============================================================================
# Job Definitions
# =============================================================================
jobs:
upgrade-tests:
name: Upgrade Unit Tests
runs-on: ubuntu-22.04
if: startsWith(github.head_ref, 'chore/upgrade-k8s-')
timeout-minutes: 45 # Prevent hanging jobs
# ==========================================================================
# Matrix Strategy - Test against multiple Kubernetes versions
# ==========================================================================
strategy:
fail-fast: false # Continue testing other versions if one fails
matrix:
k8s-version: ['v1.31.9']
# ==========================================================================
# Concurrency Control - Prevent overlapping runs
# ==========================================================================
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.k8s-version }}
cancel-in-progress: true
steps:
# ========================================================================
# Environment Setup
# ========================================================================
- name: Check out code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
# ========================================================================
# Kubernetes Cluster Setup
# ========================================================================
- name: Setup KinD with Kubernetes ${{ matrix.k8s-version }}
uses: ./.github/actions/setup-kind-cluster
with:
k8s-version: ${{ matrix.k8s-version }}
- name: Deploy latest release
uses: ./.github/actions/deploy-latest-release
- name: Upgrade from current branch
uses: ./.github/actions/deploy-current-branch
- name: Run unit tests
uses: ./.github/actions/unit-test
with:
codecov-enable: false
codecov-token: ''


@@ -1,165 +0,0 @@
name: Webhook Upgrade Validation
on:
push:
branches:
- master
- release-*
tags:
- v*
workflow_dispatch: {}
pull_request:
branches:
- master
- release-*
permissions:
contents: read
env:
GO_VERSION: '1.23.8'
jobs:
webhook-upgrade-check:
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- name: Setup Env
uses: ./.github/actions/env-setup
- name: Setup KinD
run: |
go install sigs.k8s.io/kind@v0.29.0
kind delete cluster || true
kind create cluster --image=kindest/node:v1.31.9
- name: Install KubeVela CLI
run: curl -fsSL https://kubevela.io/script/install.sh | bash
- name: Install KubeVela baseline
run: |
vela install --set featureGates.enableCueValidation=true
kubectl wait --namespace vela-system --for=condition=Available deployment/kubevela-vela-core --timeout=300s
- name: Prepare failing chart changes
run: |
cat <<'CHART' > charts/vela-core/templates/defwithtemplate/resource.yaml
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/resource.cue
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Add resource requests and limits on K8s pod for your workload which follows the pod spec in path 'spec.template.'
name: resource
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
appliesToWorkloads:
- deployments.apps
- statefulsets.apps
- daemonsets.apps
- jobs.batch
- cronjobs.batch
podDisruptive: true
schematic:
cue:
template: |2
let resourceContent = {
resources: {
if parameter.cpu != _|_ if parameter.memory != _|_ if parameter.requests == _|_ if parameter.limits == _|_ {
// +patchStrategy=retainKeys
requests: {
cpu: parameter.cpu
memory: parameter.memory
}
// +patchStrategy=retainKeys
limits: {
cpu: parameter.cpu
memory: parameter.memory
}
}
if parameter.requests != _|_ {
// +patchStrategy=retainKeys
requests: {
cpu: parameter.requests.cpu
memory: parameter.requests.memory
}
}
if parameter.limits != _|_ {
// +patchStrategy=retainKeys
limits: {
cpu: parameter.limits.cpu
memory: parameter.limits.memory
}
}
}
}
if context.output.spec != _|_ if context.output.spec.template != _|_ {
patch: spec: template: spec: {
// +patchKey=name
containers: [resourceContent]
}
}
if context.output.spec != _|_ if context.output.spec.jobTemplate != _|_ {
patch: spec: jobTemplate: spec: template: spec: {
// +patchKey=name
containers: [resourceContent]
}
}
parameter: {
// +usage=Specify the amount of cpu for requests and limits
cpu?: *1 | number | string
// +usage=Specify the amount of memory for requests and limits
memory?: *"2048Mi" | =~"^([1-9][0-9]{0,63})(E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)$"
// +usage=Specify the resources in requests
requests?: {
// +usage=Specify the amount of cpu for requests
cpu: *1 | number | string
// +usage=Specify the amount of memory for requests
memory: *"2048Mi" | =~"^([1-9][0-9]{0,63})(E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)$"
}
// +usage=Specify the resources in limits
limits?: {
// +usage=Specify the amount of cpu for limits
cpu: *1 | number | string
// +usage=Specify the amount of memory for limits
memory: *"2048Mi" | =~"^([1-9][0-9]{0,63})(E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)$"
}
}
CHART
- name: Load image
run: |
mkdir -p $HOME/tmp/
TMPDIR=$HOME/tmp/ make image-load
- name: Run Helm upgrade (expected to fail)
run: |
set +e
helm upgrade \
--set image.repository=vela-core-test \
--set image.tag=$(git rev-parse --short HEAD) \
--set featureGates.enableCueValidation=true \
--wait kubevela ./charts/vela-core --debug -n vela-system
status=$?
echo "Helm upgrade exit code: ${status}"
if [ $status -eq 0 ]; then
echo "Expected helm upgrade to fail" >&2
exit 1
fi
echo "Helm upgrade failed as expected"
- name: Dump webhook configurations
if: ${{ always() }}
run: |
kubectl get mutatingwebhookconfiguration kubevela-vela-core-admission -o yaml
kubectl get validatingwebhookconfiguration kubevela-vela-core-admission -o yaml
- name: Verify webhook validation remains active
run: ginkgo -v --focus-file requiredparam_validation_test.go ./test/e2e-test
- name: Cleanup kind cluster
if: ${{ always() }}
run: kind delete cluster --name kind

11 .gitignore vendored

@@ -35,21 +35,12 @@ vendor/
.vscode
.history
# Debug binaries generated by VS Code/Delve
__debug_bin*
*/__debug_bin*
# Webhook certificates generated at runtime
k8s-webhook-server/
options.go.bak
pkg/test/vela
config/crd/bases
_tmp/
references/cmd/cli/fake/source.go
references/cmd/cli/fake/chart_source.go
references/vela-sdk-gen/*
charts/vela-core/crds/_.yaml
.test_vela
tmp/
@@ -59,6 +50,8 @@ tmp/
# check docs
git-page/
# e2e rollout runtime image build
runtime/rollout/e2e/tmp
vela.json
dist/


@@ -1,6 +1,18 @@
run:
timeout: 10m
skip-files:
- "zz_generated\\..+\\.go$"
- ".*_test.go$"
skip-dirs:
- "hack"
- "e2e"
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: colored-line-number
linters-settings:
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
@@ -11,12 +23,24 @@ linters-settings:
# default is false: such cases aren't reported by default.
check-blank: false
# [deprecated] comma-separated list of pairs of the form pkg:regex
# the regex is used to ignore names within pkg. (default "fmt:.*").
# see https://github.com/kisielk/errcheck#the-deprecated-method for details
ignore: fmt:.*,io/ioutil:^Read.*
exhaustive:
# indicates that switch statements are to be considered exhaustive if a
# 'default' case is present, even if all enum members aren't listed in the
# switch
default-signifies-exhaustive: true
govet:
# report about shadowed variables
check-shadowing: false
revive:
# minimal confidence for issues, default is 0.8
min-confidence: 0.8
gofmt:
# simplify code: gofmt with `-s` option, true by default
@@ -29,8 +53,11 @@ linters-settings:
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 35
min-complexity: 30
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
@@ -46,6 +73,13 @@ linters-settings:
# tab width in spaces. Default to 1.
tab-width: 1
unused:
# treat code as a program (not a library) and report unused exported identifiers; default is false.
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
@@ -73,13 +107,9 @@ linters-settings:
# Allow only slices initialized with a length of zero. Default is false.
always: false
revive:
rules:
- name: unused-parameter
disabled: true
linters:
enable:
- megacheck
- govet
- gocyclo
- gocritic
@@ -91,10 +121,11 @@ linters:
- misspell
- nakedret
- exportloopref
- unused
- gosimple
- staticcheck
disable:
- deadcode
- scopelint
- structcheck
- varcheck
- rowserrcheck
- sqlclosecheck
- errchkjson
@@ -106,28 +137,8 @@ linters:
issues:
exclude-files:
- "zz_generated\\..+\\.go$"
- ".*_test.go$"
exclude-dirs:
- "hack"
- "e2e"
# Excluding configuration per-path and per-linter
exclude-rules:
- path: .*\.go
linters:
- errcheck
text: "fmt\\."
# Ignore unchecked errors from io/ioutil functions starting with Read
- path: .*\.go
linters:
- errcheck
text: "io/ioutil.*Read"
# Exclude some linters from running on tests files.
- path: _test(ing)?\.go
linters:
@@ -144,21 +155,6 @@ issues:
linters:
- gocritic
# The preferFprint suggestion (sb.WriteString(fmt.Sprintf(...)) -> fmt.Fprintf(sb, ...))
# is a micro-optimization. The defkit package generates CUE code infrequently,
# so the performance difference is negligible and the current style is more readable.
- path: pkg/definition/defkit/
text: "preferFprint"
linters:
- gocritic
# Gosmopolitan complains of internationalization issues on the file that actually defines
# the translation.
- path: i18n\.go
text: "Han"
linters:
- gosmopolitan
# These are performance optimisations rather than style issues per se.
# They warn when function arguments or range values copy a lot of memory
# rather than using a pointer.
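For readers unfamiliar with the gocritic diagnostic excluded above, a hypothetical snippet (not taken from the repo) showing the pattern preferFprint flags and the rewrite it suggests:

package main

import (
	"fmt"
	"strings"
)

func main() {
	var sb strings.Builder

	// Pattern flagged by preferFprint: format into a temporary string, then copy it into the builder.
	sb.WriteString(fmt.Sprintf("definition %q has %d parameters\n", "resource", 4))

	// Suggested rewrite: write the formatted text directly into the builder.
	fmt.Fprintf(&sb, "definition %q has %d parameters\n", "resource", 4)

	fmt.Print(sb.String())
}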
@@ -224,7 +220,7 @@ issues:
new: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
max-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
max-same-issues: 0


@@ -31,28 +31,6 @@ builds:
ldflags:
- -s -w -X github.com/oam-dev/kubevela/version.VelaVersion={{ .Version }} -X github.com/oam-dev/kubevela/version.GitRevision=git-{{.ShortCommit}}
sboms:
- id: kubevela-binaries-sboms
artifacts: binary
documents:
- "${artifact}-{{ .Version }}-{{ .Os }}-{{ .Arch }}.spdx.sbom.json"
signs:
- id: kubevela-cosign-keyless
artifacts: checksum # sign the checksum file over individual artifacts
signature: "${artifact}-keyless.sig"
certificate: "${artifact}-keyless.pem"
cmd: cosign
args:
- "sign-blob"
- "--yes"
- "--output-signature"
- "${artifact}-keyless.sig"
- "--output-certificate"
- "${artifact}-keyless.pem"
- "${artifact}"
output: true
archives:
- format: tar.gz
id: vela-cli-tgz


@@ -230,7 +230,7 @@ spec:
1. Workflow support specify Order Steps by Field Tag (#2022)
2. support application policy (#2011)
3. add OCM multi cluster demo (#1992)
4. Fix(volume): separate volume to trait (#2027)
4. Fix(volume): seperate volume to trait (#2027)
5. allow application skip gc resource and leave workload ownerReference controlled by rollout(#2024)
6. Store component parameters in context (#2030)
7. Allow specify chart values for helm trait(#2033)


@@ -1,3 +1,3 @@
# CONTRIBUTING Guide
Please refer to https://kubevela.io/docs/contributor/overview for details.
Please refer to https://kubevela.io/docs/contributor/overview for details.


@@ -1,6 +1,6 @@
ARG BASE_IMAGE
# Build the manager binary
FROM golang:1.23.8-alpine@sha256:b7486658b87d34ecf95125e5b97e8dfe86c21f712aa36fc0c702e5dc41dc63e1 AS builder
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.19-alpine@sha256:a9b24b67dc83b3383d22a14941c2b2b2ca6a103d805cac6820fd1355943beaf1 as builder
WORKDIR /workspace
# Copy the Go Modules manifests
@@ -9,7 +9,7 @@ COPY go.sum go.sum
# It's a proxy for CN developer, please unblock it if you have network issue
ARG GOPROXY
ENV GOPROXY=${GOPROXY:-https://proxy.golang.org}
ENV GOPROXY=${GOPROXY:-https://goproxy.cn}
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
@@ -34,7 +34,7 @@ RUN GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=${TARGETARCH} \
# You can replace distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
# Overwrite `BASE_IMAGE` by passing `--build-arg=BASE_IMAGE=gcr.io/distroless/static:nonroot`
FROM ${BASE_IMAGE:-alpine:3.18}
FROM ${BASE_IMAGE:-alpine:3.15@sha256:cf34c62ee8eb3fe8aa24c1fab45d7e9d12768d945c3f5a6fd6a63d901e898479}
# This is required by daemon connecting with cri
RUN apk add --no-cache ca-certificates bash expat

48 Dockerfile.apiserver Normal file

@@ -0,0 +1,48 @@
ARG BASE_IMAGE
# Build the manager binary
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.19-alpine@sha256:a9b24b67dc83b3383d22a14941c2b2b2ca6a103d805cac6820fd1355943beaf1 as builder
ARG GOPROXY
ENV GOPROXY=${GOPROXY:-https://goproxy.cn}
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
# cache deps before building and copying source so that we don't need to re-download as much
# and so that source changes don't invalidate our downloaded layer
RUN go mod download
# Copy the go source for building apiserver
COPY cmd/apiserver/ cmd/apiserver/
COPY apis/ apis/
COPY pkg/ pkg/
COPY version/ version/
COPY references/ references/
# Build
ARG TARGETARCH
ARG VERSION
ARG GITVERSION
RUN GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=${TARGETARCH} \
go build -a -ldflags "-s -w -X github.com/oam-dev/kubevela/version.VelaVersion=${VERSION:-undefined} -X github.com/oam-dev/kubevela/version.GitRevision=${GITVERSION:-undefined}" \
-o apiserver-${TARGETARCH} cmd/apiserver/main.go
# Use alpine as base image due to the discussion in issue #1448
# You can replace distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
# Overwrite `BASE_IMAGE` by passing `--build-arg=BASE_IMAGE=gcr.io/distroless/static:nonroot`
FROM ${BASE_IMAGE:-alpine:3.15@sha256:cf34c62ee8eb3fe8aa24c1fab45d7e9d12768d945c3f5a6fd6a63d901e898479}
# This is required by daemon connecting with cri
RUN apk add --no-cache ca-certificates bash expat
WORKDIR /
ARG TARGETARCH
COPY --from=builder /workspace/apiserver-${TARGETARCH} /usr/local/bin/apiserver
COPY entrypoint.sh /usr/local/bin/
ENTRYPOINT ["entrypoint.sh"]
CMD ["apiserver"]


@@ -1,8 +1,8 @@
ARG BASE_IMAGE
# Build the cli binary
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.23.8-alpine@sha256:b7486658b87d34ecf95125e5b97e8dfe86c21f712aa36fc0c702e5dc41dc63e1 AS builder
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.19-alpine@sha256:a9b24b67dc83b3383d22a14941c2b2b2ca6a103d805cac6820fd1355943beaf1 as builder
ARG GOPROXY
ENV GOPROXY=${GOPROXY:-https://proxy.golang.org}
ENV GOPROXY=${GOPROXY:-https://goproxy.cn}
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod


@@ -1,6 +1,6 @@
ARG BASE_IMAGE
# Build the manager binary
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.23.8-alpine@sha256:b7486658b87d34ecf95125e5b97e8dfe86c21f712aa36fc0c702e5dc41dc63e1 AS builder
FROM --platform=${BUILDPLATFORM:-linux/amd64} golang:1.19-alpine@sha256:a9b24b67dc83b3383d22a14941c2b2b2ca6a103d805cac6820fd1355943beaf1 as builder
WORKDIR /workspace
# Copy the Go Modules manifests
@@ -13,6 +13,7 @@ RUN go mod download
# Copy the go source
COPY cmd/core/main.go main.go
COPY cmd/core/main_e2e_test.go main_e2e_test.go
COPY cmd/apiserver/main.go cmd/apiserver/main.go
COPY cmd/ cmd/
COPY apis/ apis/
COPY pkg/ pkg/
@@ -28,6 +29,10 @@ RUN apk add gcc musl-dev libc-dev ;\
GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=${TARGETARCH} \
go test -c -o manager-${TARGETARCH} -cover -covermode=atomic -coverpkg ./... .
RUN GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=${TARGETARCH} \
go build -a -ldflags "-s -w -X github.com/oam-dev/kubevela/version.VelaVersion=${VERSION:-undefined} -X github.com/oam-dev/kubevela/version.GitRevision=${GITVERSION:-undefined}" \
-o apiserver-${TARGETARCH} cmd/apiserver/main.go
# Use alpine as base image due to the discussion in issue #1448
# You can replace distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
@@ -41,6 +46,7 @@ WORKDIR /
ARG TARGETARCH
COPY --from=builder /workspace/manager-${TARGETARCH} /usr/local/bin/manager
COPY --from=builder /workspace/apiserver-${TARGETARCH} /usr/local/bin/apiserver
COPY entrypoint.sh /usr/local/bin/

117 Makefile

@@ -8,77 +8,64 @@ include makefiles/e2e.mk
.DEFAULT_GOAL := all
all: build
# ==============================================================================
# Targets
## test: Run tests
test: envtest unit-test-core test-cli-gen
# Run tests
test: unit-test-core test-cli-gen
@$(OK) unit-tests pass
## test-cli-gen: Run the unit tests for cli gen
test-cli-gen:
@mkdir -p ./bin/doc
@go run ./hack/docgen/cli/gen.go ./bin/doc
## unit-test-core: Run the unit tests for core
mkdir -p ./bin/doc
go run ./hack/docgen/cli/gen.go ./bin/doc
unit-test-core:
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)" go test -coverprofile=coverage.txt $(shell go list ./pkg/... ./cmd/... ./apis/... | grep -v apiserver | grep -v applicationconfiguration)
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)" go test $(shell go list ./references/... | grep -v apiserver)
go test -coverprofile=coverage.txt $(shell go list ./pkg/... ./cmd/... ./apis/... | grep -v apiserver | grep -v applicationconfiguration)
go test $(shell go list ./references/... | grep -v apiserver)
unit-test-apiserver:
go test -gcflags=all=-l -coverprofile=coverage.txt $(shell go list ./pkg/... ./cmd/... | grep -E 'apiserver|velaql')
## build: Build vela cli binary
# Build vela cli binary
build: vela-cli kubectl-vela
@$(OK) build succeed
## build-cli: Clean build
build-cleanup:
@echo "===========> Cleaning all build output"
@rm -rf _bin
rm -rf _bin
## fmt: Run go fmt against code
# Run go fmt against code
fmt: goimports installcue
go fmt ./...
$(GOIMPORTS) -local github.com/oam-dev/kubevela -w $$(go list -f {{.Dir}} ./...)
$(CUE) fmt ./vela-templates/definitions/internal/*
$(CUE) fmt ./vela-templates/definitions/deprecated/*
$(CUE) fmt ./vela-templates/definitions/registry/*
$(CUE) fmt ./pkg/workflow/template/static/*
$(CUE) fmt ./pkg/workflow/providers/...
## sdk_fmt: Run go fmt against code
sdk_fmt:
./hack/sdk/reviewable.sh
## vet: Run go vet against code
$(CUE) fmt ./pkg/stdlib/pkgs/*
$(CUE) fmt ./pkg/stdlib/op.cue
$(CUE) fmt ./pkg/workflow/tasks/template/static/*
# Run go vet against code
vet:
@$(INFO) go vet
@go vet $(shell go list ./...|grep -v scaffold)
go vet ./...
## staticcheck: Run the staticcheck
staticcheck: staticchecktool
@$(INFO) staticcheck
@$(STATICCHECK) $(shell go list ./...|grep -v scaffold)
$(STATICCHECK) ./...
## lint: Run the golangci-lint
lint: golangci
@$(INFO) lint
@GOLANGCILINT=$(GOLANGCILINT) ./hack/utils/golangci-lint-wrapper.sh
$(GOLANGCILINT) run ./...
## reviewable: Run the reviewable
## Run make build to compile vela binary before running this target to ensure all generated definitions are up to date.
reviewable: build manifests fmt vet lint staticcheck helm-doc-gen sdk_fmt
reviewable: manifests fmt vet lint staticcheck helm-doc-gen
go mod tidy
# check-diff: Execute auto-gen code commands and ensure branch is clean.
# Execute auto-gen code commands and ensure branch is clean.
check-diff: reviewable
git --no-pager diff
git diff --quiet || ($(ERR) please run 'make reviewable' to include all changes && false)
@$(OK) branch is clean
## docker-push: Push the docker image
# Push the docker image
docker-push:
@echo "===========> Pushing docker image"
@docker push $(VELA_CORE_IMAGE)
docker push $(VELA_CORE_IMAGE)
build-swagger:
go run ./cmd/apiserver/main.go build-swagger ./docs/apidoc/swagger.json
## image-cleanup: Delete Docker images
image-cleanup:
ifneq (, $(shell which docker))
# Delete Docker images
@@ -87,27 +74,45 @@ ifneq ($(shell docker images -q $(VELA_CORE_TEST_IMAGE)),)
docker rmi -f $(VELA_CORE_TEST_IMAGE)
endif
ifneq ($(shell docker images -q $(VELA_RUNTIME_ROLLOUT_TEST_IMAGE)),)
docker rmi -f $(VELA_RUNTIME_ROLLOUT_TEST_IMAGE)
endif
## image-load: load docker image to the kind cluster
endif
# load docker image to the k3d cluster
image-load:
docker build -t $(VELA_CORE_TEST_IMAGE) -f Dockerfile.e2e .
kind load docker-image $(VELA_CORE_TEST_IMAGE) || { echo >&2 "kind not installed or error loading image: $(VELA_CORE_TEST_IMAGE)"; exit 1; }
k3d image import $(VELA_CORE_TEST_IMAGE) || { echo >&2 "kind not installed or error loading image: $(VELA_CORE_TEST_IMAGE)"; exit 1; }
## core-test: Run tests
image-load-runtime-cluster:
/bin/sh hack/e2e/build_runtime_rollout.sh
docker build -t $(VELA_RUNTIME_ROLLOUT_TEST_IMAGE) -f runtime/rollout/e2e/Dockerfile.e2e runtime/rollout/e2e/
rm -rf runtime/rollout/e2e/tmp
k3d image import $(VELA_RUNTIME_ROLLOUT_TEST_IMAGE) || { echo >&2 "kind not installed or error loading image: $(VELA_RUNTIME_ROLLOUT_TEST_IMAGE)"; exit 1; }
k3d cluster get $(RUNTIME_CLUSTER_NAME) && k3d image import $(VELA_RUNTIME_ROLLOUT_TEST_IMAGE) --cluster=$(RUNTIME_CLUSTER_NAME) || echo "no worker cluster"
# Run tests
core-test:
go test ./pkg/... -coverprofile cover.out
## manager: Build vela core manager binary
# Build vela core manager and apiserver binary
manager:
$(GOBUILD_ENV) go build -o bin/manager -a -ldflags $(LDFLAGS) ./cmd/core/main.go
$(GOBUILD_ENV) go build -o bin/apiserver -a -ldflags $(LDFLAGS) ./cmd/apiserver/main.go
## manifests: Generate manifests e.g. CRD, RBAC etc.
manifests: tidy installcue kustomize sync-crds
vela-runtime-rollout-manager:
$(GOBUILD_ENV) go build -o ./runtime/rollout/bin/manager -a -ldflags $(LDFLAGS) ./runtime/rollout/cmd/main.go
# Generate manifests e.g. CRD, RBAC etc.
manifests: installcue kustomize
go generate $(foreach t,pkg apis,./$(t)/...)
# TODO(yangsoon): kustomize will merge all CRD into a whole file, it may not work if we want patch more than one CRD in this way
$(KUSTOMIZE) build config/crd -o config/crd/base/core.oam.dev_applications.yaml
go run ./hack/crd/dispatch/dispatch.go config/crd/base charts/vela-core/crds
./hack/crd/cleanup.sh
go run ./hack/crd/dispatch/dispatch.go config/crd/base charts/vela-core/crds runtime/ charts/vela-minimal/crds
rm -f config/crd/base/*
./vela-templates/gen_definitions.sh
@@ -119,26 +124,12 @@ HOSTARCH := amd64
endif
## check-license-header: Check license header
check-license-header:
./hack/licence/header-check.sh
## def-gen: Install definitions
def-install:
./hack/utils/installdefinition.sh
## helm-doc-gen: Generate helm chart README.md
helm-doc-gen: helmdoc
readme-generator -v charts/vela-core/values.yaml -r charts/vela-core/README.md
## help: Display help information
help: Makefile
@echo ""
@echo "Usage:"
@echo ""
@echo " make [target]"
@echo ""
@echo "Targets:"
@echo ""
@awk -F ':|##' '/^[^\.%\t][^\t]*:.*##/{printf " \033[36m%-20s\033[0m %s\n", $$1, $$NF}' $(MAKEFILE_LIST) | sort
@sed -n 's/^##//p' ${MAKEFILE_LIST} | column -t -s ':' | sed -e 's/^/ /'
readme-generator -v charts/vela-minimal/values.yaml -r charts/vela-minimal/README.md


@@ -17,8 +17,7 @@
[![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/kubevela)](https://artifacthub.io/packages/search?repo=kubevela)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4602/badge)](https://bestpractices.coreinfrastructure.org/projects/4602)
![E2E status](https://github.com/kubevela/kubevela/workflows/E2E%20Test/badge.svg)
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/kubevela/kubevela/badge)](https://scorecard.dev/viewer/?uri=github.com/kubevela/kubevela)
[![](https://img.shields.io/badge/KubeVela-Check%20Your%20Contribution-orange)](https://opensource.alibaba.com/contribution_leaderboard/details?projectValue=kubevela)
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/kubevela/kubevela/badge)](https://api.securityscorecards.dev/projects/github.com/kubevela/kubevela)
## Introduction
@@ -30,28 +29,17 @@ KubeVela is a modern application delivery platform that makes deploying and oper
KubeVela practices the "render, orchestrate, deploy" workflow with below highlighted values added to existing ecosystem:
#### **Deployment as Code**
* Deployment as Code
Declare your deployment plan as workflow, run it automatically with any CI/CD or GitOps system, extend or re-program the workflow steps with [CUE](https://cuelang.org/).
No ad-hoc scripts, no dirty glue code, just deploy. The deployment workflow in KubeVela is powered by [Open Application Model](https://oam.dev/).
Declare your deployment plan as workflow, run it automatically with any CI/CD or GitOps system, extend or re-program the workflow steps with CUE. No add-hoc scripts, no dirty glue code, just deploy. The deployment workflow in KubeVela is powered by [Open Application Model](https://oam.dev/).
#### **Built-in observability, multi-tenancy and security support**
* Built-in security and compliance building blocks
Choose from the wide range of LDAP integrations we provided out-of-box, enjoy enhanced [multi-tenancy and multi-cluster authorization and authentication](https://kubevela.net/docs/platform-engineers/auth/advance),
pick and apply fine-grained RBAC modules and customize them as per your own supply chain requirements.
All delivery process has fully [automated observability dashboards](https://kubevela.net/docs/platform-engineers/operations/observability).
Choose from the wide range of LDAP integrations we provided out-of-box, enjoy multi-cluster authorization that is fully automated, pick and apply fine-grained RBAC modules and customize them per your own supply chain requirements.
#### **Multi-cloud/hybrid-environments app delivery as first-class citizen**
* Multi-cloud/hybrid-environments app delivery as first-class citizen
Natively supports multi-cluster/hybrid-cloud scenarios such as progressive rollout across test/staging/production environments,
automatic canary, blue-green and continuous verification, rich placement strategy across clusters and clouds,
along with automated cloud environments provision.
#### **Lightweight but highly extensible architecture**
Minimize your control plane deployment with only one pod and 0.5c1g resources to handle thousands of application delivery.
Glue and orchestrate all your infrastructure capabilities as reusable modules with a highly extensible architecture
and share the large growing community [addons](https://kubevela.net/docs/reference/addons/overview).
Progressive rollout across test/staging/production environments, automatic canary, blue-green and continuous verification, rich placement strategy across clusters and clouds, fully managed cloud environments provision.
## Getting Started
@@ -59,14 +47,6 @@ and share the large growing community [addons](https://kubevela.net/docs/referen
* [Installation](https://kubevela.io/docs/install)
* [Deploy Your Application](https://kubevela.io/docs/quick-start)
### Get Your Own Demo with Alibaba Cloud
- install KubeVela on a Serverless K8S cluster in 3 minutes, try:
<a href="https://acs.console.aliyun.com/quick-deploy?repo=kubevela/kubevela&branch=master" target="_blank">
<img src="https://img.alicdn.com/imgextra/i1/O1CN01aiPSuA1Wiz7wkgF5u_!!6000000002823-55-tps-399-70.svg" width="200" alt="Deploy on Alibaba Cloud">
</a>
## Documentation
Full documentation is available on the [KubeVela website](https://kubevela.io/).
@@ -107,7 +87,7 @@ Check out [KubeVela videos](https://kubevela.io/videos/talks/en/oam-dapr) for th
## Contributing
Check out [CONTRIBUTING](https://kubevela.io/docs/contributor/overview) to see how to develop with KubeVela
Check out [CONTRIBUTING](https://kubevela.io/docs/contributor/overview) to see how to develop with KubeVela.
## Report Vulnerability
@@ -115,4 +95,4 @@ Security is a first priority thing for us at KubeVela. If you come across a rela
## Code of Conduct
KubeVela adopts [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
KubeVela adopts [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).


@@ -26,13 +26,23 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
wfTypesv1alpha1 "github.com/kubevela/pkg/apis/oam/v1alpha1"
workflowv1alpha1 "github.com/kubevela/workflow/api/v1alpha1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
"github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
"github.com/oam-dev/kubevela/pkg/oam"
)
// Kube defines the encapsulation in raw Kubernetes resource format
type Kube struct {
// Template defines the raw Kubernetes resource
// +kubebuilder:pruning:PreserveUnknownFields
Template runtime.RawExtension `json:"template"`
// Parameters defines configurable parameters
Parameters []KubeParameter `json:"parameters,omitempty"`
}
// ParameterValueType refers to a data type of parameter
type ParameterValueType string
@@ -43,6 +53,31 @@ const (
BooleanType ParameterValueType = "boolean"
)
// A KubeParameter defines a configurable parameter of a component.
type KubeParameter struct {
// Name of this parameter
Name string `json:"name"`
// +kubebuilder:validation:Enum:=string;number;boolean
// ValueType indicates the type of the parameter value, and
// only supports basic data types: string, number, boolean.
ValueType ParameterValueType `json:"type"`
// FieldPaths specifies an array of fields within this workload that will be
// overwritten by the value of this parameter. All fields must be of the
// same type. Fields are specified as JSON field paths without a leading
// dot, for example 'spec.replicas'.
FieldPaths []string `json:"fieldPaths"`
// +kubebuilder:default:=false
// Required specifies whether or not a value for this parameter must be
// supplied when authoring an Application.
Required *bool `json:"required,omitempty"`
// Description of this parameter.
Description *string `json:"description,omitempty"`
}
// CUE defines the encapsulation in CUE format
type CUE struct {
// Template defines the abstraction template data of the capability, it will replace the old CUE template in extension field.
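For orientation, a minimal sketch (illustrative values only, not code from this diff) of how the Kube schematic and KubeParameter types above could be populated from Go, assuming the apis/core.oam.dev/common package shown here:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"

	common "github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)

func main() {
	required := true
	desc := "Number of replicas for the workload"

	kube := common.Kube{
		// Raw Kubernetes manifest carried as a RawExtension.
		Template: runtime.RawExtension{
			Raw: []byte(`{"apiVersion":"apps/v1","kind":"Deployment","spec":{"replicas":1}}`),
		},
		// One configurable parameter, overwriting spec.replicas in the template.
		Parameters: []common.KubeParameter{{
			Name:        "replicas",
			ValueType:   common.ParameterValueType("number"), // one of string/number/boolean per the enum above
			FieldPaths:  []string{"spec.replicas"},
			Required:    &required,
			Description: &desc,
		}},
	}
	fmt.Printf("kube schematic with %d parameter(s)\n", len(kube.Parameters))
}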
@@ -53,11 +88,26 @@ type CUE struct {
// Schematic defines the encapsulation of this capability(workload/trait/scope),
// the encapsulation can be defined in different ways, e.g. CUE/HCL(terraform)/KUBE(K8s Object)/HELM, etc...
type Schematic struct {
KUBE *Kube `json:"kube,omitempty"`
CUE *CUE `json:"cue,omitempty"`
HELM *Helm `json:"helm,omitempty"`
Terraform *Terraform `json:"terraform,omitempty"`
}
// A Helm represents resources used by a Helm module
type Helm struct {
// Release records a Helm release used by a Helm module workload.
// +kubebuilder:pruning:PreserveUnknownFields
Release runtime.RawExtension `json:"release"`
// HelmRelease records a Helm repository used by a Helm module workload.
// +kubebuilder:pruning:PreserveUnknownFields
Repository runtime.RawExtension `json:"repository"`
}
// Terraform is the struct to describe cloud resources managed by Hashicorp Terraform
type Terraform struct {
// Configuration is Terraform Configuration
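Similarly, a short sketch of the Schematic wrapper above carrying a CUE template; the template body is illustrative, not from the repo:

package main

import (
	"fmt"

	common "github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)

func main() {
	schematic := common.Schematic{
		CUE: &common.CUE{
			// Abstraction template evaluated when the component is rendered.
			Template: "output: {apiVersion: \"apps/v1\", kind: \"Deployment\"}\nparameter: {image: string}",
		},
	}
	fmt.Println("schematic uses CUE:", schematic.CUE != nil)
}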
@@ -136,9 +186,6 @@ type Status struct {
// HealthPolicy defines the health check policy for the abstraction
// +optional
HealthPolicy string `json:"healthPolicy,omitempty"`
// Details stores a string representation of a CUE status map to be evaluated at runtime for display
// +optional
Details string `json:"details,omitempty"`
}
// ApplicationPhase is a label for the condition of an application at the current time
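As a small illustration of the HealthPolicy field in the Status struct above (the CUE expression follows the common isHealth convention and is an example, not taken from this diff):

package main

import (
	"fmt"

	common "github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)

func main() {
	st := common.Status{
		// CUE health check evaluated against the rendered workload;
		// a boolean isHealth field decides whether the component is healthy.
		HealthPolicy: `isHealth: context.output.status.readyReplicas == context.output.spec.replicas`,
	}
	fmt.Println("health policy set:", st.HealthPolicy != "")
}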
@@ -167,6 +214,26 @@ const (
ApplicationDeleting ApplicationPhase = "deleting"
)
// WorkflowState is a string that mark the workflow state
type WorkflowState string
const (
// WorkflowStateInitializing means the workflow is in initial state
WorkflowStateInitializing WorkflowState = "Initializing"
// WorkflowStateTerminated means workflow is terminated manually, and it won't be started unless the spec changed.
WorkflowStateTerminated WorkflowState = "Terminated"
// WorkflowStateSuspended means workflow is suspended manually, and it can be resumed.
WorkflowStateSuspended WorkflowState = "Suspended"
// WorkflowStateSucceeded means workflow is running successfully, all steps finished.
WorkflowStateSucceeded WorkflowState = "Succeeded"
// WorkflowStateFinished means workflow is end.
WorkflowStateFinished WorkflowState = "Finished"
// WorkflowStateExecuting means workflow is still running or waiting some steps.
WorkflowStateExecuting WorkflowState = "Executing"
// WorkflowStateSkipping means it will skip this reconcile and let next reconcile to handle it.
WorkflowStateSkipping WorkflowState = "Skipping"
)
// ApplicationComponentStatus record the health status of App component
type ApplicationComponentStatus struct {
Name string `json:"name"`
@@ -174,15 +241,11 @@ type ApplicationComponentStatus struct {
Cluster string `json:"cluster,omitempty"`
Env string `json:"env,omitempty"`
// WorkloadDefinition is the definition of a WorkloadDefinition, such as deployments/apps.v1
WorkloadDefinition WorkloadGVK `json:"workloadDefinition,omitempty"`
Healthy bool `json:"healthy"`
// WorkloadHealthy indicates the workload health without considering trait health.
// +optional
WorkloadHealthy bool `json:"workloadHealthy,omitempty"`
Details map[string]string `json:"details,omitempty"`
Message string `json:"message,omitempty"`
Traits []ApplicationTraitStatus `json:"traits,omitempty"`
Scopes []corev1.ObjectReference `json:"scopes,omitempty"`
WorkloadDefinition WorkloadGVK `json:"workloadDefinition,omitempty"`
Healthy bool `json:"healthy"`
Message string `json:"message,omitempty"`
Traits []ApplicationTraitStatus `json:"traits,omitempty"`
Scopes []corev1.ObjectReference `json:"scopes,omitempty"`
}
// Equal check if two ApplicationComponentStatus are equal
@@ -193,11 +256,9 @@ func (in ApplicationComponentStatus) Equal(r ApplicationComponentStatus) bool {
// ApplicationTraitStatus records the trait health status
type ApplicationTraitStatus struct {
Type string `json:"type"`
Healthy bool `json:"healthy"`
Pending bool `json:"pending,omitempty"`
Details map[string]string `json:"details,omitempty"`
Message string `json:"message,omitempty"`
Type string `json:"type"`
Healthy bool `json:"healthy"`
Message string `json:"message,omitempty"`
}
// Revision has name and revision number
@@ -209,6 +270,13 @@ type Revision struct {
RevisionHash string `json:"revisionHash,omitempty"`
}
// RawComponent record raw component
type RawComponent struct {
// +kubebuilder:validation:EmbeddedResource
// +kubebuilder:pruning:PreserveUnknownFields
Raw runtime.RawExtension `json:"raw"`
}
// AppStatus defines the observed state of Application
type AppStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
@@ -230,12 +298,6 @@ type AppStatus struct {
// Workflow record the status of workflow
Workflow *WorkflowStatus `json:"workflow,omitempty"`
// WorkflowRestartScheduledAt schedules a workflow restart at the specified time.
// This field is automatically set when the app.oam.dev/restart-workflow annotation is present,
// and is cleared after the restart is triggered. Use RFC3339 format or set to current time for immediate restart.
// +optional
WorkflowRestartScheduledAt *metav1.Time `json:"workflowRestartScheduledAt,omitempty"`
// LatestRevision of the application configuration it generates
// +optional
LatestRevision *Revision `json:"latestRevision,omitempty"`
@@ -296,6 +358,19 @@ const (
WorkflowStepType DefinitionType = "WorkflowStep"
)
// AppRolloutStatus defines the observed state of AppRollout
type AppRolloutStatus struct {
v1alpha1.RolloutStatus `json:",inline"`
// LastUpgradedTargetAppRevision contains the name of the app that we upgraded to
// We will restart the rollout if this is not the same as the spec
LastUpgradedTargetAppRevision string `json:"lastTargetAppRevision"`
// LastSourceAppRevision contains the name of the app that we need to upgrade from.
// We will restart the rollout if this is not the same as the spec
LastSourceAppRevision string `json:"LastSourceAppRevision,omitempty"`
}
// ApplicationTrait defines the trait of application
type ApplicationTrait struct {
Type string `json:"type"`
@@ -312,9 +387,9 @@ type ApplicationComponent struct {
// +kubebuilder:pruning:PreserveUnknownFields
Properties *runtime.RawExtension `json:"properties,omitempty"`
DependsOn []string `json:"dependsOn,omitempty"`
Inputs wfTypesv1alpha1.StepInputs `json:"inputs,omitempty"`
Outputs wfTypesv1alpha1.StepOutputs `json:"outputs,omitempty"`
DependsOn []string `json:"dependsOn,omitempty"`
Inputs workflowv1alpha1.StepInputs `json:"inputs,omitempty"`
Outputs workflowv1alpha1.StepOutputs `json:"outputs,omitempty"`
// Traits define the trait of one component, the type must be array to keep the order.
Traits []ApplicationTrait `json:"traits,omitempty"`
@@ -339,22 +414,41 @@ type ClusterSelector struct {
Labels map[string]string `json:"labels,omitempty"`
}
// Distribution defines the replica distribution of an AppRevision to a cluster.
type Distribution struct {
// Replicas is the replica number.
Replicas int `json:"replicas,omitempty"`
}
// ClusterPlacement defines the cluster placement rules for an app revision.
type ClusterPlacement struct {
// ClusterSelector selects the cluster to deploy apps to.
// If not specified, it indicates the host cluster per se.
ClusterSelector *ClusterSelector `json:"clusterSelector,omitempty"`
// Distribution defines the replica distribution of an AppRevision to a cluster.
Distribution Distribution `json:"distribution,omitempty"`
}
const (
// PolicyResourceCreator create the policy resource.
PolicyResourceCreator string = "policy"
// WorkflowResourceCreator create the resource in workflow.
WorkflowResourceCreator string = "workflow"
// DebugResourceCreator create the debug resource.
DebugResourceCreator string = "debug"
)
// OAMObjectReference defines the object reference for an oam resource
type OAMObjectReference struct {
Component string `json:"component,omitempty"`
Trait string `json:"trait,omitempty"`
Env string `json:"env,omitempty"`
}
// Equal check if two references are equal
func (in OAMObjectReference) Equal(r OAMObjectReference) bool {
return in.Component == r.Component && in.Trait == r.Trait
return in.Component == r.Component && in.Trait == r.Trait && in.Env == r.Env
}
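A brief sketch of how the placement types above compose (label and replica values are illustrative):

package main

import (
	"fmt"

	common "github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)

func main() {
	placement := common.ClusterPlacement{
		// A nil selector means the host cluster; here clusters are selected by label.
		ClusterSelector: &common.ClusterSelector{
			Labels: map[string]string{"region": "hangzhou"},
		},
		// Number of AppRevision replicas distributed to the selected clusters.
		Distribution: common.Distribution{Replicas: 3},
	}
	fmt.Printf("placement targets clusters with labels %v, replicas=%d\n",
		placement.ClusterSelector.Labels, placement.Distribution.Replicas)
}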
// AddLabelsToObject add labels to object if properties are not empty
@@ -369,6 +463,9 @@ func (in OAMObjectReference) AddLabelsToObject(obj client.Object) {
if in.Trait != "" {
labels[oam.TraitTypeLabel] = in.Trait
}
if in.Env != "" {
labels[oam.LabelAppEnv] = in.Env
}
obj.SetLabels(labels)
}
@@ -378,6 +475,7 @@ func NewOAMObjectReferenceFromObject(obj client.Object) OAMObjectReference {
return OAMObjectReference{
Component: labels[oam.LabelAppComponent],
Trait: labels[oam.TraitTypeLabel],
Env: labels[oam.LabelAppEnv],
}
}
return OAMObjectReference{}
@@ -435,6 +533,8 @@ const (
RenderCondition
// WorkflowCondition indicates whether workflow processing is successful.
WorkflowCondition
// RolloutCondition indicates whether rollout processing is successful.
RolloutCondition
// ReadyCondition indicates whether whole application processing is successful.
ReadyCondition
)
@@ -445,6 +545,7 @@ var conditions = map[ApplicationConditionType]string{
PolicyCondition: "Policy",
RenderCondition: "Render",
WorkflowCondition: "Workflow",
RolloutCondition: "Rollout",
ReadyCondition: "Ready",
}


@@ -29,12 +29,13 @@ func TestOAMObjectReference(t *testing.T) {
o1 := OAMObjectReference{
Component: "component",
Trait: "trait",
Env: "env",
}
obj := &unstructured.Unstructured{}
o2 := NewOAMObjectReferenceFromObject(obj)
r.False(o2.Equal(o1))
o1.AddLabelsToObject(obj)
r.Equal(2, len(obj.GetLabels()))
r.Equal(3, len(obj.GetLabels()))
o3 := NewOAMObjectReferenceFromObject(obj)
r.True(o1.Equal(o3))
o3.Component = "comp"


@@ -1,7 +1,8 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2023 The KubeVela Authors.
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -21,13 +22,28 @@ limitations under the License.
package common
import (
oamv1alpha1 "github.com/kubevela/pkg/apis/oam/v1alpha1"
"github.com/kubevela/workflow/api/v1alpha1"
crossplane_runtime "github.com/oam-dev/terraform-controller/api/types/crossplane-runtime"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AppRolloutStatus) DeepCopyInto(out *AppRolloutStatus) {
*out = *in
in.RolloutStatus.DeepCopyInto(&out.RolloutStatus)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AppRolloutStatus.
func (in *AppRolloutStatus) DeepCopy() *AppRolloutStatus {
if in == nil {
return nil
}
out := new(AppRolloutStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AppStatus) DeepCopyInto(out *AppStatus) {
*out = *in
@@ -49,10 +65,6 @@ func (in *AppStatus) DeepCopyInto(out *AppStatus) {
*out = new(WorkflowStatus)
(*in).DeepCopyInto(*out)
}
if in.WorkflowRestartScheduledAt != nil {
in, out := &in.WorkflowRestartScheduledAt, &out.WorkflowRestartScheduledAt
*out = (*in).DeepCopy()
}
if in.LatestRevision != nil {
in, out := &in.LatestRevision, &out.LatestRevision
*out = new(Revision)
@@ -97,12 +109,12 @@ func (in *ApplicationComponent) DeepCopyInto(out *ApplicationComponent) {
}
if in.Inputs != nil {
in, out := &in.Inputs, &out.Inputs
*out = make(oamv1alpha1.StepInputs, len(*in))
*out = make(v1alpha1.StepInputs, len(*in))
copy(*out, *in)
}
if in.Outputs != nil {
in, out := &in.Outputs, &out.Outputs
*out = make(oamv1alpha1.StepOutputs, len(*in))
*out = make(v1alpha1.StepOutputs, len(*in))
copy(*out, *in)
}
if in.Traits != nil {
@@ -135,19 +147,10 @@ func (in *ApplicationComponent) DeepCopy() *ApplicationComponent {
func (in *ApplicationComponentStatus) DeepCopyInto(out *ApplicationComponentStatus) {
*out = *in
out.WorkloadDefinition = in.WorkloadDefinition
if in.Details != nil {
in, out := &in.Details, &out.Details
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Traits != nil {
in, out := &in.Traits, &out.Traits
*out = make([]ApplicationTraitStatus, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
copy(*out, *in)
}
if in.Scopes != nil {
in, out := &in.Scopes, &out.Scopes
@@ -189,13 +192,6 @@ func (in *ApplicationTrait) DeepCopy() *ApplicationTrait {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ApplicationTraitStatus) DeepCopyInto(out *ApplicationTraitStatus) {
*out = *in
if in.Details != nil {
in, out := &in.Details, &out.Details
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ApplicationTraitStatus.
@@ -261,6 +257,27 @@ func (in *ClusterObjectReference) DeepCopy() *ClusterObjectReference {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterPlacement) DeepCopyInto(out *ClusterPlacement) {
*out = *in
if in.ClusterSelector != nil {
in, out := &in.ClusterSelector, &out.ClusterSelector
*out = new(ClusterSelector)
(*in).DeepCopyInto(*out)
}
out.Distribution = in.Distribution
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterPlacement.
func (in *ClusterPlacement) DeepCopy() *ClusterPlacement {
if in == nil {
return nil
}
out := new(ClusterPlacement)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterSelector) DeepCopyInto(out *ClusterSelector) {
*out = *in
@@ -298,6 +315,91 @@ func (in *DefinitionReference) DeepCopy() *DefinitionReference {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Distribution) DeepCopyInto(out *Distribution) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Distribution.
func (in *Distribution) DeepCopy() *Distribution {
if in == nil {
return nil
}
out := new(Distribution)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Helm) DeepCopyInto(out *Helm) {
*out = *in
in.Release.DeepCopyInto(&out.Release)
in.Repository.DeepCopyInto(&out.Repository)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Helm.
func (in *Helm) DeepCopy() *Helm {
if in == nil {
return nil
}
out := new(Helm)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Kube) DeepCopyInto(out *Kube) {
*out = *in
in.Template.DeepCopyInto(&out.Template)
if in.Parameters != nil {
in, out := &in.Parameters, &out.Parameters
*out = make([]KubeParameter, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Kube.
func (in *Kube) DeepCopy() *Kube {
if in == nil {
return nil
}
out := new(Kube)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeParameter) DeepCopyInto(out *KubeParameter) {
*out = *in
if in.FieldPaths != nil {
in, out := &in.FieldPaths, &out.FieldPaths
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Required != nil {
in, out := &in.Required, &out.Required
*out = new(bool)
**out = **in
}
if in.Description != nil {
in, out := &in.Description, &out.Description
*out = new(string)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeParameter.
func (in *KubeParameter) DeepCopy() *KubeParameter {
if in == nil {
return nil
}
out := new(KubeParameter)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *OAMObjectReference) DeepCopyInto(out *OAMObjectReference) {
*out = *in
@@ -333,6 +435,22 @@ func (in *PolicyStatus) DeepCopy() *PolicyStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RawComponent) DeepCopyInto(out *RawComponent) {
*out = *in
in.Raw.DeepCopyInto(&out.Raw)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RawComponent.
func (in *RawComponent) DeepCopy() *RawComponent {
if in == nil {
return nil
}
out := new(RawComponent)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RawExtensionPointer) DeepCopyInto(out *RawExtensionPointer) {
*out = *in
@@ -409,11 +527,21 @@ func (in *Revision) DeepCopy() *Revision {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Schematic) DeepCopyInto(out *Schematic) {
*out = *in
if in.KUBE != nil {
in, out := &in.KUBE, &out.KUBE
*out = new(Kube)
(*in).DeepCopyInto(*out)
}
if in.CUE != nil {
in, out := &in.CUE, &out.CUE
*out = new(CUE)
**out = **in
}
if in.HELM != nil {
in, out := &in.HELM, &out.HELM
*out = new(Helm)
(*in).DeepCopyInto(*out)
}
if in.Terraform != nil {
in, out := &in.Terraform, &out.Terraform
*out = new(Terraform)

View File

@@ -1,7 +1,8 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2023 The KubeVela Authors.
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -21,12 +21,13 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
)
func init() {
// Register the types with the Scheme so the resources can map objects to GroupVersionKinds and back
AddToSchemes = append(AddToSchemes, v1alpha1.SchemeBuilder.AddToScheme, v1beta1.SchemeBuilder.AddToScheme)
AddToSchemes = append(AddToSchemes, v1alpha1.SchemeBuilder.AddToScheme, v1alpha2.SchemeBuilder.AddToScheme, v1beta1.SchemeBuilder.AddToScheme)
}
// AddToSchemes may be used to add all resources defined in the project to a Scheme

View File

@@ -17,9 +17,7 @@ limitations under the License.
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
@@ -29,17 +27,10 @@ const (
// GarbageCollectPolicySpec defines the spec of the garbage-collect policy
type GarbageCollectPolicySpec struct {
// ApplicationRevisionLimit, if set, overrides the global configuration for the number of application revisions this application keeps
ApplicationRevisionLimit *int `json:"applicationRevisionLimit,omitempty"`
// KeepLegacyResource, if set, prevents outdated versioned ResourceTrackers from being recycled automatically;
// outdated resources are kept until the ResourceTracker is deleted manually
KeepLegacyResource bool `json:"keepLegacyResource,omitempty"`
// ContinueOnFailure, if set, lets gc continue to execute when the workflow fails; by default gc runs only after the workflow succeeds
ContinueOnFailure bool `json:"continueOnFailure,omitempty"`
// Order defines the order of garbage collect
Order GarbageCollectOrder `json:"order,omitempty"`
@@ -58,9 +49,8 @@ const (
// GarbageCollectPolicyRule defines a single garbage-collect policy rule
type GarbageCollectPolicyRule struct {
Selector ResourcePolicyRuleSelector `json:"selector"`
Strategy GarbageCollectStrategy `json:"strategy"`
Propagation *GarbageCollectPropagation `json:"propagation"`
Selector ResourcePolicyRuleSelector `json:"selector"`
Strategy GarbageCollectStrategy `json:"strategy"`
}
// GarbageCollectStrategy the strategy for target resource to recycle
@@ -76,16 +66,6 @@ const (
GarbageCollectStrategyOnAppUpdate GarbageCollectStrategy = "onAppUpdate"
)
// GarbageCollectPropagation the deletion propagation setting similar to metav1.DeletionPropagation
type GarbageCollectPropagation string
const (
// GarbageCollectPropagationOrphan orphans child resources while deleting target resources
GarbageCollectPropagationOrphan = "orphan"
// GarbageCollectPropagationCascading deletes child resources in the background while deleting target resources
GarbageCollectPropagationCascading = "cascading"
)
// Type the type name of the policy
func (in *GarbageCollectPolicySpec) Type() string {
return GarbageCollectPolicyType
@@ -100,18 +80,3 @@ func (in *GarbageCollectPolicySpec) FindStrategy(manifest *unstructured.Unstruct
}
return nil
}
// FindDeleteOption finds the delete option for the target resource
func (in *GarbageCollectPolicySpec) FindDeleteOption(manifest *unstructured.Unstructured) (bool, []client.DeleteOption) {
for _, rule := range in.Rules {
if rule.Selector.Match(manifest) && rule.Propagation != nil {
switch *rule.Propagation {
case GarbageCollectPropagationOrphan:
return true, []client.DeleteOption{client.PropagationPolicy(metav1.DeletePropagationOrphan)}
case GarbageCollectPropagationCascading:
return false, []client.DeleteOption{client.PropagationPolicy(metav1.DeletePropagationBackground)}
}
}
}
return false, nil
}
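
A minimal, hypothetical sketch of how a caller might consult this policy, assuming FindStrategy returns a *GarbageCollectStrategy for the first matching rule and that ResourcePolicyRuleSelector matches components via the oam.LabelAppComponent label; the component name and policy values are illustrative only:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha1"
	"github.com/oam-dev/kubevela/pkg/oam"
)

func main() {
	// Hypothetical policy: keep legacy resources in general, but recycle anything
	// owned by component "frontend" whenever the application is updated.
	spec := v1alpha1.GarbageCollectPolicySpec{
		KeepLegacyResource: true,
		Rules: []v1alpha1.GarbageCollectPolicyRule{{
			Selector: v1alpha1.ResourcePolicyRuleSelector{CompNames: []string{"frontend"}},
			Strategy: v1alpha1.GarbageCollectStrategyOnAppUpdate,
		}},
	}

	// A manifest rendered for that component; the selector is assumed to match
	// on the component label, as in ResourcePolicyRuleSelector.Match.
	manifest := &unstructured.Unstructured{}
	manifest.SetLabels(map[string]string{oam.LabelAppComponent: "frontend"})

	if s := spec.FindStrategy(manifest); s != nil {
		fmt.Println("gc strategy for manifest:", *s) // onAppUpdate
	}
}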

View File

@@ -21,7 +21,7 @@ import (
k8sscheme "k8s.io/client-go/kubernetes/scheme"
"sigs.k8s.io/controller-runtime/pkg/scheme"
wfTypesv1alpha1 "github.com/kubevela/pkg/apis/oam/v1alpha1"
workflowv1alpha1 "github.com/kubevela/workflow/api/v1alpha1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)
@@ -57,11 +57,6 @@ var (
func init() {
SchemeBuilder.Register(&Policy{}, &PolicyList{})
SchemeBuilder.Register(&wfTypesv1alpha1.Workflow{}, &wfTypesv1alpha1.WorkflowList{})
SchemeBuilder.Register(&workflowv1alpha1.Workflow{}, &workflowv1alpha1.WorkflowList{})
_ = SchemeBuilder.AddToScheme(k8sscheme.Scheme)
}
// Resource takes an unqualified resource and returns a Group qualified GroupResource
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
}

View File

@@ -18,7 +18,7 @@ package v1alpha1
import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/utils/ptr"
"k8s.io/utils/pointer"
stringslices "k8s.io/utils/strings/slices"
"github.com/oam-dev/kubevela/pkg/oam"
@@ -52,7 +52,7 @@ func (in *ResourcePolicyRuleSelector) Match(manifest *unstructured.Unstructured)
if len(src) == 0 {
return nil
}
return ptr.To(val != "" && stringslices.Contains(src, val))
return pointer.Bool(val != "" && stringslices.Contains(src, val))
}
conditions := []*bool{
match(in.CompNames, compName),

View File

@@ -1,70 +0,0 @@
/*
Copyright 2023 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
const (
// ResourceUpdatePolicyType refers to the type of resource-update policy
ResourceUpdatePolicyType = "resource-update"
)
// ResourceUpdatePolicySpec defines the spec of resource-update policy
type ResourceUpdatePolicySpec struct {
Rules []ResourceUpdatePolicyRule `json:"rules"`
}
// Type the type name of the policy
func (in *ResourceUpdatePolicySpec) Type() string {
return ResourceUpdatePolicyType
}
// ResourceUpdatePolicyRule defines the rule for resource-update resources
type ResourceUpdatePolicyRule struct {
// Selector picks which resources should be affected
Selector ResourcePolicyRuleSelector `json:"selector"`
// Strategy the strategy for updating resources
Strategy ResourceUpdateStrategy `json:"strategy,omitempty"`
}
// ResourceUpdateStrategy the update strategy for resource
type ResourceUpdateStrategy struct {
// Op the update op for selected resources
Op ResourceUpdateOp `json:"op,omitempty"`
// RecreateFields the field path which will trigger recreate if changed
RecreateFields []string `json:"recreateFields,omitempty"`
}
// ResourceUpdateOp update op for resource
type ResourceUpdateOp string
const (
// ResourceUpdateStrategyPatch patches the target resource (three-way patch)
ResourceUpdateStrategyPatch ResourceUpdateOp = "patch"
// ResourceUpdateStrategyReplace replaces the target resource
ResourceUpdateStrategyReplace ResourceUpdateOp = "replace"
)
// FindStrategy returns the update strategy for the target resource, or nil if no rule matches
func (in *ResourceUpdatePolicySpec) FindStrategy(manifest *unstructured.Unstructured) *ResourceUpdateStrategy {
for _, rule := range in.Rules {
if rule.Selector.Match(manifest) {
return &rule.Strategy
}
}
return nil
}
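
Where this policy is present, usage follows the same pattern as the other resource policies; a minimal sketch with hypothetical names, again assuming the selector matches components via the oam.LabelAppComponent label:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha1"
	"github.com/oam-dev/kubevela/pkg/oam"
)

func main() {
	// Hypothetical rule: resources of component "frontend" are replaced rather than
	// three-way patched, and a change to spec.serviceName forces a recreate.
	spec := v1alpha1.ResourceUpdatePolicySpec{
		Rules: []v1alpha1.ResourceUpdatePolicyRule{{
			Selector: v1alpha1.ResourcePolicyRuleSelector{CompNames: []string{"frontend"}},
			Strategy: v1alpha1.ResourceUpdateStrategy{
				Op:             v1alpha1.ResourceUpdateStrategyReplace,
				RecreateFields: []string{"spec.serviceName"},
			},
		}},
	}

	manifest := &unstructured.Unstructured{}
	manifest.SetLabels(map[string]string{oam.LabelAppComponent: "frontend"})

	if s := spec.FindStrategy(manifest); s != nil {
		fmt.Println("update op:", s.Op, "recreate on:", s.RecreateFields)
	}
}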

View File

@@ -1,7 +1,8 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2023 The KubeVela Authors.
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -313,11 +314,6 @@ func (in *EnvTraitPatch) DeepCopy() *EnvTraitPatch {
func (in *GarbageCollectPolicyRule) DeepCopyInto(out *GarbageCollectPolicyRule) {
*out = *in
in.Selector.DeepCopyInto(&out.Selector)
if in.Propagation != nil {
in, out := &in.Propagation, &out.Propagation
*out = new(GarbageCollectPropagation)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GarbageCollectPolicyRule.
@@ -333,11 +329,6 @@ func (in *GarbageCollectPolicyRule) DeepCopy() *GarbageCollectPolicyRule {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GarbageCollectPolicySpec) DeepCopyInto(out *GarbageCollectPolicySpec) {
*out = *in
if in.ApplicationRevisionLimit != nil {
in, out := &in.ApplicationRevisionLimit, &out.ApplicationRevisionLimit
*out = new(int)
**out = **in
}
if in.Rules != nil {
in, out := &in.Rules, &out.Rules
*out = make([]GarbageCollectPolicyRule, len(*in))
@@ -729,65 +720,6 @@ func (in *ResourcePolicyRuleSelector) DeepCopy() *ResourcePolicyRuleSelector {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceUpdatePolicyRule) DeepCopyInto(out *ResourceUpdatePolicyRule) {
*out = *in
in.Selector.DeepCopyInto(&out.Selector)
in.Strategy.DeepCopyInto(&out.Strategy)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceUpdatePolicyRule.
func (in *ResourceUpdatePolicyRule) DeepCopy() *ResourceUpdatePolicyRule {
if in == nil {
return nil
}
out := new(ResourceUpdatePolicyRule)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceUpdatePolicySpec) DeepCopyInto(out *ResourceUpdatePolicySpec) {
*out = *in
if in.Rules != nil {
in, out := &in.Rules, &out.Rules
*out = make([]ResourceUpdatePolicyRule, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceUpdatePolicySpec.
func (in *ResourceUpdatePolicySpec) DeepCopy() *ResourceUpdatePolicySpec {
if in == nil {
return nil
}
out := new(ResourceUpdatePolicySpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceUpdateStrategy) DeepCopyInto(out *ResourceUpdateStrategy) {
*out = *in
if in.RecreateFields != nil {
in, out := &in.RecreateFields, &out.RecreateFields
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceUpdateStrategy.
func (in *ResourceUpdateStrategy) DeepCopy() *ResourceUpdateStrategy {
if in == nil {
return nil
}
out := new(ResourceUpdateStrategy)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SharedResourcePolicyRule) DeepCopyInto(out *SharedResourcePolicyRule) {
*out = *in

View File

@@ -0,0 +1,123 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
// AppStatus defines the observed state of Application
type AppStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
v1alpha1.RolloutStatus `json:",inline"`
Phase common.ApplicationPhase `json:"status,omitempty"`
// Components record the related Components created by Application Controller
Components []corev1.ObjectReference `json:"components,omitempty"`
// Services record the status of the application services
Services []common.ApplicationComponentStatus `json:"services,omitempty"`
// ResourceTracker record the status of the ResourceTracker
ResourceTracker *corev1.ObjectReference `json:"resourceTracker,omitempty"`
// LatestRevision of the application configuration it generates
// +optional
LatestRevision *common.Revision `json:"latestRevision,omitempty"`
}
// ApplicationTrait defines the trait of application
type ApplicationTrait struct {
Name string `json:"name"`
// +kubebuilder:pruning:PreserveUnknownFields
Properties *runtime.RawExtension `json:"properties,omitempty"`
}
// ApplicationComponent describes the component of an application
type ApplicationComponent struct {
Name string `json:"name"`
WorkloadType string `json:"type"`
// +kubebuilder:pruning:PreserveUnknownFields
Settings runtime.RawExtension `json:"settings,omitempty"`
// Traits define the traits of one component; the type must be an array to keep the order.
Traits []ApplicationTrait `json:"traits,omitempty"`
// +kubebuilder:pruning:PreserveUnknownFields
// scopes in ApplicationComponent defines the component-level scopes
// the format is <scope-type:scope-instance-name> pairs, the key represents the type of `ScopeDefinition` while the value represents the name of the scope instance.
Scopes map[string]string `json:"scopes,omitempty"`
}
// ApplicationSpec is the spec of Application
type ApplicationSpec struct {
Components []ApplicationComponent `json:"components"`
// TODO(wonderflow): we should have application level scopes supported here
// RolloutPlan is the details on how to rollout the resources
// The controller simply replaces the old resources with the new ones if there is no rollout plan involved
// +optional
RolloutPlan *v1alpha1.RolloutPlan `json:"rolloutPlan,omitempty"`
}
// Application is the Schema for the applications API
// +kubebuilder:object:root=true
// +kubebuilder:resource:categories={oam},shortName={app,velaapp}
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="COMPONENT",type=string,JSONPath=`.spec.components[*].name`
// +kubebuilder:printcolumn:name="TYPE",type=string,JSONPath=`.spec.components[*].type`
// +kubebuilder:printcolumn:name="PHASE",type=string,JSONPath=`.status.status`
// +kubebuilder:printcolumn:name="HEALTHY",type=boolean,JSONPath=`.status.services[*].healthy`
// +kubebuilder:printcolumn:name="STATUS",type=string,JSONPath=`.status.services[*].message`
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"
type Application struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ApplicationSpec `json:"spec,omitempty"`
Status common.AppStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// ApplicationList contains a list of Application
type ApplicationList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Application `json:"items"`
}
// GetComponent gets the component from the application based on its workload type
func (app *Application) GetComponent(workloadType string) *ApplicationComponent {
for _, c := range app.Spec.Components {
if c.WorkloadType == workloadType {
return &c
}
}
return nil
}

View File

@@ -0,0 +1,68 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
"reflect"
"testing"
)
func TestApplicationGetComponent(t *testing.T) {
ac1 := ApplicationComponent{
Name: "ac1",
WorkloadType: "type1",
}
ac2 := ApplicationComponent{
Name: "ac2",
WorkloadType: "type2",
}
tests := map[string]struct {
app *Application
componentName string
want *ApplicationComponent
}{
"test get one": {
app: &Application{
Spec: ApplicationSpec{
Components: []ApplicationComponent{
ac1, ac2,
},
},
},
componentName: ac1.WorkloadType,
want: &ac1,
},
"test get none": {
app: &Application{
Spec: ApplicationSpec{
Components: []ApplicationComponent{
ac2,
},
},
},
componentName: ac1.WorkloadType,
want: nil,
},
}
for name, tt := range tests {
t.Run(name, func(t *testing.T) {
if got := tt.app.GetComponent(tt.componentName); !reflect.DeepEqual(got, tt.want) {
t.Errorf("GetComponent() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -0,0 +1,73 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
// ApplicationRevisionSpec is the spec of ApplicationRevision
type ApplicationRevisionSpec struct {
// Application records the snapshot of the created/modified Application
Application Application `json:"application"`
// ComponentDefinitions records the snapshot of the componentDefinitions related with the created/modified Application
ComponentDefinitions map[string]ComponentDefinition `json:"componentDefinitions,omitempty"`
// WorkloadDefinitions records the snapshot of the workloadDefinitions related with the created/modified Application
WorkloadDefinitions map[string]WorkloadDefinition `json:"workloadDefinitions,omitempty"`
// TraitDefinitions records the snapshot of the traitDefinitions related with the created/modified Application
TraitDefinitions map[string]TraitDefinition `json:"traitDefinitions,omitempty"`
// ScopeDefinitions records the snapshot of the scopeDefinitions related with the created/modified Application
ScopeDefinitions map[string]ScopeDefinition `json:"scopeDefinitions,omitempty"`
// Components records the rendered components from the Application; it contains the whole K8s CR of the workload.
Components []common.RawComponent `json:"components,omitempty"`
// ApplicationConfiguration records the rendered applicationConfiguration from the Application;
// it contains the whole K8s CR of the trait and the referenced component.
// +kubebuilder:validation:EmbeddedResource
// +kubebuilder:pruning:PreserveUnknownFields
ApplicationConfiguration runtime.RawExtension `json:"applicationConfiguration"`
}
// ApplicationRevision is the Schema for the ApplicationRevision API
// +kubebuilder:object:root=true
// +kubebuilder:resource:categories={oam},shortName=apprev
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"
type ApplicationRevision struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ApplicationRevisionSpec `json:"spec,omitempty"`
}
// ApplicationRevisionList contains a list of ApplicationRevision
// +kubebuilder:object:root=true
type ApplicationRevisionList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []ApplicationRevision `json:"items"`
}

View File

@@ -0,0 +1,103 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)
// ComponentDefinitionSpec defines the desired state of ComponentDefinition
type ComponentDefinitionSpec struct {
// Workload is a workload type descriptor
Workload common.WorkloadTypeDescriptor `json:"workload"`
// ChildResourceKinds are the list of GVK of the child resources this workload generates
ChildResourceKinds []common.ChildResourceKind `json:"childResourceKinds,omitempty"`
// RevisionLabel indicates which label for underlying resources(e.g. pods) of this workload
// can be used by trait to create resource selectors(e.g. label selector for pods).
// +optional
RevisionLabel string `json:"revisionLabel,omitempty"`
// PodSpecPath indicates where/if this workload has K8s podSpec field
// if one workload has podSpec, a trait can make lots of assumptions such as port, env, volume fields.
// +optional
PodSpecPath string `json:"podSpecPath,omitempty"`
// Status defines the custom health policy and status message for workload
// +optional
Status *common.Status `json:"status,omitempty"`
// Schematic defines the data format and template of the encapsulation of the workload
// +optional
Schematic *common.Schematic `json:"schematic,omitempty"`
// Extension is used for extension needs by OAM platform builders
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Extension *runtime.RawExtension `json:"extension,omitempty"`
}
// ComponentDefinitionStatus is the status of ComponentDefinition
type ComponentDefinitionStatus struct {
// ConditionedStatus reflects the observed status of a resource
condition.ConditionedStatus `json:",inline"`
// ConfigMapRef refer to a ConfigMap which contains OpenAPI V3 JSON schema of Component parameters.
ConfigMapRef string `json:"configMapRef,omitempty"`
// LatestRevision of the component definition
// +optional
LatestRevision *common.Revision `json:"latestRevision,omitempty"`
}
// +kubebuilder:object:root=true
// ComponentDefinition is the Schema for the componentdefinitions API
// +kubebuilder:resource:scope=Namespaced,categories={oam},shortName=comp
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="WORKLOAD-KIND",type=string,JSONPath=".spec.workload.definition.kind"
// +kubebuilder:printcolumn:name="DESCRIPTION",type=string,JSONPath=".metadata.annotations.definition\\.oam\\.dev/description"
type ComponentDefinition struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ComponentDefinitionSpec `json:"spec,omitempty"`
Status ComponentDefinitionStatus `json:"status,omitempty"`
}
// SetConditions sets conditions for ComponentDefinition
func (cd *ComponentDefinition) SetConditions(c ...condition.Condition) {
cd.Status.SetConditions(c...)
}
// GetCondition gets condition from ComponentDefinition
func (cd *ComponentDefinition) GetCondition(conditionType condition.ConditionType) condition.Condition {
return cd.Status.GetCondition(conditionType)
}
// +kubebuilder:object:root=true
// ComponentDefinitionList contains a list of ComponentDefinition
type ComponentDefinitionList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []ComponentDefinition `json:"items"`
}
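
For orientation, a minimal sketch of a legacy v1alpha2 ComponentDefinition wired to a CUE schematic; it assumes the common.WorkloadTypeDescriptor, common.WorkloadGVK and common.CUE shapes from the common package, and the name and template below are made up:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func main() {
	def := v1alpha2.ComponentDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "worker"},
		Spec: v1alpha2.ComponentDefinitionSpec{
			// Workload declares which CR kind this component renders to.
			Workload: common.WorkloadTypeDescriptor{
				Definition: common.WorkloadGVK{APIVersion: "apps/v1", Kind: "Deployment"},
			},
			// Schematic carries the CUE template that encapsulates the workload.
			Schematic: &common.Schematic{
				CUE: &common.CUE{Template: `output: {apiVersion: "apps/v1", kind: "Deployment"}`},
			},
		},
	}
	fmt.Println(def.Name, def.Spec.Workload.Definition.Kind)
}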

View File

@@ -0,0 +1,139 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
"fmt"
"reflect"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/conversion"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
)
// ApplicationV1alpha2ToV1beta1 will convert v1alpha2 to v1beta1
func ApplicationV1alpha2ToV1beta1(v1a2 *Application, v1b1 *v1beta1.Application) {
// 1) convert metav1.TypeMeta
// apiVersion and Kind automatically converted
// 2) convert metav1.ObjectMeta
v1b1.ObjectMeta = *v1a2.ObjectMeta.DeepCopy()
// 3) convert Spec ApplicationSpec
// 3.1) convert Spec.Components
for _, comp := range v1a2.Spec.Components {
// convert trait, especially for `.name` -> `.type`
var traits = make([]common.ApplicationTrait, len(comp.Traits))
for j, trait := range comp.Traits {
traits[j] = common.ApplicationTrait{
Type: trait.Name,
Properties: trait.Properties.DeepCopy(),
}
}
// deep copy scopes
scopes := make(map[string]string)
for k, v := range comp.Scopes {
scopes[k] = v
}
// convert component
// `.settings` -> `.properties`
v1b1.Spec.Components = append(v1b1.Spec.Components, common.ApplicationComponent{
Name: comp.Name,
Type: comp.WorkloadType,
Properties: comp.Settings.DeepCopy(),
Traits: traits,
Scopes: scopes,
})
}
// 4) convert Status common.AppStatus
v1b1.Status = *v1a2.Status.DeepCopy()
}
// ConvertTo converts this Application to the Hub version (v1beta1 only for now).
func (app *Application) ConvertTo(dst conversion.Hub) error {
switch convertedApp := dst.(type) {
case *v1beta1.Application:
klog.Infof("convert *v1alpha2.Application [%s] to *v1beta1.Application", app.Name)
ApplicationV1alpha2ToV1beta1(app, convertedApp)
return nil
default:
}
return fmt.Errorf("unsupported convertTo object %v", reflect.TypeOf(dst))
}
// ConvertFrom converts from the Hub version (v1beta1) to this version (v1alpha2).
func (app *Application) ConvertFrom(src conversion.Hub) error {
switch sourceApp := src.(type) {
case *v1beta1.Application:
klog.Infof("convert *v1alpha2.Application from *v1beta1.Application [%s]", sourceApp.Name)
// 1) convert metav1.TypeMeta
// apiVersion and Kind automatically converted
// 2) convert metav1.ObjectMeta
app.ObjectMeta = *sourceApp.ObjectMeta.DeepCopy()
// 3) convert Spec ApplicationSpec
// 3.1) convert Spec.Components
for _, comp := range sourceApp.Spec.Components {
// convert trait, especially for `.type` -> `.name`
var traits = make([]ApplicationTrait, len(comp.Traits))
for j, trait := range comp.Traits {
traits[j] = ApplicationTrait{
Name: trait.Type,
Properties: trait.Properties.DeepCopy(),
}
}
// deep copy scopes
scopes := make(map[string]string)
for k, v := range comp.Scopes {
scopes[k] = v
}
// convert component
// `.properties` -> `.settings`
var compProperties runtime.RawExtension
if comp.Properties != nil {
compProperties = *comp.Properties.DeepCopy()
}
app.Spec.Components = append(app.Spec.Components, ApplicationComponent{
Name: comp.Name,
WorkloadType: comp.Type,
Settings: compProperties,
Traits: traits,
Scopes: scopes,
})
}
// 4) convert Status common.AppStatus
app.Status = *sourceApp.Status.DeepCopy()
return nil
default:
}
return fmt.Errorf("unsupported ConvertFrom object %v", reflect.TypeOf(src))
}

View File

@@ -0,0 +1,117 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
"fmt"
"testing"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/stretchr/testify/require"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
)
var app = Application{
Spec: ApplicationSpec{
Components: []ApplicationComponent{{
Name: "test-component",
WorkloadType: "worker",
Traits: []ApplicationTrait{},
Scopes: map[string]string{},
}},
},
}
type errType struct {
}
func (*errType) Hub() {}
func (*errType) DeepCopyObject() runtime.Object {
return nil
}
func (*errType) GetObjectKind() schema.ObjectKind {
return nil
}
func TestApplicationV1alpha2ToV1beta1(t *testing.T) {
r := require.New(t)
expected := &v1beta1.Application{}
ApplicationV1alpha2ToV1beta1(&app, expected)
r.Equal(expected, &v1beta1.Application{
Spec: v1beta1.ApplicationSpec{
Components: []common.ApplicationComponent{{
Name: "test-component",
Type: "worker",
Properties: &runtime.RawExtension{},
Traits: []common.ApplicationTrait{},
Scopes: map[string]string{},
}},
},
})
}
func TestConvertTo(t *testing.T) {
r := require.New(t)
expected := &v1beta1.Application{}
err := app.ConvertTo(expected)
r.NoError(err)
r.Equal(expected, &v1beta1.Application{
Spec: v1beta1.ApplicationSpec{
Components: []common.ApplicationComponent{{
Name: "test-component",
Type: "worker",
Properties: &runtime.RawExtension{},
Traits: []common.ApplicationTrait{},
Scopes: map[string]string{},
}},
},
})
errCase := &errType{}
err = app.ConvertTo(errCase)
r.Equal(err, fmt.Errorf("unsupported convertTo object *v1alpha2.errType"))
}
func TestConvertFrom(t *testing.T) {
r := require.New(t)
to := &Application{}
from := &v1beta1.Application{
Spec: v1beta1.ApplicationSpec{
Components: []common.ApplicationComponent{{
Name: "test-component",
Type: "worker",
Properties: &runtime.RawExtension{},
Traits: []common.ApplicationTrait{},
Scopes: map[string]string{},
}},
},
}
err := to.ConvertFrom(from)
r.NoError(err)
r.Equal(to.Spec, app.Spec)
errCase := &errType{}
err = app.ConvertFrom(errCase)
r.Equal(err, fmt.Errorf("unsupported ConvertFrom object *v1alpha2.errType"))
}

View File

@@ -0,0 +1,146 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
"github.com/oam-dev/kubevela/pkg/oam"
)
// HealthStatus represents health status strings.
type HealthStatus string
const (
// StatusHealthy represents healthy status.
StatusHealthy HealthStatus = "HEALTHY"
// StatusUnhealthy represents unhealthy status.
StatusUnhealthy = "UNHEALTHY"
// StatusUnknown represents unknown status.
StatusUnknown = "UNKNOWN"
)
var _ oam.Scope = &HealthScope{}
// A HealthScopeSpec defines the desired state of a HealthScope.
type HealthScopeSpec struct {
// ProbeTimeout is the amount of time in seconds to wait for a response before marking the probe as failed.
ProbeTimeout *int32 `json:"probe-timeout,omitempty"`
// ProbeInterval is the amount of time in seconds between probing tries.
ProbeInterval *int32 `json:"probe-interval,omitempty"`
// AppRefs records references of applications' components
AppRefs []AppReference `json:"appReferences,omitempty"`
// WorkloadReferences to the workloads that are in this scope.
// +deprecated
WorkloadReferences []corev1.ObjectReference `json:"workloadRefs"`
}
// AppReference records references of an application's components
type AppReference struct {
AppName string `json:"appName,omitempty"`
CompReferences []CompReference `json:"compReferences,omitempty"`
}
// CompReference records references of a component's resources
type CompReference struct {
CompName string `json:"compName,omitempty"`
Workload corev1.ObjectReference `json:"workload,omitempty"`
Traits []corev1.ObjectReference `json:"traits,omitempty"`
}
// A HealthScopeStatus represents the observed state of a HealthScope.
type HealthScopeStatus struct {
condition.ConditionedStatus `json:",inline"`
// ScopeHealthCondition represents health condition summary of the scope
ScopeHealthCondition ScopeHealthCondition `json:"scopeHealthCondition"`
// AppHealthConditions represents health condition of applications in the scope
AppHealthConditions []*AppHealthCondition `json:"appHealthConditions,omitempty"`
// WorkloadHealthConditions represents health condition of workloads in the scope
// Use AppHealthConditions to provide app level status
// +deprecated
WorkloadHealthConditions []*WorkloadHealthCondition `json:"healthConditions,omitempty"`
}
// AppHealthCondition represents health condition of an application
type AppHealthCondition struct {
AppName string `json:"appName"`
EnvName string `json:"envName,omitempty"`
Components []*WorkloadHealthCondition `json:"components,omitempty"`
}
// ScopeHealthCondition represents health condition summary of a scope.
type ScopeHealthCondition struct {
HealthStatus HealthStatus `json:"healthStatus"`
Total int64 `json:"total,omitempty"`
HealthyWorkloads int64 `json:"healthyWorkloads,omitempty"`
UnhealthyWorkloads int64 `json:"unhealthyWorkloads,omitempty"`
UnknownWorkloads int64 `json:"unknownWorkloads,omitempty"`
}
// WorkloadHealthCondition represents informative health condition of a workload.
type WorkloadHealthCondition struct {
// ComponentName represents the component name if target is a workload
ComponentName string `json:"componentName,omitempty"`
TargetWorkload corev1.ObjectReference `json:"targetWorkload,omitempty"`
HealthStatus HealthStatus `json:"healthStatus"`
Diagnosis string `json:"diagnosis,omitempty"`
// WorkloadStatus represents status of workloads whose HealthStatus is UNKNOWN.
WorkloadStatus string `json:"workloadStatus,omitempty"`
CustomStatusMsg string `json:"customStatusMsg,omitempty"`
Traits []*TraitHealthCondition `json:"traits,omitempty"`
}
// TraitHealthCondition represents informative health condition of a trait.
type TraitHealthCondition struct {
Type string `json:"type"`
Resource string `json:"resource"`
HealthStatus HealthStatus `json:"healthStatus"`
Diagnosis string `json:"diagnosis,omitempty"`
CustomStatusMsg string `json:"customStatusMsg,omitempty"`
}
// +kubebuilder:object:root=true
// A HealthScope determines an aggregate health status based on the health of components.
// +kubebuilder:resource:categories={oam}
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".status.health",name=HEALTH,type=string
type HealthScope struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec HealthScopeSpec `json:"spec,omitempty"`
Status HealthScopeStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// HealthScopeList contains a list of HealthScope.
type HealthScopeList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []HealthScope `json:"items"`
}
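
To make the relationship between the per-workload and scope-level types concrete, a small illustrative tally (not the controller's actual logic; the component names are made up):

package main

import (
	"fmt"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

// summarize folds per-workload health into the scope-level summary, mirroring
// how WorkloadHealthCondition feeds ScopeHealthCondition above.
func summarize(workloads []*v1alpha2.WorkloadHealthCondition) v1alpha2.ScopeHealthCondition {
	sum := v1alpha2.ScopeHealthCondition{HealthStatus: v1alpha2.StatusHealthy}
	for _, wc := range workloads {
		sum.Total++
		switch wc.HealthStatus {
		case v1alpha2.StatusHealthy:
			sum.HealthyWorkloads++
		case v1alpha2.StatusUnhealthy:
			sum.UnhealthyWorkloads++
		default:
			sum.UnknownWorkloads++
		}
	}
	if sum.UnhealthyWorkloads > 0 || sum.UnknownWorkloads > 0 {
		sum.HealthStatus = v1alpha2.StatusUnhealthy
	}
	return sum
}

func main() {
	sum := summarize([]*v1alpha2.WorkloadHealthCondition{
		{ComponentName: "frontend", HealthStatus: v1alpha2.StatusHealthy},
		{ComponentName: "backend", HealthStatus: v1alpha2.StatusUnknown},
	})
	fmt.Printf("%s: %d/%d healthy\n", sum.HealthStatus, sum.HealthyWorkloads, sum.Total)
}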

View File

@@ -0,0 +1,673 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/types"
)
// A WorkloadDefinitionSpec defines the desired state of a WorkloadDefinition.
type WorkloadDefinitionSpec struct {
// Reference to the CustomResourceDefinition that defines this workload kind.
Reference common.DefinitionReference `json:"definitionRef"`
// ChildResourceKinds are the list of GVK of the child resources this workload generates
ChildResourceKinds []common.ChildResourceKind `json:"childResourceKinds,omitempty"`
// RevisionLabel indicates which label for underlying resources(e.g. pods) of this workload
// can be used by trait to create resource selectors(e.g. label selector for pods).
// +optional
RevisionLabel string `json:"revisionLabel,omitempty"`
// PodSpecPath indicates where/if this workload has K8s podSpec field
// if one workload has podSpec, a trait can make lots of assumptions such as port, env, volume fields.
// +optional
PodSpecPath string `json:"podSpecPath,omitempty"`
// Status defines the custom health policy and status message for workload
// +optional
Status *common.Status `json:"status,omitempty"`
// Schematic defines the data format and template of the encapsulation of the workload
// +optional
Schematic *common.Schematic `json:"schematic,omitempty"`
// Extension is used for extension needs by OAM platform builders
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Extension *runtime.RawExtension `json:"extension,omitempty"`
}
// WorkloadDefinitionStatus is the status of WorkloadDefinition
type WorkloadDefinitionStatus struct {
condition.ConditionedStatus `json:",inline"`
}
// +kubebuilder:object:root=true
// A WorkloadDefinition registers a kind of Kubernetes custom resource as a
// valid OAM workload kind by referencing its CustomResourceDefinition. The CRD
// is used to validate the schema of the workload when it is embedded in an OAM
// Component.
// +kubebuilder:resource:scope=Namespaced,categories={oam},shortName=workload
// +kubebuilder:printcolumn:name="DEFINITION-NAME",type=string,JSONPath=".spec.definitionRef.name"
type WorkloadDefinition struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec WorkloadDefinitionSpec `json:"spec,omitempty"`
Status WorkloadDefinitionStatus `json:"status,omitempty"`
}
// SetConditions set condition for WorkloadDefinition
func (wd *WorkloadDefinition) SetConditions(c ...condition.Condition) {
wd.Status.SetConditions(c...)
}
// GetCondition gets condition from WorkloadDefinition
func (wd *WorkloadDefinition) GetCondition(conditionType condition.ConditionType) condition.Condition {
return wd.Status.GetCondition(conditionType)
}
// +kubebuilder:object:root=true
// WorkloadDefinitionList contains a list of WorkloadDefinition.
type WorkloadDefinitionList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []WorkloadDefinition `json:"items"`
}
// A TraitDefinitionSpec defines the desired state of a TraitDefinition.
type TraitDefinitionSpec struct {
// Reference to the CustomResourceDefinition that defines this trait kind.
Reference common.DefinitionReference `json:"definitionRef,omitempty"`
// Revision indicates whether a trait is aware of component revision
// +optional
RevisionEnabled bool `json:"revisionEnabled,omitempty"`
// WorkloadRefPath indicates where/if a trait accepts a workloadRef object
// +optional
WorkloadRefPath string `json:"workloadRefPath,omitempty"`
// PodDisruptive specifies whether using the trait will cause the pod to restart or not.
// +optional
PodDisruptive bool `json:"podDisruptive,omitempty"`
// AppliesToWorkloads specifies the list of workload kinds this trait
// applies to. Workload kinds are specified in kind.group/version format,
// e.g. server.core.oam.dev/v1alpha2. Traits that omit this field apply to
// all workload kinds.
// +optional
AppliesToWorkloads []string `json:"appliesToWorkloads,omitempty"`
// ConflictsWith specifies the list of traits(CRD name, Definition name, CRD group)
// which could not apply to the same workloads with this trait.
// Traits that omit this field can work with any other traits.
// Example rules:
// "service" # Trait definition name
// "services.k8s.io" # API resource/crd name
// "*.networking.k8s.io" # API group
// "labelSelector:foo=bar" # label selector
// labelSelector format: https://pkg.go.dev/k8s.io/apimachinery/pkg/labels#Parse
// +optional
ConflictsWith []string `json:"conflictsWith,omitempty"`
// Schematic defines the data format and template of the encapsulation of the trait
// +optional
Schematic *common.Schematic `json:"schematic,omitempty"`
// Status defines the custom health policy and status message for trait
// +optional
Status *common.Status `json:"status,omitempty"`
// Extension is used for extension needs by OAM platform builders
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Extension *runtime.RawExtension `json:"extension,omitempty"`
}
// TraitDefinitionStatus is the status of TraitDefinition
type TraitDefinitionStatus struct {
// ConditionedStatus reflects the observed status of a resource
condition.ConditionedStatus `json:",inline"`
// ConfigMapRef refer to a ConfigMap which contains OpenAPI V3 JSON schema of Component parameters.
ConfigMapRef string `json:"configMapRef,omitempty"`
// LatestRevision of the trait definition
// +optional
LatestRevision *common.Revision `json:"latestRevision,omitempty"`
}
// +kubebuilder:object:root=true
// A TraitDefinition registers a kind of Kubernetes custom resource as a valid
// OAM trait kind by referencing its CustomResourceDefinition. The CRD is used
// to validate the schema of the trait when it is embedded in an OAM
// ApplicationConfiguration.
// +kubebuilder:resource:scope=Namespaced,categories={oam},shortName=trait
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="APPLIES-TO",type=string,JSONPath=".spec.appliesToWorkloads"
// +kubebuilder:printcolumn:name="DESCRIPTION",type=string,JSONPath=".metadata.annotations.definition\\.oam\\.dev/description"
type TraitDefinition struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec TraitDefinitionSpec `json:"spec,omitempty"`
Status TraitDefinitionStatus `json:"status,omitempty"`
}
// SetConditions set condition for TraitDefinition
func (td *TraitDefinition) SetConditions(c ...condition.Condition) {
td.Status.SetConditions(c...)
}
// GetCondition gets condition from TraitDefinition
func (td *TraitDefinition) GetCondition(conditionType condition.ConditionType) condition.Condition {
return td.Status.GetCondition(conditionType)
}
// +kubebuilder:object:root=true
// TraitDefinitionList contains a list of TraitDefinition.
type TraitDefinitionList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []TraitDefinition `json:"items"`
}
// A ScopeDefinitionSpec defines the desired state of a ScopeDefinition.
type ScopeDefinitionSpec struct {
// Reference to the CustomResourceDefinition that defines this scope kind.
Reference common.DefinitionReference `json:"definitionRef"`
// WorkloadRefsPath indicates if/where a scope accepts workloadRef objects
WorkloadRefsPath string `json:"workloadRefsPath,omitempty"`
// AllowComponentOverlap specifies whether an OAM component may exist in
// multiple instances of this kind of scope.
AllowComponentOverlap bool `json:"allowComponentOverlap"`
// Extension is used for extension needs by OAM platform builders
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Extension *runtime.RawExtension `json:"extension,omitempty"`
}
// +kubebuilder:object:root=true
// A ScopeDefinition registers a kind of Kubernetes custom resource as a valid
// OAM scope kind by referencing its CustomResourceDefinition. The CRD is used
// to validate the schema of the scope when it is embedded in an OAM
// ApplicationConfiguration.
// +kubebuilder:printcolumn:JSONPath=".spec.definitionRef.name",name=DEFINITION-NAME,type=string
// +kubebuilder:resource:scope=Namespaced,categories={oam},shortName=scope
type ScopeDefinition struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ScopeDefinitionSpec `json:"spec,omitempty"`
}
// +kubebuilder:object:root=true
// ScopeDefinitionList contains a list of ScopeDefinition.
type ScopeDefinitionList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []ScopeDefinition `json:"items"`
}
// A ComponentParameter defines a configurable parameter of a component.
type ComponentParameter struct {
// Name of this parameter. OAM ApplicationConfigurations will specify
// parameter values using this name.
Name string `json:"name"`
// FieldPaths specifies an array of fields within this Component's workload
// that will be overwritten by the value of this parameter. The type of the
// parameter (e.g. int, string) is inferred from the type of these fields;
// All fields must be of the same type. Fields are specified as JSON field
// paths without a leading dot, for example 'spec.replicas'.
FieldPaths []string `json:"fieldPaths"`
// +kubebuilder:default:=false
// Required specifies whether or not a value for this parameter must be
// supplied when authoring an ApplicationConfiguration.
// +optional
Required *bool `json:"required,omitempty"`
// Description of this parameter.
// +optional
Description *string `json:"description,omitempty"`
}
// A ComponentSpec defines the desired state of a Component.
type ComponentSpec struct {
// A Workload that will be created for each ApplicationConfiguration that
// includes this Component. Workload is an instance of a workloadDefinition.
// We either use the GVK info or a special "type" field in the workload to associate
// the content of the workload with its workloadDefinition
// +kubebuilder:validation:EmbeddedResource
// +kubebuilder:pruning:PreserveUnknownFields
Workload runtime.RawExtension `json:"workload"`
// HelmRelease records a Helm release used by a Helm module workload.
// +optional
Helm *common.Helm `json:"helm,omitempty"`
// Parameters exposed by this component. ApplicationConfigurations that
// reference this component may specify values for these parameters, which
// will in turn be injected into the embedded workload.
// +optional
Parameters []ComponentParameter `json:"parameters,omitempty"`
}
// A ComponentStatus represents the observed state of a Component.
type ComponentStatus struct {
// The generation observed by the component controller.
// +optional
ObservedGeneration int64 `json:"observedGeneration"`
condition.ConditionedStatus `json:",inline"`
// LatestRevision of component
// +optional
LatestRevision *common.Revision `json:"latestRevision,omitempty"`
// One Component should only be used by one AppConfig
}
// +kubebuilder:object:root=true
// A Component describes how an OAM workload kind may be instantiated.
// +kubebuilder:resource:categories={oam}
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.workload.kind",name=WORKLOAD-KIND,type=string
// +kubebuilder:printcolumn:name="age",type="date",JSONPath=".metadata.creationTimestamp"
type Component struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ComponentSpec `json:"spec,omitempty"`
Status ComponentStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// ComponentList contains a list of Component.
type ComponentList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Component `json:"items"`
}
// A ComponentParameterValue specifies a value for a named parameter. The
// associated component must publish a parameter with this name.
type ComponentParameterValue struct {
// Name of the component parameter to set.
Name string `json:"name"`
// Value to set.
Value intstr.IntOrString `json:"value"`
}
// A ComponentTrait specifies a trait that should be applied to a component.
type ComponentTrait struct {
// A Trait that will be created for the component
// +kubebuilder:validation:EmbeddedResource
// +kubebuilder:pruning:PreserveUnknownFields
Trait runtime.RawExtension `json:"trait"`
// DataOutputs specify the data output sources from this trait.
// +optional
DataOutputs []DataOutput `json:"dataOutputs,omitempty"`
// DataInputs specify the data input sinks into this trait.
// +optional
DataInputs []DataInput `json:"dataInputs,omitempty"`
}
// A ComponentScope specifies a scope in which a component should exist.
type ComponentScope struct {
// A ScopeReference must refer to an OAM scope resource.
ScopeReference corev1.ObjectReference `json:"scopeRef"`
}
// An ApplicationConfigurationComponent specifies a component of an
// ApplicationConfiguration. Each component is used to instantiate a workload.
type ApplicationConfigurationComponent struct {
// ComponentName specifies a component whose latest revision will be bound
// to the ApplicationConfiguration. When the spec of the referenced component
// changes, the ApplicationConfiguration will automatically migrate all traits
// from the prior revision to the new one. This is mutually exclusive
// with RevisionName.
// +optional
ComponentName string `json:"componentName,omitempty"`
// RevisionName of a specific component revision to which to bind
// ApplicationConfiguration. This is mutually exclusive with componentName.
// +optional
RevisionName string `json:"revisionName,omitempty"`
// DataOutputs specify the data output sources from this component.
DataOutputs []DataOutput `json:"dataOutputs,omitempty"`
// DataInputs specify the data input sinks into this component.
DataInputs []DataInput `json:"dataInputs,omitempty"`
// ParameterValues specify values for the specified component's
// parameters. Any parameter required by the component must be specified.
// +optional
ParameterValues []ComponentParameterValue `json:"parameterValues,omitempty"`
// Traits of the specified component.
// +optional
Traits []ComponentTrait `json:"traits,omitempty"`
// Scopes in which the specified component should exist.
// +optional
Scopes []ComponentScope `json:"scopes,omitempty"`
}
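// A minimal, self-contained sketch of populating an ApplicationConfigurationComponent in Go.
// It assumes this package is importable as "github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2";
// the component name, parameter value, and trait payload below are hypothetical.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/util/intstr"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func main() {
	acc := v1alpha2.ApplicationConfigurationComponent{
		// Bind to the latest revision of the named component; mutually exclusive with RevisionName.
		ComponentName: "example-server",
		// Set a parameter published by the component.
		ParameterValues: []v1alpha2.ComponentParameterValue{
			{Name: "replicas", Value: intstr.FromInt(3)},
		},
		// Attach one trait; the raw payload is an arbitrary illustrative manifest.
		Traits: []v1alpha2.ComponentTrait{
			{Trait: runtime.RawExtension{Raw: []byte(`{"apiVersion":"example.com/v1","kind":"ExampleTrait"}`)}},
		},
	}
	fmt.Println(acc.ComponentName, len(acc.Traits))
}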
// An ApplicationConfigurationSpec defines the desired state of an
// ApplicationConfiguration.
type ApplicationConfigurationSpec struct {
// Components of which this ApplicationConfiguration consists. Each
// component will be used to instantiate a workload.
Components []ApplicationConfigurationComponent `json:"components"`
}
// A TraitStatus represents the state of a trait.
type TraitStatus string
// A WorkloadTrait represents a trait associated with a workload and its status
type WorkloadTrait struct {
// Status is a placeholder for a customized controller to fill
// if it needs a single place to summarize the status of the trait
Status TraitStatus `json:"status,omitempty"`
// Reference to a trait created by an ApplicationConfiguration.
Reference corev1.ObjectReference `json:"traitRef"`
// Message allows the controller to leave some additional information for this trait
Message string `json:"message,omitempty"`
// AppliedGeneration indicates the generation observed by the appConfig controller.
// The same field is also recorded in the annotations of traits.
// A trait may be deleted from the cluster after it is created, so this field
// is useful for tracking the observed generation of traits after they are
// deleted.
AppliedGeneration int64 `json:"appliedGeneration,omitempty"`
// DependencyUnsatisfied indicates whether the trait has unsatisfied dependencies
DependencyUnsatisfied bool `json:"dependencyUnsatisfied,omitempty"`
}
// A ScopeStatus represents the state of a scope.
type ScopeStatus string
// A WorkloadScope represents a scope associated with a workload and its status
type WorkloadScope struct {
// Status is a placeholder for a customized controller to fill
// if it needs a single place to summarize the status of the scope
Status ScopeStatus `json:"status,omitempty"`
// Reference to a scope created by an ApplicationConfiguration.
Reference corev1.ObjectReference `json:"scopeRef"`
}
// A WorkloadStatus represents the status of a workload.
type WorkloadStatus struct {
// Status is a placeholder for a customized controller to fill
// if it needs a single place to summarize the entire status of the workload
Status string `json:"status,omitempty"`
// ComponentName that produced this workload.
ComponentName string `json:"componentName,omitempty"`
// ComponentRevisionName of current component
ComponentRevisionName string `json:"componentRevisionName,omitempty"`
// DependencyUnsatisfied indicates whether the workload has unsatisfied dependencies
DependencyUnsatisfied bool `json:"dependencyUnsatisfied,omitempty"`
// AppliedComponentRevision indicates the applied component revision name of this workload
AppliedComponentRevision string `json:"appliedComponentRevision,omitempty"`
// Reference to a workload created by an ApplicationConfiguration.
Reference corev1.ObjectReference `json:"workloadRef,omitempty"`
// Traits associated with this workload.
Traits []WorkloadTrait `json:"traits,omitempty"`
// Scopes associated with this workload.
Scopes []WorkloadScope `json:"scopes,omitempty"`
}
// HistoryWorkload contains an old component revision that is still running
type HistoryWorkload struct {
// Revision of this workload
Revision string `json:"revision,omitempty"`
// Reference to running workload.
Reference corev1.ObjectReference `json:"workloadRef,omitempty"`
}
// An ApplicationStatus represents the state of the entire application.
type ApplicationStatus string
// An ApplicationConfigurationStatus represents the observed state of an
// ApplicationConfiguration.
type ApplicationConfigurationStatus struct {
condition.ConditionedStatus `json:",inline"`
// Status is a placeholder for a customized controller to fill
// if it needs a single place to summarize the status of the entire application
Status ApplicationStatus `json:"status,omitempty"`
Dependency DependencyStatus `json:"dependency,omitempty"`
// RollingStatus indicates which phase of the rollout we are in
RollingStatus types.RollingStatus `json:"rollingStatus,omitempty"`
// Workloads created by this ApplicationConfiguration.
Workloads []WorkloadStatus `json:"workloads,omitempty"`
// The generation observed by the appConfig controller.
// +optional
ObservedGeneration int64 `json:"observedGeneration"`
// HistoryWorkloads records workloads from historical revisions that are still running.
HistoryWorkloads []HistoryWorkload `json:"historyWorkloads,omitempty"`
}
// DependencyStatus represents the observed state of the dependency of
// an ApplicationConfiguration.
type DependencyStatus struct {
Unsatisfied []UnstaifiedDependency `json:"unsatisfied,omitempty"`
}
// UnstaifiedDependency describes unsatisfied dependency flow between
// one pair of objects.
type UnstaifiedDependency struct {
Reason string `json:"reason"`
From DependencyFromObject `json:"from"`
To DependencyToObject `json:"to"`
}
// DependencyFromObject represents the object that dependency data comes from.
type DependencyFromObject struct {
corev1.ObjectReference `json:",inline"`
FieldPath string `json:"fieldPath,omitempty"`
}
// DependencyToObject represents the object that dependency data goes to.
type DependencyToObject struct {
corev1.ObjectReference `json:",inline"`
FieldPaths []string `json:"fieldPaths,omitempty"`
}
// +kubebuilder:object:root=true
// An ApplicationConfiguration represents an OAM application.
// +kubebuilder:resource:shortName=appconfig,categories={oam}
// +kubebuilder:subresource:status
type ApplicationConfiguration struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ApplicationConfigurationSpec `json:"spec,omitempty"`
Status ApplicationConfigurationStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// ApplicationConfigurationList contains a list of ApplicationConfiguration.
type ApplicationConfigurationList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []ApplicationConfiguration `json:"items"`
}
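// A minimal, self-contained sketch of assembling a complete ApplicationConfiguration in Go
// before handing it to a Kubernetes client; the object name, namespace, and component name
// are hypothetical, and the import path is assumed from this repository's layout.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func main() {
	appConfig := v1alpha2.ApplicationConfiguration{
		TypeMeta:   metav1.TypeMeta{APIVersion: "core.oam.dev/v1alpha2", Kind: "ApplicationConfiguration"},
		ObjectMeta: metav1.ObjectMeta{Name: "example-app", Namespace: "default"},
		Spec: v1alpha2.ApplicationConfigurationSpec{
			Components: []v1alpha2.ApplicationConfigurationComponent{
				{ComponentName: "example-server"},
			},
		},
	}
	// ObjectMeta is embedded, so the name is accessible directly.
	fmt.Println(appConfig.Name)
}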
// DataOutput specifies a data output source from an object.
type DataOutput struct {
// Name is the unique name of a DataOutput in an ApplicationConfiguration.
Name string `json:"name,omitempty"`
// FieldPath refers to the value of an object's field.
FieldPath string `json:"fieldPath,omitempty"`
// Conditions specify the conditions that should be satisfied before emitting a data output.
// Different conditions are AND-ed together.
// If no conditions are specified, the default is to check that the output value is not empty.
// +optional
Conditions []ConditionRequirement `json:"conditions,omitempty"`
// OutputStore specifies the object used to store intermediate data generated by Operations
OutputStore StoreReference `json:"outputStore,omitempty"`
}
// StoreReference specifies the referenced object in DataOutput or DataInput
type StoreReference struct {
corev1.ObjectReference `json:",inline"`
// Operations specify the data processing operations
Operations []DataOperation `json:"operations,omitempty"`
}
// DataOperation defines the specific operation for data
type DataOperation struct {
// Type specifies the type of DataOperation
Type string `json:"type"`
// Operator specifies the operation under this DataOperation type
Operator DataOperator `json:"op"`
// ToFieldPath refers to the value of an object's field
ToFieldPath string `json:"toFieldPath"`
// ToDataPath refers to the value within an object specified by ToDataPath. For example, the ToDataPath "redis" selects "redis info" in '{"redis":"redis info"}'
ToDataPath string `json:"toDataPath,omitempty"`
// +optional
// Value specifies an expected value
// This is mutually exclusive with ValueFrom
Value string `json:"value,omitempty"`
// +optional
// ValueFrom specifies expected value from object such as workload and trait
// This is mutually exclusive with Value
ValueFrom ValueFrom `json:"valueFrom,omitempty"`
Conditions []ConditionRequirement `json:"conditions,omitempty"`
}
// DataOperator defines the type of Operator in DataOperation
type DataOperator string
const (
// AddOperator specifies the add operation for data passing
AddOperator DataOperator = "add"
// DeleteOperator specifies the delete operation for data passing
DeleteOperator DataOperator = "delete"
// ReplaceOperator specifies the replace operation for data passing
ReplaceOperator DataOperator = "replace"
)
// DataInput specifies a data input sink to an object.
// If the input is an array, it will be appended to the target field paths.
type DataInput struct {
// ValueFrom specifies the value source.
ValueFrom DataInputValueFrom `json:"valueFrom,omitempty"`
// ToFieldPaths specifies the field paths of an object to fill passed value.
ToFieldPaths []string `json:"toFieldPaths,omitempty"`
// StrategyMergeKeys specifies the merge key if the toFieldPaths target is an array.
// StrategyMergeKeys is optional; by default, if the toFieldPaths target is an array, we append.
// If StrategyMergeKeys is specified, we check the keys in the target array:
// if a key exists, update the entry; if no key exists, append.
StrategyMergeKeys []string `json:"strategyMergeKeys,omitempty"`
// When the Conditions are satisfied, ToFieldPaths will be filled with the passed value
Conditions []ConditionRequirement `json:"conditions,omitempty"`
// InputStore specifies the object used to read intermediate data generated by DataOutput
InputStore StoreReference `json:"inputStore,omitempty"`
}
// DataInputValueFrom specifies the value source for a data input.
type DataInputValueFrom struct {
// DataOutputName matches a name of a DataOutput in the same AppConfig.
DataOutputName string `json:"dataOutputName"`
}
// ConditionRequirement specifies the requirement to match a value.
type ConditionRequirement struct {
Operator ConditionOperator `json:"op"`
// +optional
// Value specifies an expected value
// This is mutually exclusive with ValueFrom
Value string `json:"value,omitempty"`
// +optional
// ValueFrom specifies expected value from AppConfig
// This is mutually exclusive with Value
ValueFrom ValueFrom `json:"valueFrom,omitempty"`
// +optional
// FieldPath specifies the path of the value obtained from the workload/trait object
FieldPath string `json:"fieldPath,omitempty"`
}
// ValueFrom gets value from AppConfig object by specifying a path
type ValueFrom struct {
FieldPath string `json:"fieldPath"`
}
// ConditionOperator specifies the operator to match a value.
type ConditionOperator string
const (
// ConditionEqual indicates equal to given value
ConditionEqual ConditionOperator = "eq"
// ConditionNotEqual indicates not equal to given value
ConditionNotEqual ConditionOperator = "notEq"
// ConditionNotEmpty indicates given value not empty
ConditionNotEmpty ConditionOperator = "notEmpty"
)
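// A minimal, self-contained sketch of wiring a DataOutput to a DataInput so that a value
// produced by one object is injected into another once it is non-empty; the output name and
// field paths are hypothetical, and the import path is assumed from this repository's layout.
package main

import (
	"fmt"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func main() {
	out := v1alpha2.DataOutput{
		Name:      "service-endpoint",
		FieldPath: "status.endpoint",
		// Only emit the output once the source field is non-empty.
		Conditions: []v1alpha2.ConditionRequirement{
			{Operator: v1alpha2.ConditionNotEmpty, FieldPath: "status.endpoint"},
		},
	}
	in := v1alpha2.DataInput{
		// Consume the output declared above by name.
		ValueFrom:    v1alpha2.DataInputValueFrom{DataOutputName: out.Name},
		ToFieldPaths: []string{"spec.containers[0].env[0].value"},
	}
	fmt.Println(out.Name, in.ToFieldPaths)
}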

View File

@@ -0,0 +1,355 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha2
package v1alpha2
import (
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/intstr"
)
// An OperatingSystem required by a containerised workload.
type OperatingSystem string
// Supported operating system types.
const (
OperatingSystemLinux OperatingSystem = "linux"
OperatingSystemWindows OperatingSystem = "windows"
)
// A CPUArchitecture required by a containerised workload.
type CPUArchitecture string
// Supported architectures
const (
CPUArchitectureI386 CPUArchitecture = "i386"
CPUArchitectureAMD64 CPUArchitecture = "amd64"
CPUArchitectureARM CPUArchitecture = "arm"
CPUArchitectureARM64 CPUArchitecture = "arm64"
)
// A SecretKeySelector is a reference to a secret key in an arbitrary namespace.
type SecretKeySelector struct {
// The name of the secret.
Name string `json:"name"`
// The key to select.
Key string `json:"key"`
}
// TODO(negz): The OAM spec calls for float64 quantities in some cases, but this
// is incompatible with controller-gen and Kubernetes API conventions. We should
// reassess whether resource.Quantity is appropriate after resolving
// https://github.com/oam-dev/spec/issues/313
// CPUResources required by a container.
type CPUResources struct {
// Required CPU count. 1.0 represents one CPU core.
Required resource.Quantity `json:"required"`
}
// MemoryResources required by a container.
type MemoryResources struct {
// Required memory.
Required resource.Quantity `json:"required"`
}
// GPUResources required by a container.
type GPUResources struct {
// Required GPU count.
Required resource.Quantity `json:"required"`
}
// DiskResource required by a container.
type DiskResource struct {
// Required disk space.
Required resource.Quantity `json:"required"`
// Ephemeral specifies whether an external disk needs to be mounted.
// +optional
Ephemeral *bool `json:"ephemeral,omitempty"`
}
// A VolumeAccessMode determines how a volume may be accessed.
type VolumeAccessMode string
// Volume access modes.
const (
VolumeAccessModeRO VolumeAccessMode = "RO"
VolumeAccessModeRW VolumeAccessMode = "RW"
)
// A VolumeSharingPolicy determines how a volume may be shared.
type VolumeSharingPolicy string
// Volume sharing policies.
const (
VolumeSharingPolicyExclusive VolumeSharingPolicy = "Exclusive"
VolumeSharingPolicyShared VolumeSharingPolicy = "Shared"
)
// VolumeResource required by a container.
type VolumeResource struct {
// Name of this volume. Must be unique within its container.
Name string `json:"name"`
// MountPath at which this volume will be mounted within its container.
MountPath string `json:"mountPath"`
// TODO(negz): Use +kubebuilder:default marker to default AccessMode to RW
// and SharingPolicy to Exclusive once we're generating v1 CRDs.
// AccessMode of this volume; RO (read only) or RW (read and write).
// +optional
// +kubebuilder:validation:Enum=RO;RW
AccessMode *VolumeAccessMode `json:"accessMode,omitempty"`
// SharingPolicy of this volume; Exclusive or Shared.
// +optional
// +kubebuilder:validation:Enum=Exclusive;Shared
SharingPolicy *VolumeSharingPolicy `json:"sharingPolicy,omitempty"`
// Disk requirements of this volume.
// +optional
Disk *DiskResource `json:"disk,omitempty"`
}
// ExtendedResource required by a container.
type ExtendedResource struct {
// Name of the external resource. Resource names are specified in
// kind.group/version format, e.g. motionsensor.ext.example.com/v1.
Name string `json:"name"`
// Required extended resource(s), e.g. 8 or "very-cool-widget"
Required intstr.IntOrString `json:"required"`
}
// ContainerResources specifies a container's required compute resources.
type ContainerResources struct {
// CPU required by this container.
CPU CPUResources `json:"cpu"`
// Memory required by this container.
Memory MemoryResources `json:"memory"`
// GPU required by this container.
// +optional
GPU *GPUResources `json:"gpu,omitempty"`
// Volumes required by this container.
// +optional
Volumes []VolumeResource `json:"volumes,omitempty"`
// Extended resources required by this container.
// +optional
Extended []ExtendedResource `json:"extended,omitempty"`
}
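// A minimal, self-contained sketch of filling ContainerResources with CPU, memory, and an
// optional volume; the quantities and volume name are hypothetical, and the import path is
// assumed from this repository's layout.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func main() {
	rw := v1alpha2.VolumeAccessModeRW
	res := v1alpha2.ContainerResources{
		CPU:    v1alpha2.CPUResources{Required: resource.MustParse("500m")},
		Memory: v1alpha2.MemoryResources{Required: resource.MustParse("256Mi")},
		Volumes: []v1alpha2.VolumeResource{
			{
				Name:       "data",
				MountPath:  "/var/lib/data",
				AccessMode: &rw,
				Disk:       &v1alpha2.DiskResource{Required: resource.MustParse("1Gi")},
			},
		},
	}
	fmt.Println(res.CPU.Required.String(), res.Memory.Required.String())
}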
// A ContainerEnvVar specifies an environment variable that should be set within
// a container.
type ContainerEnvVar struct {
// Name of the environment variable. Must be composed of valid Unicode
// letter and number characters, as well as _ and -.
// +kubebuilder:validation:Pattern=^[-_a-zA-Z0-9]+$
Name string `json:"name"`
// Value of the environment variable.
// +optional
Value *string `json:"value,omitempty"`
// FromSecret is a secret key reference which can be used to assign a value
// to the environment variable.
// +optional
FromSecret *SecretKeySelector `json:"fromSecret,omitempty"`
}
// A ContainerConfigFile specifies a configuration file that should be written
// within a container.
type ContainerConfigFile struct {
// Path within the container at which the configuration file should be
// written.
Path string `json:"path"`
// Value that should be written to the configuration file.
// +optional
Value *string `json:"value,omitempty"`
// FromSecret is a secret key reference which can be used to assign a value
// to be written to the configuration file at the given path in the
// container.
// +optional
FromSecret *SecretKeySelector `json:"fromSecret,omitempty"`
}
// A TransportProtocol represents a transport layer protocol.
type TransportProtocol string
// Transport protocols.
const (
TransportProtocolTCP TransportProtocol = "TCP"
TransportProtocolUDP TransportProtocol = "UDP"
)
// A ContainerPort specifies a port that is exposed by a container.
type ContainerPort struct {
// Name of this port. Must be unique within its container. Must be lowercase
// alphabetical characters.
// +kubebuilder:validation:Pattern=^[a-z]+$
Name string `json:"name"`
// Port number. Must be unique within its container.
Port int32 `json:"containerPort"`
// TODO(negz): Use +kubebuilder:default marker to default Protocol to TCP
// once we're generating v1 CRDs.
// Protocol used by the server listening on this port.
// +kubebuilder:validation:Enum=TCP;UDP
// +optional
Protocol *TransportProtocol `json:"protocol,omitempty"`
}
// An ExecProbe probes a container's health by executing a command.
type ExecProbe struct {
// Command to be run by this probe.
Command []string `json:"command"`
}
// A HTTPHeader to be passed when probing a container.
type HTTPHeader struct {
// Name of this HTTP header. Must be unique per probe.
Name string `json:"name"`
// Value of this HTTP header.
Value string `json:"value"`
}
// A HTTPGetProbe probes a container's health by sending an HTTP GET request.
type HTTPGetProbe struct {
// Path to probe, e.g. '/healthz'.
Path string `json:"path"`
// Port to probe.
Port int32 `json:"port"`
// HTTPHeaders to send with the GET request.
// +optional
HTTPHeaders []HTTPHeader `json:"httpHeaders,omitempty"`
}
// A TCPSocketProbe probes a container's health by connecting to a TCP socket.
type TCPSocketProbe struct {
// Port this probe should connect to.
Port int32 `json:"port"`
}
// A ContainerHealthProbe specifies how to probe the health of a container.
// Exactly one of Exec, HTTPGet, or TCPSocket must be specified.
type ContainerHealthProbe struct {
// Exec probes a container's health by executing a command.
// +optional
Exec *ExecProbe `json:"exec,omitempty"`
// HTTPGet probes a container's health by sending an HTTP GET request.
// +optional
HTTPGet *HTTPGetProbe `json:"httpGet,omitempty"`
// TCPSocketProbe probes a container's health by connecting to a TCP socket.
// +optional
TCPSocket *TCPSocketProbe `json:"tcpSocket,omitempty"`
// InitialDelaySeconds after a container starts before the first probe.
// +optional
InitialDelaySeconds *int32 `json:"initialDelaySeconds,omitempty"`
// TODO(negz): Use +kubebuilder:default marker to default PeriodSeconds,
// TimeoutSeconds, SuccessThreshold, and FailureThreshold to 10, 1, 1, and 3
// respectively once we're generating v1 CRDs.
// PeriodSeconds between probes.
// +optional
PeriodSeconds *int32 `json:"periodSeconds,omitempty"`
// TimeoutSeconds after which the probe times out.
// +optional
TimeoutSeconds *int32 `json:"timeoutSeconds,omitempty"`
// SuccessThreshold specifies how many consecutive probes must succeed in
// order for the container to be considered healthy.
// +optional
SuccessThreshold *int32 `json:"successThreshold,omitempty"`
// FailureThreshold specifies how many consecutive probes must fail in order
// for the container to be considered unhealthy.
// +optional
FailureThreshold *int32 `json:"failureThreshold,omitempty"`
}
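// A minimal, self-contained sketch of an HTTP readiness-style probe expressed with
// ContainerHealthProbe; exactly one of Exec, HTTPGet, or TCPSocket is set, as required above.
// The path, port, and thresholds are hypothetical, and the import path is assumed from this
// repository's layout.
package main

import (
	"fmt"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	probe := v1alpha2.ContainerHealthProbe{
		HTTPGet: &v1alpha2.HTTPGetProbe{
			Path:        "/healthz",
			Port:        8080,
			HTTPHeaders: []v1alpha2.HTTPHeader{{Name: "X-Probe", Value: "true"}},
		},
		InitialDelaySeconds: int32Ptr(5),
		PeriodSeconds:       int32Ptr(10),
		FailureThreshold:    int32Ptr(3),
	}
	fmt.Println(probe.HTTPGet.Path)
}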
// A Container represents an Open Containers Initiative (OCI) container.
type Container struct {
// Name of this container. Must be unique within its workload.
Name string `json:"name"`
// Image this container should run. Must be a path-like or URI-like
// representation of an OCI image. May be prefixed with a registry address
// and should be suffixed with a tag.
Image string `json:"image"`
// Resources required by this container
// +optional
Resources *ContainerResources `json:"resources,omitempty"`
// Command to be run by this container.
// +optional
Command []string `json:"command,omitempty"`
// Arguments to be passed to the command run by this container.
// +optional
Arguments []string `json:"args,omitempty"`
// Environment variables that should be set within this container.
// +optional
Environment []ContainerEnvVar `json:"env,omitempty"`
// ConfigFiles that should be written within this container.
// +optional
ConfigFiles []ContainerConfigFile `json:"config,omitempty"`
// Ports exposed by this container.
// +optional
Ports []ContainerPort `json:"ports,omitempty"`
// A LivenessProbe assesses whether this container is alive. Containers that
// fail liveness probes will be restarted.
// +optional
LivenessProbe *ContainerHealthProbe `json:"livenessProbe,omitempty"`
// A ReadinessProbe assesses whether this container is ready to serve
// requests. Containers that fail readiness probes will be withdrawn from
// service.
// +optional
ReadinessProbe *ContainerHealthProbe `json:"readinessProbe,omitempty"`
// TODO(negz): Ideally the key within this secret would be configurable, but
// the current OAM spec allows only a secret name.
// ImagePullSecret specifies the name of a Secret from which the
// credentials required to pull this container's image can be loaded.
// +optional
ImagePullSecret *string `json:"imagePullSecret,omitempty"`
}
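// A minimal, self-contained sketch of a Container definition combining the pieces above;
// the image, environment variables, secret name, and port are hypothetical, and the import
// path is assumed from this repository's layout.
package main

import (
	"fmt"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func strPtr(s string) *string { return &s }

func main() {
	tcp := v1alpha2.TransportProtocolTCP
	c := v1alpha2.Container{
		Name:      "web",
		Image:     "registry.example.com/web:v1.2.3",
		Command:   []string{"/bin/server"},
		Arguments: []string{"--listen", ":8080"},
		Environment: []v1alpha2.ContainerEnvVar{
			{Name: "LOG_LEVEL", Value: strPtr("info")},
			{Name: "DB_PASSWORD", FromSecret: &v1alpha2.SecretKeySelector{Name: "db-creds", Key: "password"}},
		},
		Ports: []v1alpha2.ContainerPort{{Name: "http", Port: 8080, Protocol: &tcp}},
	}
	fmt.Println(c.Name, c.Image)
}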

View File

@@ -0,0 +1,22 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha2 contains resources relating to the Open Application Model.
// See https://github.com/oam-dev/spec for more details.
// +kubebuilder:object:generate=true
// +groupName=core.oam.dev
// +versionName=v1alpha2
package v1alpha2

View File

@@ -0,0 +1,65 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// This code is manually implemented, but should be generated in the future.
package v1alpha2
import (
corev1 "k8s.io/api/core/v1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
)
// GetCondition of this ApplicationConfiguration.
func (ac *ApplicationConfiguration) GetCondition(ct condition.ConditionType) condition.Condition {
return ac.Status.GetCondition(ct)
}
// SetConditions of this ApplicationConfiguration.
func (ac *ApplicationConfiguration) SetConditions(c ...condition.Condition) {
ac.Status.SetConditions(c...)
}
// GetCondition of this Component.
func (cm *Component) GetCondition(ct condition.ConditionType) condition.Condition {
return cm.Status.GetCondition(ct)
}
// SetConditions of this Component.
func (cm *Component) SetConditions(c ...condition.Condition) {
cm.Status.SetConditions(c...)
}
// GetCondition of this HealthScope.
func (hs *HealthScope) GetCondition(ct condition.ConditionType) condition.Condition {
return hs.Status.GetCondition(ct)
}
// SetConditions of this HealthScope.
func (hs *HealthScope) SetConditions(c ...condition.Condition) {
hs.Status.SetConditions(c...)
}
// GetWorkloadReferences to get all workload references for scope.
func (hs *HealthScope) GetWorkloadReferences() []corev1.ObjectReference {
return hs.Spec.WorkloadReferences
}
// AddWorkloadReference to add a workload reference to this scope.
func (hs *HealthScope) AddWorkloadReference(r corev1.ObjectReference) {
hs.Spec.WorkloadReferences = append(hs.Spec.WorkloadReferences, r)
}
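// A minimal, self-contained sketch of reading a condition through the helpers above; the
// condition type string is hypothetical, and the import paths are assumed from this
// repository's layout.
package main

import (
	"fmt"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func main() {
	ac := &v1alpha2.ApplicationConfiguration{}
	// Nothing has set conditions yet, so this returns a default condition for the requested type.
	cond := ac.GetCondition(condition.ConditionType("Synced"))
	fmt.Printf("%+v\n", cond)
}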

View File

@@ -0,0 +1,124 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
"reflect"
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)
// Package type metadata.
const (
Group = common.Group
Version = "v1alpha2"
)
var (
// SchemeGroupVersion is group version used to register these objects
SchemeGroupVersion = schema.GroupVersion{Group: Group, Version: Version}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: SchemeGroupVersion}
)
// ComponentDefinition type metadata.
var (
ComponentDefinitionKind = reflect.TypeOf(ComponentDefinition{}).Name()
ComponentDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: ComponentDefinitionKind}.String()
ComponentDefinitionKindAPIVersion = ComponentDefinitionKind + "." + SchemeGroupVersion.String()
ComponentDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(ComponentDefinitionKind)
)
// WorkloadDefinition type metadata.
var (
WorkloadDefinitionKind = reflect.TypeOf(WorkloadDefinition{}).Name()
WorkloadDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: WorkloadDefinitionKind}.String()
WorkloadDefinitionKindAPIVersion = WorkloadDefinitionKind + "." + SchemeGroupVersion.String()
WorkloadDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(WorkloadDefinitionKind)
)
// TraitDefinition type metadata.
var (
TraitDefinitionKind = reflect.TypeOf(TraitDefinition{}).Name()
TraitDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: TraitDefinitionKind}.String()
TraitDefinitionKindAPIVersion = TraitDefinitionKind + "." + SchemeGroupVersion.String()
TraitDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(TraitDefinitionKind)
)
// ScopeDefinition type metadata.
var (
ScopeDefinitionKind = reflect.TypeOf(ScopeDefinition{}).Name()
ScopeDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: ScopeDefinitionKind}.String()
ScopeDefinitionKindAPIVersion = ScopeDefinitionKind + "." + SchemeGroupVersion.String()
ScopeDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(ScopeDefinitionKind)
)
// Component type metadata.
var (
ComponentKind = reflect.TypeOf(Component{}).Name()
ComponentGroupKind = schema.GroupKind{Group: Group, Kind: ComponentKind}.String()
ComponentKindAPIVersion = ComponentKind + "." + SchemeGroupVersion.String()
ComponentGroupVersionKind = SchemeGroupVersion.WithKind(ComponentKind)
)
// ApplicationConfiguration type metadata.
var (
ApplicationConfigurationKind = reflect.TypeOf(ApplicationConfiguration{}).Name()
ApplicationConfigurationGroupKind = schema.GroupKind{Group: Group, Kind: ApplicationConfigurationKind}.String()
ApplicationConfigurationKindAPIVersion = ApplicationConfigurationKind + "." + SchemeGroupVersion.String()
ApplicationConfigurationGroupVersionKind = SchemeGroupVersion.WithKind(ApplicationConfigurationKind)
)
// HealthScope type metadata.
var (
HealthScopeKind = reflect.TypeOf(HealthScope{}).Name()
HealthScopeGroupKind = schema.GroupKind{Group: Group, Kind: HealthScopeKind}.String()
HealthScopeKindAPIVersion = HealthScopeKind + "." + SchemeGroupVersion.String()
HealthScopeGroupVersionKind = SchemeGroupVersion.WithKind(HealthScopeKind)
)
// Application type metadata.
var (
ApplicationKind = reflect.TypeOf(Application{}).Name()
ApplicationGroupKind = schema.GroupKind{Group: Group, Kind: ApplicationKind}.String()
ApplicationKindAPIVersion = ApplicationKind + "." + SchemeGroupVersion.String()
ApplicationKindVersionKind = SchemeGroupVersion.WithKind(ApplicationKind)
)
// ApplicationRevision type metadata
var (
ApplicationRevisionKind = reflect.TypeOf(ApplicationRevision{}).Name()
ApplicationRevisionGroupKind = schema.GroupKind{Group: Group, Kind: ApplicationRevisionKind}.String()
ApplicationRevisionKindAPIVersion = ApplicationRevisionKind + "." + SchemeGroupVersion.String()
ApplicationRevisionGroupVersionKind = SchemeGroupVersion.WithKind(ApplicationRevisionKind)
)
func init() {
SchemeBuilder.Register(&ComponentDefinition{}, &ComponentDefinitionList{})
SchemeBuilder.Register(&WorkloadDefinition{}, &WorkloadDefinitionList{})
SchemeBuilder.Register(&TraitDefinition{}, &TraitDefinitionList{})
SchemeBuilder.Register(&ScopeDefinition{}, &ScopeDefinitionList{})
SchemeBuilder.Register(&Component{}, &ComponentList{})
SchemeBuilder.Register(&ApplicationConfiguration{}, &ApplicationConfigurationList{})
SchemeBuilder.Register(&HealthScope{}, &HealthScopeList{})
SchemeBuilder.Register(&Application{}, &ApplicationList{})
SchemeBuilder.Register(&ApplicationRevision{}, &ApplicationRevisionList{})
}
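// A minimal, self-contained sketch of the usual controller-runtime pattern for consuming the
// SchemeBuilder registered above: add the v1alpha2 types to a runtime.Scheme and build a
// client with it. The kubeconfig discovery and error handling are illustrative only.
package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	oamv1alpha2 "github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
)

func main() {
	scheme := runtime.NewScheme()
	if err := oamv1alpha2.SchemeBuilder.AddToScheme(scheme); err != nil {
		panic(err)
	}
	cfg, err := ctrl.GetConfig()
	if err != nil {
		panic(err)
	}
	c, err := client.New(cfg, client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}
	_ = c // the client can now decode Component, ApplicationConfiguration, HealthScope, etc.
}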

File diff suppressed because it is too large Load Diff

View File

@@ -23,18 +23,29 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
wfTypesv1alpha1 "github.com/kubevela/pkg/apis/oam/v1alpha1"
workflowv1alpha1 "github.com/kubevela/workflow/api/v1alpha1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
)
const (
// TypeHealthy means the application is believed to be healthy as determined by a health scope.
TypeHealthy condition.ConditionType = "Healthy"
)
// Reasons an application is or is not healthy
const (
ReasonHealthy condition.ConditionReason = "AllComponentsHealthy"
ReasonUnhealthy condition.ConditionReason = "UnhealthyOrUnknownComponents"
ReasonHealthCheckErr condition.ConditionReason = "HealthCheckeError"
)
// AppPolicy defines a global policy for all components in the app.
type AppPolicy struct {
// Name is the unique name of the policy.
// +optional
Name string `json:"name,omitempty"`
// Type is the type of the policy
Name string `json:"name"`
Type string `json:"type"`
// +kubebuilder:pruning:PreserveUnknownFields
Properties *runtime.RawExtension `json:"properties,omitempty"`
@@ -42,9 +53,9 @@ type AppPolicy struct {
// Workflow defines workflow steps and other attributes
type Workflow struct {
Ref string `json:"ref,omitempty"`
Mode *wfTypesv1alpha1.WorkflowExecuteMode `json:"mode,omitempty"`
Steps []wfTypesv1alpha1.WorkflowStep `json:"steps,omitempty"`
Ref string `json:"ref,omitempty"`
Mode *workflowv1alpha1.WorkflowExecuteMode `json:"mode,omitempty"`
Steps []workflowv1alpha1.WorkflowStep `json:"steps,omitempty"`
}
// ApplicationSpec is the spec of Application
@@ -62,6 +73,8 @@ type ApplicationSpec struct {
// - will have a context in annotation.
// - should mark "finish" phase in status.conditions.
Workflow *Workflow `json:"workflow,omitempty"`
// TODO(wonderflow): we should have application level scopes supported here
}
// +kubebuilder:object:root=true

View File

@@ -19,8 +19,8 @@ package v1beta1
import (
"encoding/json"
wfTypesv1alpha1 "github.com/kubevela/pkg/apis/oam/v1alpha1"
"github.com/kubevela/pkg/util/compression"
workflowv1alpha1 "github.com/kubevela/workflow/api/v1alpha1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
@@ -45,25 +45,31 @@ type ApplicationRevisionCompressibleFields struct {
Application Application `json:"application"`
// ComponentDefinitions records the snapshot of the componentDefinitions related with the created/modified Application
ComponentDefinitions map[string]*ComponentDefinition `json:"componentDefinitions,omitempty"`
ComponentDefinitions map[string]ComponentDefinition `json:"componentDefinitions,omitempty"`
// WorkloadDefinitions records the snapshot of the workloadDefinitions related with the created/modified Application
WorkloadDefinitions map[string]WorkloadDefinition `json:"workloadDefinitions,omitempty"`
// TraitDefinitions records the snapshot of the traitDefinitions related with the created/modified Application
TraitDefinitions map[string]*TraitDefinition `json:"traitDefinitions,omitempty"`
TraitDefinitions map[string]TraitDefinition `json:"traitDefinitions,omitempty"`
// ScopeDefinitions records the snapshot of the scopeDefinitions related with the created/modified Application
ScopeDefinitions map[string]ScopeDefinition `json:"scopeDefinitions,omitempty"`
// PolicyDefinitions records the snapshot of the PolicyDefinitions related with the created/modified Application
PolicyDefinitions map[string]PolicyDefinition `json:"policyDefinitions,omitempty"`
// WorkflowStepDefinitions records the snapshot of the WorkflowStepDefinitions related with the created/modified Application
WorkflowStepDefinitions map[string]*WorkflowStepDefinition `json:"workflowStepDefinitions,omitempty"`
WorkflowStepDefinitions map[string]WorkflowStepDefinition `json:"workflowStepDefinitions,omitempty"`
// ScopeGVK records the apiVersion to GVK mapping
ScopeGVK map[string]metav1.GroupVersionKind `json:"scopeGVK,omitempty"`
// Policies records the external policies
Policies map[string]v1alpha1.Policy `json:"policies,omitempty"`
// Workflow records the external workflow
Workflow *wfTypesv1alpha1.Workflow `json:"workflow,omitempty"`
Workflow *workflowv1alpha1.Workflow `json:"workflow,omitempty"`
// ReferredObjects records the referred objects used in the ref-object typed components
// +kubebuilder:pruning:PreserveUnknownFields

View File

@@ -32,16 +32,18 @@ func TestApplicationRevisionCompression(t *testing.T) {
// Fill data
spec := &ApplicationRevisionSpec{}
spec.Application = Application{Spec: ApplicationSpec{Components: []common.ApplicationComponent{{Name: "test-name"}}}}
spec.ComponentDefinitions = make(map[string]*ComponentDefinition)
spec.ComponentDefinitions["def"] = &ComponentDefinition{Spec: ComponentDefinitionSpec{PodSpecPath: "path"}}
spec.ComponentDefinitions = make(map[string]ComponentDefinition)
spec.ComponentDefinitions["def"] = ComponentDefinition{Spec: ComponentDefinitionSpec{PodSpecPath: "path"}}
spec.WorkloadDefinitions = make(map[string]WorkloadDefinition)
spec.WorkloadDefinitions["def"] = WorkloadDefinition{Spec: WorkloadDefinitionSpec{Reference: common.DefinitionReference{Name: "testdef"}}}
spec.TraitDefinitions = make(map[string]*TraitDefinition)
spec.TraitDefinitions["def"] = &TraitDefinition{Spec: TraitDefinitionSpec{ControlPlaneOnly: true}}
spec.TraitDefinitions = make(map[string]TraitDefinition)
spec.TraitDefinitions["def"] = TraitDefinition{Spec: TraitDefinitionSpec{ControlPlaneOnly: true}}
spec.ScopeDefinitions = make(map[string]ScopeDefinition)
spec.ScopeDefinitions["def"] = ScopeDefinition{Spec: ScopeDefinitionSpec{AllowComponentOverlap: true}}
spec.PolicyDefinitions = make(map[string]PolicyDefinition)
spec.PolicyDefinitions["def"] = PolicyDefinition{Spec: PolicyDefinitionSpec{ManageHealthCheck: true}}
spec.WorkflowStepDefinitions = make(map[string]*WorkflowStepDefinition)
spec.WorkflowStepDefinitions["def"] = &WorkflowStepDefinition{Spec: WorkflowStepDefinitionSpec{Reference: common.DefinitionReference{Name: "testname"}}}
spec.WorkflowStepDefinitions = make(map[string]WorkflowStepDefinition)
spec.WorkflowStepDefinitions["def"] = WorkflowStepDefinition{Spec: WorkflowStepDefinitionSpec{Reference: common.DefinitionReference{Name: "testname"}}}
spec.ReferredObjects = []common.ReferredObject{{RawExtension: runtime.RawExtension{Raw: []byte("123")}}}
testAppRev := &ApplicationRevision{Spec: *spec}

View File

@@ -27,9 +27,6 @@ import (
// ComponentDefinitionSpec defines the desired state of ComponentDefinition
type ComponentDefinitionSpec struct {
// +optional
Version string `json:"version,omitempty"`
// Workload is a workload type descriptor
Workload common.WorkloadTypeDescriptor `json:"workload"`

View File

@@ -164,9 +164,6 @@ type TraitDefinitionSpec struct {
// pre-process and post-process respectively.
// +optional
Stage StageType `json:"stage,omitempty"`
// +optional
Version string `json:"version,omitempty"`
}
// StageType describes how the manifests should be dispatched.
@@ -235,3 +232,49 @@ type TraitDefinitionList struct {
metav1.ListMeta `json:"metadata,omitempty"`
Items []TraitDefinition `json:"items"`
}
// A ScopeDefinitionSpec defines the desired state of a ScopeDefinition.
type ScopeDefinitionSpec struct {
// Reference to the CustomResourceDefinition that defines this scope kind.
Reference common.DefinitionReference `json:"definitionRef"`
// WorkloadRefsPath indicates if/where a scope accepts workloadRef objects
WorkloadRefsPath string `json:"workloadRefsPath,omitempty"`
// AllowComponentOverlap specifies whether an OAM component may exist in
// multiple instances of this kind of scope.
AllowComponentOverlap bool `json:"allowComponentOverlap"`
// Extension is used for extension needs by OAM platform builders
// +optional
// +kubebuilder:pruning:PreserveUnknownFields
Extension *runtime.RawExtension `json:"extension,omitempty"`
}
// +kubebuilder:object:root=true
// A ScopeDefinition registers a kind of Kubernetes custom resource as a valid
// OAM scope kind by referencing its CustomResourceDefinition. The CRD is used
// to validate the schema of the scope when it is embedded in an OAM
// ApplicationConfiguration.
// +kubebuilder:printcolumn:JSONPath=".spec.definitionRef.name",name=DEFINITION-NAME,type=string
// +kubebuilder:resource:scope=Namespaced,categories={oam},shortName=scope
// +kubebuilder:storageversion
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
type ScopeDefinition struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ScopeDefinitionSpec `json:"spec,omitempty"`
}
// +kubebuilder:object:root=true
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ScopeDefinitionList contains a list of ScopeDefinition.
type ScopeDefinitionList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []ScopeDefinition `json:"items"`
}

View File

@@ -37,9 +37,6 @@ type PolicyDefinitionSpec struct {
// ManageHealthCheck means the policy will handle health checking and skip application controller
// built-in health checking.
ManageHealthCheck bool `json:"manageHealthCheck,omitempty"`
//+optional
Version string `json:"version,omitempty"`
}
// PolicyDefinitionStatus is the status of PolicyDefinition

View File

@@ -49,7 +49,6 @@ var (
ComponentDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: ComponentDefinitionKind}.String()
ComponentDefinitionKindAPIVersion = ComponentDefinitionKind + "." + SchemeGroupVersion.String()
ComponentDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(ComponentDefinitionKind)
ComponentDefinitionGVR = SchemeGroupVersion.WithResource("componentdefinitions")
)
// WorkloadDefinition type metadata.
@@ -66,7 +65,6 @@ var (
TraitDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: TraitDefinitionKind}.String()
TraitDefinitionKindAPIVersion = TraitDefinitionKind + "." + SchemeGroupVersion.String()
TraitDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(TraitDefinitionKind)
TraitDefinitionGVR = SchemeGroupVersion.WithResource("traitdefinitions")
)
// PolicyDefinition type metadata.
@@ -75,7 +73,6 @@ var (
PolicyDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: PolicyDefinitionKind}.String()
PolicyDefinitionKindAPIVersion = PolicyDefinitionKind + "." + SchemeGroupVersion.String()
PolicyDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(PolicyDefinitionKind)
PolicyDefinitionGVR = SchemeGroupVersion.WithResource("policydefinitions")
)
// WorkflowStepDefinition type metadata.
@@ -84,7 +81,6 @@ var (
WorkflowStepDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: WorkflowStepDefinitionKind}.String()
WorkflowStepDefinitionKindAPIVersion = WorkflowStepDefinitionKind + "." + SchemeGroupVersion.String()
WorkflowStepDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(WorkflowStepDefinitionKind)
WorkflowStepDefinitionGVR = SchemeGroupVersion.WithResource("workflowstepdefinitions")
)
// DefinitionRevision type metadata.
@@ -111,6 +107,14 @@ var (
ApplicationRevisionGroupVersionKind = SchemeGroupVersion.WithKind(ApplicationRevisionKind)
)
// ScopeDefinition type metadata.
var (
ScopeDefinitionKind = reflect.TypeOf(ScopeDefinition{}).Name()
ScopeDefinitionGroupKind = schema.GroupKind{Group: Group, Kind: ScopeDefinitionKind}.String()
ScopeDefinitionKindAPIVersion = ScopeDefinitionKind + "." + SchemeGroupVersion.String()
ScopeDefinitionGroupVersionKind = SchemeGroupVersion.WithKind(ScopeDefinitionKind)
)
// ResourceTracker type metadata.
var (
ResourceTrackerKind = reflect.TypeOf(ResourceTracker{}).Name()
@@ -119,20 +123,6 @@ var (
ResourceTrackerKindVersionKind = SchemeGroupVersion.WithKind(ResourceTrackerKind)
)
// DefinitionTypeInfo contains the mapping information for a definition type
type DefinitionTypeInfo struct {
GVR schema.GroupVersionResource
Kind string
}
// DefinitionTypeMap maps definition types to their corresponding GVR and Kind
var DefinitionTypeMap = map[reflect.Type]DefinitionTypeInfo{
reflect.TypeOf(ComponentDefinition{}): {GVR: ComponentDefinitionGVR, Kind: ComponentDefinitionKind},
reflect.TypeOf(TraitDefinition{}): {GVR: TraitDefinitionGVR, Kind: TraitDefinitionKind},
reflect.TypeOf(PolicyDefinition{}): {GVR: PolicyDefinitionGVR, Kind: PolicyDefinitionKind},
reflect.TypeOf(WorkflowStepDefinition{}): {GVR: WorkflowStepDefinitionGVR, Kind: WorkflowStepDefinitionKind},
}
func init() {
SchemeBuilder.Register(&ComponentDefinition{}, &ComponentDefinitionList{})
SchemeBuilder.Register(&WorkloadDefinition{}, &WorkloadDefinitionList{})
@@ -140,6 +130,7 @@ func init() {
SchemeBuilder.Register(&PolicyDefinition{}, &PolicyDefinitionList{})
SchemeBuilder.Register(&WorkflowStepDefinition{}, &WorkflowStepDefinitionList{})
SchemeBuilder.Register(&DefinitionRevision{}, &DefinitionRevisionList{})
SchemeBuilder.Register(&ScopeDefinition{}, &ScopeDefinitionList{})
SchemeBuilder.Register(&Application{}, &ApplicationList{})
SchemeBuilder.Register(&ApplicationRevision{}, &ApplicationRevisionList{})
SchemeBuilder.Register(&ResourceTracker{}, &ResourceTrackerList{})

View File

@@ -1,117 +0,0 @@
/*
Copyright 2025 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
"reflect"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/runtime/schema"
)
func TestDefinitionTypeMap(t *testing.T) {
tests := []struct {
name string
defType reflect.Type
expectedGVR schema.GroupVersionResource
expectedKind string
}{
{
name: "ComponentDefinition",
defType: reflect.TypeOf(ComponentDefinition{}),
expectedGVR: ComponentDefinitionGVR,
expectedKind: ComponentDefinitionKind,
},
{
name: "TraitDefinition",
defType: reflect.TypeOf(TraitDefinition{}),
expectedGVR: TraitDefinitionGVR,
expectedKind: TraitDefinitionKind,
},
{
name: "PolicyDefinition",
defType: reflect.TypeOf(PolicyDefinition{}),
expectedGVR: PolicyDefinitionGVR,
expectedKind: PolicyDefinitionKind,
},
{
name: "WorkflowStepDefinition",
defType: reflect.TypeOf(WorkflowStepDefinition{}),
expectedGVR: WorkflowStepDefinitionGVR,
expectedKind: WorkflowStepDefinitionKind,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
info, ok := DefinitionTypeMap[tt.defType]
assert.Truef(t, ok, "Type %v should exist in DefinitionTypeMap", tt.defType)
assert.Equal(t, tt.expectedGVR, info.GVR)
assert.Equal(t, tt.expectedKind, info.Kind)
// Verify GVR follows Kubernetes conventions
assert.Equal(t, Group, info.GVR.Group)
assert.Equal(t, Version, info.GVR.Version)
// Resource should be lowercase plural of Kind
assert.Equal(t, strings.ToLower(info.Kind)+"s", info.GVR.Resource)
})
}
}
func TestDefinitionTypeMapCompleteness(t *testing.T) {
// Ensure all expected definition types are in the map
expectedTypes := []reflect.Type{
reflect.TypeOf(ComponentDefinition{}),
reflect.TypeOf(TraitDefinition{}),
reflect.TypeOf(PolicyDefinition{}),
reflect.TypeOf(WorkflowStepDefinition{}),
}
assert.Equal(t, len(expectedTypes), len(DefinitionTypeMap), "DefinitionTypeMap should contain exactly %d entries", len(expectedTypes))
for _, expectedType := range expectedTypes {
_, ok := DefinitionTypeMap[expectedType]
assert.Truef(t, ok, "DefinitionTypeMap should contain %v", expectedType)
}
}
func TestDefinitionKindValues(t *testing.T) {
// Verify that the Kind values match the actual type names
tests := []struct {
defType interface{}
expectedKind string
}{
{ComponentDefinition{}, "ComponentDefinition"},
{TraitDefinition{}, "TraitDefinition"},
{PolicyDefinition{}, "PolicyDefinition"},
{WorkflowStepDefinition{}, "WorkflowStepDefinition"},
}
for _, tt := range tests {
t.Run(tt.expectedKind, func(t *testing.T) {
actualKind := reflect.TypeOf(tt.defType).Name()
assert.Equal(t, tt.expectedKind, actualKind)
// Also verify it matches what's in the map
info, ok := DefinitionTypeMap[reflect.TypeOf(tt.defType)]
assert.True(t, ok)
assert.Equal(t, tt.expectedKind, info.Kind)
})
}
}

View File

@@ -32,6 +32,7 @@ import (
"github.com/kubevela/pkg/util/compression"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/interfaces"
velatypes "github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/oam"
velaerr "github.com/oam-dev/kubevela/pkg/utils/errors"
@@ -52,7 +53,8 @@ type ResourceTracker struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ResourceTrackerSpec `json:"spec,omitempty"`
Spec ResourceTrackerSpec `json:"spec,omitempty"`
Status ResourceTrackerStatus `json:"status,omitempty"`
}
// ResourceTrackerType defines the type of resourceTracker
@@ -138,7 +140,7 @@ type ManagedResource struct {
}
// Equal checks if two managed resources are equal
func (in *ManagedResource) Equal(r ManagedResource) bool {
func (in ManagedResource) Equal(r ManagedResource) bool {
if !in.ClusterObjectReference.Equal(r.ClusterObjectReference) {
return false
}
@@ -149,7 +151,7 @@ func (in *ManagedResource) Equal(r ManagedResource) bool {
}
// DisplayName readable name for locating resource
func (in *ManagedResource) DisplayName() string {
func (in ManagedResource) DisplayName() string {
s := in.Kind + " " + in.Name
if in.Namespace != "" || in.Cluster != "" {
s += " ("
@@ -168,12 +170,12 @@ func (in *ManagedResource) DisplayName() string {
}
// NamespacedName namespacedName
func (in *ManagedResource) NamespacedName() types.NamespacedName {
func (in ManagedResource) NamespacedName() types.NamespacedName {
return types.NamespacedName{Namespace: in.Namespace, Name: in.Name}
}
// ResourceKey computes the key for managed resource, resources with the same key points to the same resource
func (in *ManagedResource) ResourceKey() string {
func (in ManagedResource) ResourceKey() string {
group := in.GroupVersionKind().Group
kind := in.GroupVersionKind().Kind
cluster := in.Cluster
@@ -184,12 +186,12 @@ func (in *ManagedResource) ResourceKey() string {
}
// ComponentKey computes the key for the component which managed resource belongs to
func (in *ManagedResource) ComponentKey() string {
return strings.Join([]string{in.Cluster, in.Component}, "/")
func (in ManagedResource) ComponentKey() string {
return strings.Join([]string{in.Env, in.Component}, "/")
}
// UnmarshalTo unmarshal ManagedResource into target object
func (in *ManagedResource) UnmarshalTo(obj interface{}) error {
func (in ManagedResource) UnmarshalTo(obj interface{}) error {
if in.Data == nil || in.Data.Raw == nil {
return velaerr.ManagedResourceHasNoDataError{}
}
@@ -197,7 +199,7 @@ func (in *ManagedResource) UnmarshalTo(obj interface{}) error {
}
// ToUnstructured converts managed resource into unstructured
func (in *ManagedResource) ToUnstructured() *unstructured.Unstructured {
func (in ManagedResource) ToUnstructured() *unstructured.Unstructured {
obj := &unstructured.Unstructured{}
obj.SetGroupVersionKind(in.GroupVersionKind())
obj.SetName(in.Name)
@@ -209,7 +211,7 @@ func (in *ManagedResource) ToUnstructured() *unstructured.Unstructured {
}
// ToUnstructuredWithData converts managed resource into unstructured and unmarshal data
func (in *ManagedResource) ToUnstructuredWithData() (*unstructured.Unstructured, error) {
func (in ManagedResource) ToUnstructuredWithData() (*unstructured.Unstructured, error) {
obj := in.ToUnstructured()
if err := in.UnmarshalTo(obj); err != nil {
if errors.Is(err, velaerr.ManagedResourceHasNoDataError{}) {
@@ -219,6 +221,13 @@ func (in *ManagedResource) ToUnstructuredWithData() (*unstructured.Unstructured,
return obj, nil
}
// ResourceTrackerStatus defines the status of resourceTracker
// For backward-compatibility
type ResourceTrackerStatus struct {
// Deprecated
TrackedResources []common.ClusterObjectReference `json:"trackedResources,omitempty"`
}
// +kubebuilder:object:root=true
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
@@ -316,3 +325,29 @@ func (in *ResourceTracker) DeleteManagedResource(rsc client.Object, remove bool)
}
return true
}
// addClusterObjectReference
// Deprecated
func (in *ResourceTracker) addClusterObjectReference(ref common.ClusterObjectReference) bool {
for _, _rsc := range in.Status.TrackedResources {
if _rsc.Equal(ref) {
return true
}
}
in.Status.TrackedResources = append(in.Status.TrackedResources, ref)
return false
}
// AddTrackedResource add new object reference into tracked resources, return if already exists
// Deprecated
func (in *ResourceTracker) AddTrackedResource(rsc interfaces.TrackableResource) bool {
return in.addClusterObjectReference(common.ClusterObjectReference{
ObjectReference: corev1.ObjectReference{
APIVersion: rsc.GetAPIVersion(),
Kind: rsc.GetKind(),
Name: rsc.GetName(),
Namespace: rsc.GetNamespace(),
UID: rsc.GetUID(),
},
})
}

View File

@@ -31,7 +31,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/utils/ptr"
"k8s.io/utils/pointer"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/pkg/oam"
@@ -124,16 +124,17 @@ func TestManagedResourceKeys(t *testing.T) {
},
},
OAMObjectReference: common.OAMObjectReference{
Env: "env",
Component: "component",
Trait: "trait",
},
}
r.Equal("namespace/name", input.NamespacedName().String())
r.Equal("apps/Deployment/cluster/namespace/name", input.ResourceKey())
r.Equal("cluster/component", input.ComponentKey())
r.Equal("env/component", input.ComponentKey())
r.Equal("Deployment name (Cluster: cluster, Namespace: namespace)", input.DisplayName())
var deploy1, deploy2 appsv1.Deployment
deploy1.Spec.Replicas = ptr.To(int32(5))
deploy1.Spec.Replicas = pointer.Int32(5)
bs, err := json.Marshal(deploy1)
r.NoError(err)
r.ErrorIs(input.UnmarshalTo(&deploy2), errors.ManagedResourceHasNoDataError{})
@@ -168,7 +169,7 @@ func TestResourceTracker_ManagedResource(t *testing.T) {
pod3 := corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod3"}}
input.AddManagedResource(&pod3, false, false, "")
r.Equal(3, len(input.Spec.ManagedResources))
deploy1.Spec.Replicas = ptr.To(int32(5))
deploy1.Spec.Replicas = pointer.Int32(5)
input.AddManagedResource(&deploy1, false, false, "")
r.Equal(3, len(input.Spec.ManagedResources))
input.DeleteManagedResource(&cm2, false)
@@ -199,11 +200,15 @@ func TestResourceTrackerCompression(t *testing.T) {
"../../../charts/vela-core/crds/core.oam.dev_applicationrevisions.yaml",
"../../../charts/vela-core/crds/core.oam.dev_applications.yaml",
"../../../charts/vela-core/crds/core.oam.dev_definitionrevisions.yaml",
"../../../charts/vela-core/crds/core.oam.dev_healthscopes.yaml",
"../../../charts/vela-core/crds/core.oam.dev_traitdefinitions.yaml",
"../../../charts/vela-core/crds/core.oam.dev_componentdefinitions.yaml",
"../../../charts/vela-core/crds/core.oam.dev_workloaddefinitions.yaml",
"../../../charts/vela-core/crds/standard.oam.dev_rollouts.yaml",
"../../../charts/vela-core/templates/kubevela-controller.yaml",
"../../../charts/vela-core/README.md",
"../../../pkg/workflow/providers/legacy/query/testdata/machinelearning.seldon.io_seldondeployments.yaml",
"../../../pkg/velaql/providers/query/testdata/machinelearning.seldon.io_seldondeployments.yaml",
"../../../legacy/charts/vela-core-legacy/crds/standard.oam.dev_podspecworkloads.yaml",
}
for _, p := range paths {
b, err := os.ReadFile(p)

View File

@@ -33,9 +33,6 @@ type WorkflowStepDefinitionSpec struct {
// Only CUE schematic is supported for now.
// +optional
Schematic *common.Schematic `json:"schematic,omitempty"`
// +optional
Version string `json:"version,omitempty"`
}
// WorkflowStepDefinitionStatus is the status of WorkflowStepDefinition

View File

@@ -1,7 +1,8 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2023 The KubeVela Authors.
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -21,7 +22,8 @@ limitations under the License.
package v1beta1
import (
"github.com/kubevela/pkg/apis/oam/v1alpha1"
"github.com/kubevela/workflow/api/v1alpha1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
@@ -140,18 +142,9 @@ func (in *ApplicationRevisionCompressibleFields) DeepCopyInto(out *ApplicationRe
in.Application.DeepCopyInto(&out.Application)
if in.ComponentDefinitions != nil {
in, out := &in.ComponentDefinitions, &out.ComponentDefinitions
*out = make(map[string]*ComponentDefinition, len(*in))
*out = make(map[string]ComponentDefinition, len(*in))
for key, val := range *in {
var outVal *ComponentDefinition
if val == nil {
(*out)[key] = nil
} else {
inVal := (*in)[key]
in, out := &inVal, &outVal
*out = new(ComponentDefinition)
(*in).DeepCopyInto(*out)
}
(*out)[key] = outVal
(*out)[key] = *val.DeepCopy()
}
}
if in.WorkloadDefinitions != nil {
@@ -163,18 +156,16 @@ func (in *ApplicationRevisionCompressibleFields) DeepCopyInto(out *ApplicationRe
}
if in.TraitDefinitions != nil {
in, out := &in.TraitDefinitions, &out.TraitDefinitions
*out = make(map[string]*TraitDefinition, len(*in))
*out = make(map[string]TraitDefinition, len(*in))
for key, val := range *in {
var outVal *TraitDefinition
if val == nil {
(*out)[key] = nil
} else {
inVal := (*in)[key]
in, out := &inVal, &outVal
*out = new(TraitDefinition)
(*in).DeepCopyInto(*out)
}
(*out)[key] = outVal
(*out)[key] = *val.DeepCopy()
}
}
if in.ScopeDefinitions != nil {
in, out := &in.ScopeDefinitions, &out.ScopeDefinitions
*out = make(map[string]ScopeDefinition, len(*in))
for key, val := range *in {
(*out)[key] = *val.DeepCopy()
}
}
if in.PolicyDefinitions != nil {
@@ -186,18 +177,16 @@ func (in *ApplicationRevisionCompressibleFields) DeepCopyInto(out *ApplicationRe
}
if in.WorkflowStepDefinitions != nil {
in, out := &in.WorkflowStepDefinitions, &out.WorkflowStepDefinitions
*out = make(map[string]*WorkflowStepDefinition, len(*in))
*out = make(map[string]WorkflowStepDefinition, len(*in))
for key, val := range *in {
var outVal *WorkflowStepDefinition
if val == nil {
(*out)[key] = nil
} else {
inVal := (*in)[key]
in, out := &inVal, &outVal
*out = new(WorkflowStepDefinition)
(*in).DeepCopyInto(*out)
}
(*out)[key] = outVal
(*out)[key] = *val.DeepCopy()
}
}
if in.ScopeGVK != nil {
in, out := &in.ScopeGVK, &out.ScopeGVK
*out = make(map[string]v1.GroupVersionKind, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Policies != nil {
@@ -552,22 +541,6 @@ func (in *DefinitionRevisionSpec) DeepCopy() *DefinitionRevisionSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DefinitionTypeInfo) DeepCopyInto(out *DefinitionTypeInfo) {
*out = *in
out.GVR = in.GVR
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DefinitionTypeInfo.
func (in *DefinitionTypeInfo) DeepCopy() *DefinitionTypeInfo {
if in == nil {
return nil
}
out := new(DefinitionTypeInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ManagedResource) DeepCopyInto(out *ManagedResource) {
*out = *in
@@ -697,6 +670,7 @@ func (in *ResourceTracker) DeepCopyInto(out *ResourceTracker) {
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceTracker.
@@ -788,6 +762,105 @@ func (in *ResourceTrackerSpec) DeepCopy() *ResourceTrackerSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceTrackerStatus) DeepCopyInto(out *ResourceTrackerStatus) {
*out = *in
if in.TrackedResources != nil {
in, out := &in.TrackedResources, &out.TrackedResources
*out = make([]common.ClusterObjectReference, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceTrackerStatus.
func (in *ResourceTrackerStatus) DeepCopy() *ResourceTrackerStatus {
if in == nil {
return nil
}
out := new(ResourceTrackerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScopeDefinition) DeepCopyInto(out *ScopeDefinition) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScopeDefinition.
func (in *ScopeDefinition) DeepCopy() *ScopeDefinition {
if in == nil {
return nil
}
out := new(ScopeDefinition)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ScopeDefinition) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScopeDefinitionList) DeepCopyInto(out *ScopeDefinitionList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]ScopeDefinition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScopeDefinitionList.
func (in *ScopeDefinitionList) DeepCopy() *ScopeDefinitionList {
if in == nil {
return nil
}
out := new(ScopeDefinitionList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ScopeDefinitionList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScopeDefinitionSpec) DeepCopyInto(out *ScopeDefinitionSpec) {
*out = *in
out.Reference = in.Reference
if in.Extension != nil {
in, out := &in.Extension, &out.Extension
*out = new(runtime.RawExtension)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScopeDefinitionSpec.
func (in *ScopeDefinitionSpec) DeepCopy() *ScopeDefinitionSpec {
if in == nil {
return nil
}
out := new(ScopeDefinitionSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TraitDefinition) DeepCopyInto(out *TraitDefinition) {
*out = *in

View File

@@ -25,6 +25,10 @@ limitations under the License.
// Generate deepcopy methodsets and CRD manifests
//go:generate go run -tags generate sigs.k8s.io/controller-tools/cmd/controller-gen object:headerFile=../hack/boilerplate.go.txt paths=./... crd:crdVersions=v1,generateEmbeddedObjectMeta=true output:artifacts:config=../config/crd/base
// Generate legacy_support for K8s 1.12~1.15 versions CRD manifests
//go:generate go run -tags generate sigs.k8s.io/controller-tools/cmd/controller-gen object:headerFile=../hack/boilerplate.go.txt paths=./... crd:generateEmbeddedObjectMeta=true output:artifacts:config=../legacy/charts/vela-core-legacy/crds
//go:generate go run ../legacy/convert/main.go ../legacy/charts/vela-core-legacy/crds
package apis
import (

View File

@@ -0,0 +1,35 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package interfaces
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
// ObjectOwner is the interface for getting and setting ownerReferences
type ObjectOwner interface {
GetOwnerReferences() []metav1.OwnerReference
SetOwnerReferences([]metav1.OwnerReference)
}
// TrackableResource is the interface for resources to be tracked by resourcetracker
type TrackableResource interface {
client.Object
metav1.Type
ObjectOwner
}
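A minimal sketch of how the ObjectOwner half of these interfaces might be used, written as if it lived alongside the interfaces above (so metav1 is already imported); the helper name AppendOwnerReference is an assumption for illustration, not code from this change.
// AppendOwnerReference is a hypothetical helper that appends an ownerReference
// to any resource satisfying ObjectOwner (and therefore any TrackableResource).
func AppendOwnerReference(obj ObjectOwner, owner metav1.OwnerReference) {
	// read the current ownerReferences, append the new one, and write them back
	refs := append(obj.GetOwnerReferences(), owner)
	obj.SetOwnerReferences(refs)
}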

View File

@@ -0,0 +1,43 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the standard v1alpha1 API group
// +kubebuilder:object:generate=true
// +groupName=standard.oam.dev
package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
const (
// GroupName of the CRDs
GroupName = "standard.oam.dev"
// Version of the group of CRDs
Version = "v1alpha1"
)
var (
// SchemeGroupVersion is group version used to register these objects
SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: Version}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: SchemeGroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)
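A hedged sketch of consuming the exported AddToScheme from a client program, following the usual controller-runtime pattern; the import path github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1 and the buildScheme helper are assumptions for illustration.
package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"

	standardv1alpha1 "github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
)

func buildScheme() (*runtime.Scheme, error) {
	s := runtime.NewScheme()
	// register built-in Kubernetes types first
	if err := clientgoscheme.AddToScheme(s); err != nil {
		return nil, err
	}
	// register the standard.oam.dev/v1alpha1 types (Rollout, RolloutList, ...)
	if err := standardv1alpha1.AddToScheme(s); err != nil {
		return nil, err
	}
	return s, nil
}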

View File

@@ -14,19 +14,22 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package gen_sdk
package v1alpha1
import "embed"
import (
"reflect"
var (
//go:embed openapi-generator/templates
// Templates contains different template files for different languages
Templates embed.FS
// SupportedLangs is supported languages
SupportedLangs = map[string]bool{"go": true}
//go:embed _scaffold
// Scaffold is scaffold files for different languages
Scaffold embed.FS
// ScaffoldDir is scaffold dir name
ScaffoldDir = "_scaffold"
"k8s.io/apimachinery/pkg/runtime/schema"
)
// Rollout type metadata
var (
RolloutKind = reflect.TypeOf(Rollout{}).Name()
RolloutGroupKind = schema.GroupKind{Group: GroupName, Kind: RolloutKind}.String()
RolloutKindAPIVersion = RolloutKind + "." + SchemeGroupVersion.String()
RolloutKindVersionKind = SchemeGroupVersion.WithKind(RolloutKind)
)
func init() {
SchemeBuilder.Register(&Rollout{}, &RolloutList{})
}

View File

@@ -0,0 +1,285 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
)
// RolloutStrategyType defines strategies for pods rollout
type RolloutStrategyType string
const (
// IncreaseFirstRolloutStrategyType indicates that we increase the target resources first
IncreaseFirstRolloutStrategyType RolloutStrategyType = "IncreaseFirst"
// DecreaseFirstRolloutStrategyType indicates that we decrease the source resources first
DecreaseFirstRolloutStrategyType RolloutStrategyType = "DecreaseFirst"
)
// HookType can be pre, post or during rollout
type HookType string
const (
// InitializeRolloutHook execute webhook during the rollout initializing phase
InitializeRolloutHook HookType = "initialize-rollout"
// PreBatchRolloutHook execute webhook before each batch rollout
PreBatchRolloutHook HookType = "pre-batch-rollout"
// PostBatchRolloutHook execute webhook after each batch rollout
PostBatchRolloutHook HookType = "post-batch-rollout"
// FinalizeRolloutHook execute the webhook during the rollout finalizing phase
FinalizeRolloutHook HookType = "finalize-rollout"
)
// RollingState is the overall rollout state
type RollingState string
const (
// LocatingTargetAppState indicates that the rollout is in the stage of locating target app
// we use this state to make sure we specially handle the target app successfully only once
LocatingTargetAppState RollingState = "locatingTargetApp"
// VerifyingSpecState indicates that the rollout is in the stage of verifying the rollout settings
// and the controller can locate both the target and the source
VerifyingSpecState RollingState = "verifyingSpec"
// InitializingState indicates that the rollout is initializing all the new resources
InitializingState RollingState = "initializing"
// RollingInBatchesState indicates that the rollout starts rolling
RollingInBatchesState RollingState = "rollingInBatches"
// FinalisingState indicates that the rollout is finalizing, possibly cleaning up the old resources and adjusting traffic
FinalisingState RollingState = "finalising"
// RolloutFailingState indicates that the rollout is failing
// one needs to finalize it before marking it as failed, by cleaning up the old resources and adjusting traffic
RolloutFailingState RollingState = "rolloutFailing"
// RolloutSucceedState indicates that rollout successfully completed to match the desired target state
RolloutSucceedState RollingState = "rolloutSucceed"
// RolloutAbandoningState indicates that the rollout is being abandoned
// we need to finalize it by cleaning up the old resources, adjusting traffic and returning control back to its owner
RolloutAbandoningState RollingState = "rolloutAbandoning"
// RolloutDeletingState indicates that the rollout is being deleted
// we need to finalize it by cleaning up the old resources, adjusting traffic and returning control back to its owner
RolloutDeletingState RollingState = "RolloutDeletingState"
// RolloutFailedState indicates that rollout is failed, the target replica is not reached
// we cannot move forward anymore; we will let the client decide when or whether to revert.
RolloutFailedState RollingState = "rolloutFailed"
)
// BatchRollingState is the sub state when the rollout is on the fly
type BatchRollingState string
const (
// BatchInitializingState indicates that the batch is being initialized; the batch rolling has not started yet
BatchInitializingState BatchRollingState = "batchInitializing"
// BatchInRollingState still rolling the batch, the batch rolling is not completed yet
BatchInRollingState BatchRollingState = "batchInRolling"
// BatchVerifyingState indicates that we are verifying whether the application is ready to roll.
BatchVerifyingState BatchRollingState = "batchVerifying"
// BatchRolloutFailedState indicates that the batch didn't get the manual or automatic approval
BatchRolloutFailedState BatchRollingState = "batchVerifyFailed"
// BatchFinalizingState indicates that all the pods in the batch are available, so we can move on to the next batch
BatchFinalizingState BatchRollingState = "batchFinalizing"
// BatchReadyState indicates that all the pods in the batch are upgraded and the batch state is ready
BatchReadyState BatchRollingState = "batchReady"
)
// RolloutPlan defines the details of the rollout plan
type RolloutPlan struct {
// RolloutStrategy defines strategies for the rollout plan
// The default is IncreaseFirstRolloutStrategyType
// +optional
RolloutStrategy RolloutStrategyType `json:"rolloutStrategy,omitempty"`
// The size of the target resource. The default is the same
// as the size of the source resource.
// +optional
TargetSize *int32 `json:"targetSize,omitempty"`
// The number of batches, default = 1
// +optional
NumBatches *int32 `json:"numBatches,omitempty"`
// The exact distribution among batches.
// its size has to be exactly the same as the NumBatches (if set)
// The total number cannot exceed the targetSize or the size of the source resource
// We will IGNORE the last batch's replica field if it's a percentage, since rounding errors can lead to an inaccurate sum
// We highly recommend leaving the last batch's replica field empty
// +optional
RolloutBatches []RolloutBatch `json:"rolloutBatches,omitempty"`
// All pods in the batches up to the batchPartition (included) will have
// the target resource specification while the rest still have the source resource
// This is designed for the operators to roll out manually
// Default is the number of batches, which will roll out all the batches
// +optional
BatchPartition *int32 `json:"batchPartition,omitempty"`
// Paused the rollout, default is false
// +optional
Paused bool `json:"paused,omitempty"`
// RolloutWebhooks provide a way for the rollout to interact with an external process
// +optional
RolloutWebhooks []RolloutWebhook `json:"rolloutWebhooks,omitempty"`
// CanaryMetric provides a way for the rollout process to automatically check certain metrics
// before completing the process
// +optional
CanaryMetric []CanaryMetric `json:"canaryMetric,omitempty"`
}
// RolloutBatch is used to describe how each batch rollout should be performed
type RolloutBatch struct {
// Replicas is the number of pods to upgrade in this batch
// it can be an absolute number (ex: 5) or a percentage of total pods
// we will ignore the percentage of the last batch to just fill the gap
// +optional
// it is mutually exclusive with the PodList field
Replicas intstr.IntOrString `json:"replicas,omitempty"`
// The list of Pods to get upgraded
// +optional
// it is mutually exclusive with the Replicas field
PodList []string `json:"podList,omitempty"`
// MaxUnavailable is the max allowed number of pods that is unavailable
// during the upgrade. We will mark the batch as ready as long as the number of
// unavailable pods is less than or equal to this number.
// default = 0
// +optional
MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty"`
// The wait time, in seconds, between instance upgrades, default = 0
// +optional
InstanceInterval *int32 `json:"instanceInterval,omitempty"`
// RolloutWebhooks provides a way for the batch rollout to interact with an external process
// +optional
BatchRolloutWebhooks []RolloutWebhook `json:"batchRolloutWebhooks,omitempty"`
// CanaryMetric provides a way for the batch rollout process to automatically check certain metrics
// before moving to the next batch
// +optional
CanaryMetric []CanaryMetric `json:"canaryMetric,omitempty"`
}
// RolloutWebhook holds the reference to external checks used for canary analysis
type RolloutWebhook struct {
// Type of this webhook
Type HookType `json:"type"`
// Name of this webhook
Name string `json:"name"`
// URL address of this webhook
URL string `json:"url"`
// Method is the HTTP call method, default is POST
Method string `json:"method,omitempty"`
// ExpectedStatus contains all the expected HTTP status codes that we will accept as success
ExpectedStatus []int `json:"expectedStatus,omitempty"`
// Metadata (key-value pairs) for this webhook
// +optional
Metadata *map[string]string `json:"metadata,omitempty"`
}
// RolloutWebhookPayload holds the info and metadata sent to webhooks
type RolloutWebhookPayload struct {
// Name of the upgrading resource
Name string `json:"name"`
// Namespace of the upgrading resource
Namespace string `json:"namespace"`
// Phase of the rollout
Phase string `json:"phase"`
// Metadata (key-value pairs) are the extra data sent to this webhook
Metadata map[string]string `json:"metadata,omitempty"`
}
// CanaryMetric holds the reference to metrics used for canary analysis
type CanaryMetric struct {
// Name of the metric
Name string `json:"name"`
// Interval represents the window size
Interval string `json:"interval,omitempty"`
// Range value accepted for this metric
// +optional
MetricsRange *MetricsExpectedRange `json:"metricsRange,omitempty"`
// TemplateRef references a metric template object
// +optional
TemplateRef *corev1.ObjectReference `json:"templateRef,omitempty"`
}
// MetricsExpectedRange defines the range used for metrics validation
type MetricsExpectedRange struct {
// Minimum value
// +optional
Min *intstr.IntOrString `json:"min,omitempty"`
// Maximum value
// +optional
Max *intstr.IntOrString `json:"max,omitempty"`
}
// RolloutStatus defines the observed state of a rollout plan
type RolloutStatus struct {
// Conditions represents the latest available observations of the rollout's current state.
condition.ConditionedStatus `json:",inline"`
// RolloutOriginalSize is the original size of the target resources. This is determined once the initial spec verification
// passes and does not change until the rollout is restarted
RolloutOriginalSize int32 `json:"rolloutOriginalSize,omitempty"`
// RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification
// passes and does not change until the rollout is restarted
RolloutTargetSize int32 `json:"rolloutTargetSize,omitempty"`
// NewPodTemplateIdentifier is a string that uniquely represents the new pod template
// each workload type could use different ways to identify that so we cannot compare between resources
NewPodTemplateIdentifier string `json:"targetGeneration,omitempty"`
// lastAppliedPodTemplateIdentifier is a string that uniquely represents the last pod template
// each workload type could use different ways to identify that so we cannot compare between resources
// We update this field only after a successful rollout
LastAppliedPodTemplateIdentifier string `json:"lastAppliedPodTemplateIdentifier,omitempty"`
// RollingState is the Rollout State
RollingState RollingState `json:"rollingState"`
// BatchRollingState is only meaningful when the Status is rolling
// +optional
BatchRollingState BatchRollingState `json:"batchRollingState"`
// The current batch the rollout is working on/blocked
// it starts from 0
CurrentBatch int32 `json:"currentBatch"`
// UpgradedReplicas is the number of Pods upgraded by the rollout controller
UpgradedReplicas int32 `json:"upgradedReplicas"`
// UpgradedReadyReplicas is the number of Pods upgraded by the rollout controller that have a Ready Condition.
UpgradedReadyReplicas int32 `json:"upgradedReadyReplicas"`
}
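To make the shape of these types concrete, here is a hedged sketch of constructing a two-batch RolloutPlan in Go; the int32Ptr helper, the import path and the specific percentages are illustrative assumptions, not part of this change.
package main

import (
	"k8s.io/apimachinery/pkg/util/intstr"

	standardv1alpha1 "github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
)

func int32Ptr(i int32) *int32 { return &i }

// examplePlan rolls pods out in two batches: 20% first, the remainder second.
// BatchPartition=0 holds the rollout after the first batch until an operator raises it.
func examplePlan() standardv1alpha1.RolloutPlan {
	return standardv1alpha1.RolloutPlan{
		RolloutStrategy: standardv1alpha1.IncreaseFirstRolloutStrategyType,
		NumBatches:      int32Ptr(2),
		BatchPartition:  int32Ptr(0),
		RolloutBatches: []standardv1alpha1.RolloutBatch{
			{Replicas: intstr.FromString("20%")},
			// the last batch's percentage is effectively ignored and just fills the gap
			{Replicas: intstr.FromString("80%")},
		},
	}
}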

View File

@@ -0,0 +1,430 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"fmt"
"time"
"github.com/oam-dev/kubevela/apis/core.oam.dev/condition"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/klog/v2"
)
// RolloutEvent is used to describe the events during rollout
type RolloutEvent string
const (
// RollingFailedEvent indicates that we encountered an unexpected error during upgrading that can't be retried
RollingFailedEvent RolloutEvent = "RollingFailedEvent"
// RollingRetriableFailureEvent indicates that we encountered an unexpected but retriable error
RollingRetriableFailureEvent RolloutEvent = "RollingRetriableFailureEvent"
// AppLocatedEvent indicates that apps are located successfully
AppLocatedEvent RolloutEvent = "AppLocatedEvent"
// RollingModifiedEvent indicates that the rolling target or source has changed
RollingModifiedEvent RolloutEvent = "RollingModifiedEvent"
// RollingDeletedEvent indicates that the rolling is being deleted
RollingDeletedEvent RolloutEvent = "RollingDeletedEvent"
// RollingSpecVerifiedEvent indicates that we have successfully verified the rollout spec
RollingSpecVerifiedEvent RolloutEvent = "RollingSpecVerifiedEvent"
// RollingInitializedEvent indicates that we have finished initializing all the workload resources
RollingInitializedEvent RolloutEvent = "RollingInitializedEvent"
// AllBatchFinishedEvent indicates that all batches are upgraded
AllBatchFinishedEvent RolloutEvent = "AllBatchFinishedEvent"
// RollingFinalizedEvent indicates that we have finalized the rollout, which includes but is not
// limited to resource garbage collection
RollingFinalizedEvent RolloutEvent = "AllBatchFinishedEvent"
// InitializedOneBatchEvent indicates that we have successfully rolled out one batch
InitializedOneBatchEvent RolloutEvent = "InitializedOneBatchEvent"
// FinishedOneBatchEvent indicates that we have successfully rolled out one batch
FinishedOneBatchEvent RolloutEvent = "FinishedOneBatchEvent"
// RolloutOneBatchEvent indicates that we have rolled out one batch
RolloutOneBatchEvent RolloutEvent = "RolloutOneBatchEvent"
// OneBatchAvailableEvent indicates that the batch resource is considered available
// this event comes after we have examined the pod readiness check and traffic shifting if needed
OneBatchAvailableEvent RolloutEvent = "OneBatchAvailable"
// BatchRolloutApprovedEvent indicates that we got the approval manually
BatchRolloutApprovedEvent RolloutEvent = "BatchRolloutApprovedEvent"
// BatchRolloutFailedEvent indicates that we are waiting for the approval of the
BatchRolloutFailedEvent RolloutEvent = "BatchRolloutFailedEvent"
)
// These are valid conditions of the rollout.
const (
// RolloutSpecVerifying indicates that the rollout just started with verification
RolloutSpecVerifying condition.ConditionType = "RolloutSpecVerifying"
// RolloutInitializing means we start to initialize the cluster
RolloutInitializing condition.ConditionType = "RolloutInitializing"
// RolloutInProgress means we are upgrading resources.
RolloutInProgress condition.ConditionType = "RolloutInProgress"
// RolloutFinalizing means the rollout is finalizing
RolloutFinalizing condition.ConditionType = "RolloutFinalizing"
// RolloutFailing means the rollout is failing
RolloutFailing condition.ConditionType = "RolloutFailing"
// RolloutAbandoning means that the rollout is being abandoned.
RolloutAbandoning condition.ConditionType = "RolloutAbandoning"
// RolloutDeleting means that the rollout is being deleted.
RolloutDeleting condition.ConditionType = "RolloutDeleting"
// RolloutFailed means that the rollout failed.
RolloutFailed condition.ConditionType = "RolloutFailed"
// RolloutSucceed means that the rollout is done.
RolloutSucceed condition.ConditionType = "RolloutSucceed"
// BatchInitializing
BatchInitializing condition.ConditionType = "BatchInitializing"
// BatchPaused
BatchPaused condition.ConditionType = "BatchPaused"
// BatchVerifying
BatchVerifying condition.ConditionType = "BatchVerifying"
// BatchRolloutFailed
BatchRolloutFailed condition.ConditionType = "BatchRolloutFailed"
// BatchFinalizing
BatchFinalizing condition.ConditionType = "BatchFinalizing"
// BatchReady
BatchReady condition.ConditionType = "BatchReady"
)
// NewPositiveCondition creates a positive condition type
func NewPositiveCondition(condType condition.ConditionType) condition.Condition {
return condition.Condition{
Type: condType,
Status: v1.ConditionTrue,
LastTransitionTime: metav1.NewTime(time.Now()),
}
}
// NewNegativeCondition creates a false condition type
func NewNegativeCondition(condType condition.ConditionType, message string) condition.Condition {
return condition.Condition{
Type: condType,
Status: v1.ConditionFalse,
LastTransitionTime: metav1.NewTime(time.Now()),
Message: message,
}
}
const invalidRollingStateTransition = "the rollout state transition from `%s` state with `%s` is invalid"
const invalidBatchRollingStateTransition = "the batch rolling state transition from `%s` state with `%s` is invalid"
func (r *RolloutStatus) getRolloutConditionType() condition.ConditionType {
// figure out which condition type we should put in the condition, depending on its state
switch r.RollingState {
case VerifyingSpecState:
return RolloutSpecVerifying
case InitializingState:
return RolloutInitializing
case RollingInBatchesState:
switch r.BatchRollingState {
case BatchInitializingState:
return BatchInitializing
case BatchVerifyingState:
return BatchVerifying
case BatchFinalizingState:
return BatchFinalizing
case BatchRolloutFailedState:
return BatchRolloutFailed
case BatchReadyState:
return BatchReady
default:
return RolloutInProgress
}
case FinalisingState:
return RolloutFinalizing
case RolloutFailingState:
return RolloutFailing
case RolloutAbandoningState:
return RolloutAbandoning
case RolloutDeletingState:
return RolloutDeleting
case RolloutSucceedState:
return RolloutSucceed
default:
return RolloutFailed
}
}
// RolloutRetry is a special state transition since we need an error message
func (r *RolloutStatus) RolloutRetry(reason string) {
// we can still retry, no change on the state
r.SetConditions(NewNegativeCondition(r.getRolloutConditionType(), reason))
}
// RolloutFailed is a special state transition since we need an error message
func (r *RolloutStatus) RolloutFailed(reason string) {
// set the condition first which depends on the state
r.SetConditions(NewNegativeCondition(r.getRolloutConditionType(), reason))
r.RollingState = RolloutFailedState
}
// RolloutFailing is a special state transition that always moves the rollout state to the failing state
func (r *RolloutStatus) RolloutFailing(reason string) {
// set the condition first which depends on the state
r.SetConditions(NewNegativeCondition(r.getRolloutConditionType(), reason))
r.RollingState = RolloutFailingState
r.BatchRollingState = BatchInitializingState
}
// ResetStatus resets the status of the rollout to start from beginning
func (r *RolloutStatus) ResetStatus() {
r.NewPodTemplateIdentifier = ""
r.RolloutTargetSize = -1
r.LastAppliedPodTemplateIdentifier = ""
r.RollingState = LocatingTargetAppState
r.BatchRollingState = BatchInitializingState
r.CurrentBatch = 0
r.UpgradedReplicas = 0
r.UpgradedReadyReplicas = 0
}
// SetRolloutCondition sets the supplied condition, replacing any existing condition
// of the same type unless they are identical.
func (r *RolloutStatus) SetRolloutCondition(new condition.Condition) {
exists := false
for i, existing := range r.Conditions {
if existing.Type != new.Type {
continue
}
// we want to update the condition when the LTT changes
if existing.Type == new.Type &&
existing.Status == new.Status &&
existing.Reason == new.Reason &&
existing.Message == new.Message &&
existing.LastTransitionTime == new.LastTransitionTime {
exists = true
continue
}
r.Conditions[i] = new
exists = true
}
if !exists {
r.Conditions = append(r.Conditions, new)
}
}
// we can't panic since it will crash the other controllers
func (r *RolloutStatus) illegalStateTransition(err error) {
r.RolloutFailed(err.Error())
}
// StateTransition is the center place to do rollout state transition
// it marks the rollout as failed if the transition is invalid
// it changes the rollout state if the transition is valid
func (r *RolloutStatus) StateTransition(event RolloutEvent) {
rollingState := r.RollingState
batchRollingState := r.BatchRollingState
defer func() {
klog.InfoS("try to execute a rollout state transition",
"pre rolling state", rollingState,
"pre batch rolling state", batchRollingState,
"post rolling state", r.RollingState,
"post batch rolling state", r.BatchRollingState)
}()
// we have special transition for these types of event since they require additional info
if event == RollingFailedEvent || event == RollingRetriableFailureEvent {
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
return
}
// special handle modified event here
if event == RollingModifiedEvent {
if r.RollingState == RolloutDeletingState {
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
return
}
if r.RollingState == RolloutFailedState || r.RollingState == RolloutSucceedState {
r.ResetStatus()
} else {
r.SetRolloutCondition(NewNegativeCondition(r.getRolloutConditionType(), "Rollout Spec is modified"))
r.RollingState = RolloutAbandoningState
r.BatchRollingState = BatchInitializingState
}
return
}
// special handle deleted event here, it can happen at many states
if event == RollingDeletedEvent {
if r.RollingState == RolloutFailedState || r.RollingState == RolloutSucceedState {
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
return
}
r.SetRolloutCondition(NewNegativeCondition(r.getRolloutConditionType(), "Rollout is being deleted"))
r.RollingState = RolloutDeletingState
r.BatchRollingState = BatchInitializingState
return
}
// special handle appLocatedEvent event here, it only applies to one state but it's legal to happen at other states
if event == AppLocatedEvent {
if r.RollingState == LocatingTargetAppState {
r.RollingState = VerifyingSpecState
}
return
}
switch rollingState {
case VerifyingSpecState:
if event == RollingSpecVerifiedEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.RollingState = InitializingState
return
}
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
case InitializingState:
if event == RollingInitializedEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.RollingState = RollingInBatchesState
r.BatchRollingState = BatchInitializingState
return
}
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
case RollingInBatchesState:
r.batchStateTransition(event)
return
case RolloutAbandoningState:
if event == RollingFinalizedEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.ResetStatus()
return
}
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
case RolloutDeletingState:
if event == RollingFinalizedEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.RollingState = RolloutFailedState
return
}
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
case FinalisingState:
if event == RollingFinalizedEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.RollingState = RolloutSucceedState
return
}
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
case RolloutFailingState:
if event == RollingFinalizedEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.RollingState = RolloutFailedState
return
}
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
case RolloutSucceedState, RolloutFailedState:
r.illegalStateTransition(fmt.Errorf(invalidRollingStateTransition, rollingState, event))
default:
r.illegalStateTransition(fmt.Errorf("invalid rolling state %s before transition", rollingState))
}
}
// batchStateTransition handles the state transition when the rollout is in action
func (r *RolloutStatus) batchStateTransition(event RolloutEvent) {
batchRollingState := r.BatchRollingState
if event == BatchRolloutFailedEvent {
r.BatchRollingState = BatchRolloutFailedState
r.RollingState = RolloutFailedState
r.SetConditions(NewNegativeCondition(r.getRolloutConditionType(), "failed"))
return
}
switch batchRollingState {
case BatchInitializingState:
if event == InitializedOneBatchEvent {
r.BatchRollingState = BatchInRollingState
return
}
r.illegalStateTransition(fmt.Errorf(invalidBatchRollingStateTransition, batchRollingState, event))
case BatchInRollingState:
if event == RolloutOneBatchEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.BatchRollingState = BatchVerifyingState
return
}
r.illegalStateTransition(fmt.Errorf(invalidBatchRollingStateTransition, batchRollingState, event))
case BatchVerifyingState:
if event == OneBatchAvailableEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.BatchRollingState = BatchFinalizingState
return
}
r.illegalStateTransition(fmt.Errorf(invalidBatchRollingStateTransition, batchRollingState, event))
case BatchFinalizingState:
if event == FinishedOneBatchEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.BatchRollingState = BatchReadyState
return
}
if event == AllBatchFinishedEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
// transition out of the batch loop
r.BatchRollingState = BatchReadyState
r.RollingState = FinalisingState
return
}
r.illegalStateTransition(fmt.Errorf(invalidBatchRollingStateTransition, batchRollingState, event))
case BatchReadyState:
if event == BatchRolloutApprovedEvent {
r.SetRolloutCondition(NewPositiveCondition(r.getRolloutConditionType()))
r.BatchRollingState = BatchInitializingState
r.CurrentBatch++
return
}
r.illegalStateTransition(fmt.Errorf(invalidBatchRollingStateTransition, batchRollingState, event))
default:
r.illegalStateTransition(fmt.Errorf("invalid batch rolling state %s", batchRollingState))
}
}
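A hedged sketch of driving the state machine above: events are fed into StateTransition and the status advances (or records a failure condition on an illegal transition). The event sequence is a made-up happy-path prefix, and the import path is assumed from the repository layout.
package main

import (
	"fmt"

	standardv1alpha1 "github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
)

func main() {
	status := &standardv1alpha1.RolloutStatus{}
	status.ResetStatus() // starts in LocatingTargetAppState

	// a happy-path prefix of the rollout lifecycle
	for _, e := range []standardv1alpha1.RolloutEvent{
		standardv1alpha1.AppLocatedEvent,          // -> verifyingSpec
		standardv1alpha1.RollingSpecVerifiedEvent, // -> initializing
		standardv1alpha1.RollingInitializedEvent,  // -> rollingInBatches / batchInitializing
		standardv1alpha1.InitializedOneBatchEvent, // -> batchInRolling
	} {
		status.StateTransition(e)
	}
	fmt.Println(status.RollingState, status.BatchRollingState)
}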

View File

@@ -0,0 +1,77 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// Rollout is the Schema for the Rollout API
// +kubebuilder:object:root=true
// +kubebuilder:resource:categories={oam},shortName=rollout
// +kubebuilder:subresource:status
// +kubebuilder:storageversion
// +kubebuilder:printcolumn:name="TARGET",type=string,JSONPath=`.status.rolloutTargetSize`
// +kubebuilder:printcolumn:name="UPGRADED",type=string,JSONPath=`.status.upgradedReplicas`
// +kubebuilder:printcolumn:name="READY",type=string,JSONPath=`.status.upgradedReadyReplicas`
// +kubebuilder:printcolumn:name="BATCH-STATE",type=string,JSONPath=`.status.batchRollingState`
// +kubebuilder:printcolumn:name="ROLLING-STATE",type=string,JSONPath=`.status.rollingState`
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"
type Rollout struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RolloutSpec `json:"spec,omitempty"`
Status CompRolloutStatus `json:"status,omitempty"`
}
// RolloutSpec defines how to describe an update between different component revisions
type RolloutSpec struct {
// TargetRevisionName contains the name of the componentRevisionName that we need to upgrade to.
TargetRevisionName string `json:"targetRevisionName"`
// SourceRevisionName contains the name of the componentRevisionName that we need to upgrade from.
// it can be empty only when the application is deployed for the first time
SourceRevisionName string `json:"sourceRevisionName,omitempty"`
// ComponentName specifies the component name
ComponentName string `json:"componentName"`
// RolloutPlan gives the details of how to roll out the resources
RolloutPlan RolloutPlan `json:"rolloutPlan"`
}
// CompRolloutStatus defines the observed state of rollout
type CompRolloutStatus struct {
RolloutStatus `json:",inline"`
// LastUpgradedTargetRevision contains the name of the componentRevisionName that we upgraded to
// We will restart the rollout if this is not the same as the spec
LastUpgradedTargetRevision string `json:"lastTargetRevision"`
// LastSourceRevision contains the name of the componentRevisionName that we need to upgrade from.
// We will restart the rollout if this is not the same as the spec
LastSourceRevision string `json:"LastSourceRevision,omitempty"`
}
// RolloutList contains a list of Rollout
// +kubebuilder:object:root=true
type RolloutList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Rollout `json:"items"`
}
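A hedged sketch of a complete Rollout object built in Go; the metadata, component name and revision names are invented for illustration, and the import path is assumed from the repository layout.
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"

	standardv1alpha1 "github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
)

func two() *int32 { i := int32(2); return &i }

// exampleRollout upgrades the hypothetical "frontend" component from revision v1 to v2 in two batches.
var exampleRollout = standardv1alpha1.Rollout{
	TypeMeta:   metav1.TypeMeta{APIVersion: "standard.oam.dev/v1alpha1", Kind: "Rollout"},
	ObjectMeta: metav1.ObjectMeta{Name: "frontend-rollout", Namespace: "default"},
	Spec: standardv1alpha1.RolloutSpec{
		ComponentName:      "frontend",
		SourceRevisionName: "frontend-v1",
		TargetRevisionName: "frontend-v2",
		RolloutPlan: standardv1alpha1.RolloutPlan{
			NumBatches: two(),
			RolloutBatches: []standardv1alpha1.RolloutBatch{
				{Replicas: intstr.FromInt(1)},
				{Replicas: intstr.FromInt(1)},
			},
		},
	},
}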

View File

@@ -0,0 +1,334 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1alpha1
import (
v1 "k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CanaryMetric) DeepCopyInto(out *CanaryMetric) {
*out = *in
if in.MetricsRange != nil {
in, out := &in.MetricsRange, &out.MetricsRange
*out = new(MetricsExpectedRange)
(*in).DeepCopyInto(*out)
}
if in.TemplateRef != nil {
in, out := &in.TemplateRef, &out.TemplateRef
*out = new(v1.ObjectReference)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CanaryMetric.
func (in *CanaryMetric) DeepCopy() *CanaryMetric {
if in == nil {
return nil
}
out := new(CanaryMetric)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CompRolloutStatus) DeepCopyInto(out *CompRolloutStatus) {
*out = *in
in.RolloutStatus.DeepCopyInto(&out.RolloutStatus)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CompRolloutStatus.
func (in *CompRolloutStatus) DeepCopy() *CompRolloutStatus {
if in == nil {
return nil
}
out := new(CompRolloutStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *MetricsExpectedRange) DeepCopyInto(out *MetricsExpectedRange) {
*out = *in
if in.Min != nil {
in, out := &in.Min, &out.Min
*out = new(intstr.IntOrString)
**out = **in
}
if in.Max != nil {
in, out := &in.Max, &out.Max
*out = new(intstr.IntOrString)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MetricsExpectedRange.
func (in *MetricsExpectedRange) DeepCopy() *MetricsExpectedRange {
if in == nil {
return nil
}
out := new(MetricsExpectedRange)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Rollout) DeepCopyInto(out *Rollout) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Rollout.
func (in *Rollout) DeepCopy() *Rollout {
if in == nil {
return nil
}
out := new(Rollout)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Rollout) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutBatch) DeepCopyInto(out *RolloutBatch) {
*out = *in
out.Replicas = in.Replicas
if in.PodList != nil {
in, out := &in.PodList, &out.PodList
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.MaxUnavailable != nil {
in, out := &in.MaxUnavailable, &out.MaxUnavailable
*out = new(intstr.IntOrString)
**out = **in
}
if in.InstanceInterval != nil {
in, out := &in.InstanceInterval, &out.InstanceInterval
*out = new(int32)
**out = **in
}
if in.BatchRolloutWebhooks != nil {
in, out := &in.BatchRolloutWebhooks, &out.BatchRolloutWebhooks
*out = make([]RolloutWebhook, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.CanaryMetric != nil {
in, out := &in.CanaryMetric, &out.CanaryMetric
*out = make([]CanaryMetric, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutBatch.
func (in *RolloutBatch) DeepCopy() *RolloutBatch {
if in == nil {
return nil
}
out := new(RolloutBatch)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutList) DeepCopyInto(out *RolloutList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Rollout, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutList.
func (in *RolloutList) DeepCopy() *RolloutList {
if in == nil {
return nil
}
out := new(RolloutList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *RolloutList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutPlan) DeepCopyInto(out *RolloutPlan) {
*out = *in
if in.TargetSize != nil {
in, out := &in.TargetSize, &out.TargetSize
*out = new(int32)
**out = **in
}
if in.NumBatches != nil {
in, out := &in.NumBatches, &out.NumBatches
*out = new(int32)
**out = **in
}
if in.RolloutBatches != nil {
in, out := &in.RolloutBatches, &out.RolloutBatches
*out = make([]RolloutBatch, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.BatchPartition != nil {
in, out := &in.BatchPartition, &out.BatchPartition
*out = new(int32)
**out = **in
}
if in.RolloutWebhooks != nil {
in, out := &in.RolloutWebhooks, &out.RolloutWebhooks
*out = make([]RolloutWebhook, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.CanaryMetric != nil {
in, out := &in.CanaryMetric, &out.CanaryMetric
*out = make([]CanaryMetric, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutPlan.
func (in *RolloutPlan) DeepCopy() *RolloutPlan {
if in == nil {
return nil
}
out := new(RolloutPlan)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutSpec) DeepCopyInto(out *RolloutSpec) {
*out = *in
in.RolloutPlan.DeepCopyInto(&out.RolloutPlan)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutSpec.
func (in *RolloutSpec) DeepCopy() *RolloutSpec {
if in == nil {
return nil
}
out := new(RolloutSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutStatus) DeepCopyInto(out *RolloutStatus) {
*out = *in
in.ConditionedStatus.DeepCopyInto(&out.ConditionedStatus)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutStatus.
func (in *RolloutStatus) DeepCopy() *RolloutStatus {
if in == nil {
return nil
}
out := new(RolloutStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutWebhook) DeepCopyInto(out *RolloutWebhook) {
*out = *in
if in.ExpectedStatus != nil {
in, out := &in.ExpectedStatus, &out.ExpectedStatus
*out = make([]int, len(*in))
copy(*out, *in)
}
if in.Metadata != nil {
in, out := &in.Metadata, &out.Metadata
*out = new(map[string]string)
if **in != nil {
in, out := *in, *out
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutWebhook.
func (in *RolloutWebhook) DeepCopy() *RolloutWebhook {
if in == nil {
return nil
}
out := new(RolloutWebhook)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutWebhookPayload) DeepCopyInto(out *RolloutWebhookPayload) {
*out = *in
if in.Metadata != nil {
in, out := &in.Metadata, &out.Metadata
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutWebhookPayload.
func (in *RolloutWebhookPayload) DeepCopy() *RolloutWebhookPayload {
if in == nil {
return nil
}
out := new(RolloutWebhookPayload)
in.DeepCopyInto(out)
return out
}

View File

@@ -17,7 +17,13 @@ limitations under the License.
package types
import (
"encoding/json"
"cuelang.org/go/cue"
"github.com/spf13/pflag"
"k8s.io/apimachinery/pkg/runtime"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)
// Source records the source of a Capability
@@ -32,6 +38,22 @@ type CRDInfo struct {
Kind string `json:"kind"`
}
// Chart defines all necessary information to install a whole chart
type Chart struct {
Repo string `json:"repo"`
URL string `json:"url"`
Name string `json:"name"`
Namespace string `json:"namespace,omitempty"`
Version string `json:"version"`
Values map[string]interface{} `json:"values"`
}
// Installation defines the installation method for this Capability; currently only Helm is supported
type Installation struct {
Helm Chart `json:"helm"`
// TODO(wonderflow) add raw yaml file support for install capability
}
// CapType defines the type of capability
type CapType string
@@ -42,6 +64,8 @@ const (
TypeWorkload CapType = "workload"
// TypeTrait represents OAM Trait
TypeTrait CapType = "trait"
// TypeScope represents an OAM Scope
TypeScope CapType = "scope"
// TypeWorkflowStep represents an OAM WorkflowStep
TypeWorkflowStep CapType = "workflowstep"
// TypePolicy represent OAM Policy
@@ -67,6 +91,10 @@ type CapabilityCategory string
const (
TerraformCategory CapabilityCategory = "terraform"
HelmCategory CapabilityCategory = "helm"
KubeCategory CapabilityCategory = "kube"
CUECategory CapabilityCategory = "cue"
)
@@ -83,6 +111,49 @@ type Parameter struct {
JSONType string `json:"jsonType,omitempty"`
}
// SetFlagBy sets a CLI flag from a Parameter
func SetFlagBy(flags *pflag.FlagSet, v Parameter) {
name := v.Name
if v.Alias != "" {
name = v.Alias
}
// nolint:exhaustive
switch v.Type {
case cue.IntKind:
var vv int64
switch val := v.Default.(type) {
case int64:
vv = val
case json.Number:
vv, _ = val.Int64()
case int:
vv = int64(val)
case float64:
vv = int64(val)
}
flags.Int64P(name, v.Short, vv, v.Usage)
case cue.StringKind:
flags.StringP(name, v.Short, v.Default.(string), v.Usage)
case cue.BoolKind:
flags.BoolP(name, v.Short, v.Default.(bool), v.Usage)
case cue.NumberKind, cue.FloatKind:
var vv float64
switch val := v.Default.(type) {
case int64:
vv = float64(val)
case json.Number:
vv, _ = val.Float64()
case int:
vv = float64(val)
case float64:
vv = val
}
flags.Float64P(name, v.Short, vv, v.Usage)
default:
// other types not supported yet
}
}
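A hedged example of wiring SetFlagBy into a pflag.FlagSet; the flag names, default values and the import path github.com/oam-dev/kubevela/apis/types are assumptions chosen to exercise the int and bool branches above.
package main

import (
	"fmt"

	"cuelang.org/go/cue"
	"github.com/spf13/pflag"

	"github.com/oam-dev/kubevela/apis/types"
)

func main() {
	flags := pflag.NewFlagSet("example", pflag.ContinueOnError)

	// register flags from hypothetical capability parameters
	types.SetFlagBy(flags, types.Parameter{Name: "replicas", Type: cue.IntKind, Default: int64(1), Usage: "number of replicas"})
	types.SetFlagBy(flags, types.Parameter{Name: "expose", Type: cue.BoolKind, Default: false, Usage: "expose the service"})

	_ = flags.Parse([]string{"--replicas=3", "--expose"})
	replicas, _ := flags.GetInt64("replicas")
	expose, _ := flags.GetBool("expose")
	fmt.Println(replicas, expose)
}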
// Capability defines the content of a capability
type Capability struct {
Name string `json:"name"`
@@ -91,6 +162,7 @@ type Capability struct {
CueTemplateURI string `json:"templateURI,omitempty"`
Parameters []Parameter `json:"parameters,omitempty"`
CrdName string `json:"crdName,omitempty"`
Center string `json:"center,omitempty"`
Status string `json:"status,omitempty"`
Description string `json:"description,omitempty"`
Example string `json:"example,omitempty"`
@@ -104,10 +176,16 @@ type Capability struct {
Namespace string `json:"namespace,omitempty"`
// Plugin Source
Source *Source `json:"source,omitempty"`
Source *Source `json:"source,omitempty"`
Install *Installation `json:"install,omitempty"`
CrdInfo *CRDInfo `json:"crdInfo,omitempty"`
// Terraform
TerraformConfiguration string `json:"terraformConfiguration,omitempty"`
ConfigurationType string `json:"configurationType,omitempty"`
Path string `json:"path,omitempty"`
// KubeTemplate
KubeTemplate runtime.RawExtension `json:"kubetemplate,omitempty"`
KubeParameter []common.KubeParameter `json:"kubeparameter,omitempty"`
}
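To show how the new Installation and Chart fields hang together, here is a hedged literal; the chart coordinates and values are placeholders, and the import path is assumed.
package main

import "github.com/oam-dev/kubevela/apis/types"

// exampleInstall declares that a hypothetical capability is installed via a Helm chart.
var exampleInstall = &types.Installation{
	Helm: types.Chart{
		Repo:      "my-charts",                  // placeholder repo name
		URL:       "https://charts.example.com", // placeholder repo URL
		Name:      "my-workload-chart",          // placeholder chart name
		Namespace: "vela-system",
		Version:   "0.1.0",
		Values:    map[string]interface{}{"replicas": 2},
	},
}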

View File

@@ -17,17 +17,25 @@ limitations under the License.
package types
import (
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
// ComponentManifest contains resources rendered from an application component.
type ComponentManifest struct {
Name string
Namespace string
RevisionName string
RevisionHash string
// ComponentOutput contains K8s resource generated from "output" block of ComponentDefinition
ComponentOutput *unstructured.Unstructured
// ComponentOutputsAndTraits contains both resources generated from "outputs" block of ComponentDefinition and resources generated from TraitDefinition
ComponentOutputsAndTraits []*unstructured.Unstructured
Name string
Namespace string
RevisionName string
RevisionHash string
ExternalRevision string
// StandardWorkload contains K8s resource generated from "output" block of ComponentDefinition
StandardWorkload *unstructured.Unstructured
// Traits contains both resources generated from "outputs" block of ComponentDefinition and resources generated from TraitDefinition
Traits []*unstructured.Unstructured
Scopes []*corev1.ObjectReference
// PackagedWorkloadResources contain all the workload related resources. It could be a Helm
// Release, Git Repo or anything that can package and run a workload.
PackagedWorkloadResources []*unstructured.Unstructured
PackagedTraitResources map[string][]*unstructured.Unstructured
}
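A hedged sketch of filling the renamed fields (StandardWorkload, Traits) with unstructured objects; the Deployment and Service contents are illustrative, and the import path is assumed to be the same types package as above.
package main

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/oam-dev/kubevela/apis/types"
)

func exampleManifest() *types.ComponentManifest {
	// the main workload rendered from the "output" block
	workload := &unstructured.Unstructured{}
	workload.SetAPIVersion("apps/v1")
	workload.SetKind("Deployment")
	workload.SetName("frontend")

	// an auxiliary resource rendered from "outputs" or a trait
	svc := &unstructured.Unstructured{}
	svc.SetAPIVersion("v1")
	svc.SetKind("Service")
	svc.SetName("frontend")

	return &types.ComponentManifest{
		Name:             "frontend",
		Namespace:        "default",
		RevisionName:     "frontend-v1",
		StandardWorkload: workload,
		Traits:           []*unstructured.Unstructured{svc},
	}
}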

Some files were not shown because too many files have changed in this diff.