Compare commits

...

65 Commits

Author SHA1 Message Date
Jianbo Sun
e3e6d57b2a Fix: align release image and charts with the master branch (#4526)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-08-02 16:23:41 +08:00
github-actions[bot]
813a7534f2 Fix: fix logs to record the right publish version (#4475)
Signed-off-by: yangsoon <songyang.song@alibaba-inc.com>
(cherry picked from commit 4846104c8f)

Co-authored-by: yangsoon <songyang.song@alibaba-inc.com>
2022-07-27 01:13:33 +08:00
github-actions[bot]
f8f75a3b64 Fix: The apply failure error is ignored when the workflow is executed (#4460)
Signed-off-by: yangsoon <songyang.song@alibaba-inc.com>
(cherry picked from commit b1d8e6c88b)

Co-authored-by: yangsoon <songyang.song@alibaba-inc.com>
2022-07-25 22:17:53 +08:00
github-actions[bot]
8bad1fc055 Fix: fix the goroutine leak in http request (#4303)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 559ef83abd)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-07-01 17:55:41 +08:00
barnettZQG
4569850740 Chore: change the acr registry address (#4214) (#4217)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
2022-06-22 14:47:13 +08:00
github-actions[bot]
54477eabf5 Fix: env trait error when existing env exists (#4039)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 2d5a16d45f)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-05-27 21:12:22 +08:00
Zheng Xi Zhou
43bbc97319 Fix: failed to create Provider by CLI (#3964)
Fix #3955

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-05-24 16:42:22 +08:00
github-actions[bot]
cbed2b5cb3 Fix: remove last-applied-config annotation for configmap and secret (#3942)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit 4789fa8833)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-05-20 17:09:50 +08:00
github-actions[bot]
8be75545bc Fix: modify the template definition to solve the trait cli error Signed-off-by: Shijie Zhong <zhongsjie@cmbchina.com> (#3880)
Signed-off-by: ZhongsJie <zhongsjie@gmail.com>
(cherry picked from commit b3ef120f95)

Co-authored-by: ZhongsJie <zhongsjie@gmail.com>
2022-05-13 10:54:03 +08:00
github-actions[bot]
d748096f7c Fix: the endpoints is repeated and can not query the ingress with v1 version (#3864)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 31c28b6d00)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-05-11 14:34:28 +08:00
github-actions[bot]
d4ab93c232 Fix: ignore no kind match error in gc (#3863)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 0021f8823f)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-05-11 12:43:50 +08:00
Zheng Xi Zhou
37a656a292 Fix: update Terraform Configuration CRDS test file (#3857)
Updated Terraform Configuration CRDS to v1beta2 to fix the UT
issue of https://github.com/kubevela/kubevela/pull/3851

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-05-11 10:09:47 +08:00
github-actions[bot]
3c94ac1bc1 [Backport release-1.3] Fix(makefile): update kustomize version to be available for darwin-arm64 (#3858)
* Fix(makefile): update kustomize version to be available for darwin-arm64

Signed-off-by: Carmendelope <carmen@napptive.com>
(cherry picked from commit ad81fffe4f)

* make reviewable changes

Signed-off-by: Carmendelope <carmen@napptive.com>
(cherry picked from commit 84dd93425c)

Co-authored-by: Carmendelope <carmen@napptive.com>
2022-05-11 10:08:04 +08:00
StevenLeiZhang
8499dffcd7 Fix: The new addon can not shown in the Addons page (#3851)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
2022-05-10 23:10:04 +08:00
StevenLeiZhang
30a6e85023 Fix: sensitive field of addon registry is exposed (#3852)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
2022-05-10 20:25:10 +08:00
github-actions[bot]
11530e2720 Fix: add parse comments in lookupScript to make patch work (#3844)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 1758dc319d)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-05-10 13:38:30 +08:00
github-actions[bot]
f53466bc7d [Backport release-1.3] Fix: resolve locally installed addons not being displayed (#3842)
* Fix: resolve locally installed addons not being displayed

Addressed an issue where locally installed addons may not be displayed
if one with the same name is in the registry

Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit 799a099890)

* Style: revert incorrect auto-formatting

Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit 1430aac438)

* Refactor: change original variable name to avoid confusions

Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit 0c1e347106)

* Test: add tests for outputs from `vela addon list`
when an addon with the same as registry one is locally installed

Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit e6b3bb024c)

* Refactor: use more concise method to check length

Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit afbb062e7c)

* Test: add one more test condition for dual addons
i.e. local and registry

Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit ac0718a662)

* Refactor: simplify testing logic by removing unneeded looping

Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit 75185f0f0d)

* Style: add missing license header

Signed-off-by: Charlie Chiang <charlie_c_0129@outlook.com>
(cherry picked from commit e1d8e99288)

Co-authored-by: Charlie Chiang <charlie_c_0129@outlook.com>
2022-05-10 13:37:37 +08:00
github-actions[bot]
e8ea8ec48f [Backport release-1.3] Fix: don't override user definied region (#3833)
* Fix: don't override user definied `region`

Fix #https://github.com/oam-dev/kubevela/issues/3384

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 0b2f0e381f)

* fix check-diff

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit a9156212d0)

* fix CI

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 423ce6ece1)

* fix CI

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 5a827d2ef0)

* fix UT

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 4b71568547)

* revert some changes

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit cde8d3a957)

Co-authored-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-05-09 15:16:21 +08:00
github-actions[bot]
6c4c9bdf7e [Backport release-1.3] Fix: update latest version Fix: 1.2 upgrade 1.3 workflowstep XXX not found (#3819)
* Fix: 1.2 upgrade 1.3 workflowstep XXX not found

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

handle publishversion case

Signed-off-by: cezhang <c1zhang.dev@gmail.com>
(cherry picked from commit 9cea9b0914)

* add test

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

add test

Signed-off-by: cezhang <c1zhang.dev@gmail.com>

lint code

Signed-off-by: cezhang <c1zhang.dev@gmail.com>
(cherry picked from commit 10b2f691c1)

Co-authored-by: cezhang <c1zhang.dev@gmail.com>
2022-05-07 13:25:52 +08:00
github-actions[bot]
e7930a2da0 Fix: update latest version (#3796)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 202ccf7b68)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-04-29 17:50:43 +08:00
github-actions[bot]
45e1de19dc Fix: log message wraps wrong arguments (#3793)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
(cherry picked from commit 79362f0648)

Co-authored-by: StevenLeiZhang <zhangleiic@163.com>
2022-04-29 13:34:15 +08:00
github-actions[bot]
d910bb7928 [Backport release-1.3] Chore: sync the cli binaries to OSS (#3785)
* Feat: show the parsing capability error message

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 99b57236eb)

* Chore: sync the cli binaries to OSS

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 81e01e7f56)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-04-29 10:14:43 +08:00
github-actions[bot]
a580c9a44c Fix: env trait compatible with valueFrom (#3784)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit d0506db414)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-04-28 18:19:01 +08:00
github-actions[bot]
8f5eaefd89 Fix: kubectl check err (#3779)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 3bbcbe0e6f)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-04-28 16:33:28 +08:00
github-actions[bot]
7c3a35ae87 [Backport release-1.3] Fix: addon cli parse any type (#3777)
* fix addon parse any type

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit fdda3a70e5)

* test int

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit ca47004529)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-28 16:08:48 +08:00
github-actions[bot]
ea0003f7cb Fix: fix revision in webservice (#3766)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 1ab13437b4)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-04-27 14:19:43 +08:00
Zheng Xi Zhou
3728857c82 Fix: use Terraform provider name as application in CLI (#3742) (#3756)
* Fix: use Terraform provider name as application in CLI

In CLI, use Terraform provider name as application name when
create a Provider. Also display there providers in VelaUX.
1). manually created a Terraform Provider object, like https://github.com/oam-dev/terraform-controller/blob/master/getting-started.md#aws
2). by enabling a Terraform provider addon in version older than v1.3.0
3). by create a Terraform provider via `vela provider add`
4). by VelaUX

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-04-26 22:17:17 +08:00
github-actions[bot]
8e6c49cb37 [Backport release-1.3] Fix: fix the bug of vela cli enable addon by localDir on windows os (#3762)
* fix windows bug

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix several issue

fix bug

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix unit-test

(cherry picked from commit 956dff3261)

* add more tests

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 497a6ebcae)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-26 20:53:34 +08:00
github-actions[bot]
4b31274bda [Backport release-1.3] Fix: velaux addon hint after enable (#3760)
* Fix: velaux addon hint after enable

Signed-off-by: qiaozp <chivalry.pp@gmail.com>
(cherry picked from commit 5adfd6210f)

* check if upgrade

Signed-off-by: qiaozp <chivalry.pp@gmail.com>
(cherry picked from commit 5a7467a494)

Co-authored-by: qiaozp <chivalry.pp@gmail.com>
2022-04-26 16:47:36 +08:00
github-actions[bot]
429e62d11b [Backport release-1.3] Feat: check whether a project matched a config's project (#3757)
* Feat: check whether a project matched a config's project

If the config project is not nil, it's matched whether the project
matched the target project.
If the config project is nil, the target project matched the config.

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit dca9646693)

Co-authored-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-04-26 14:59:41 +08:00
github-actions[bot]
841a18189a Fix: public image registry config could not be created (#3748)
Fix #3663

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 4fc599b8c9)

Co-authored-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-04-25 13:59:00 +08:00
github-actions[bot]
59bd066c05 use unical project filter func to list secret (#3746)
Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix pointer

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit b441878545)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-25 10:06:37 +08:00
github-actions[bot]
efa9dedb85 Fix: vela-cli does not print cluster name, if application installed in default cluster (#3740)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
(cherry picked from commit 3819981dd4)

Co-authored-by: StevenLeiZhang <zhangleiic@163.com>
2022-04-24 09:17:39 +08:00
Xiangbo Ma
fdffde4dfd Fix: cherry-pick #3724 to delete apprev annotation. Signed-off-by: Xiangbo Ma <maxiangboo@cmbchina.com> (#3739)
Signed-off-by: fourierr <maxiangboo@qq.com>
2022-04-24 09:17:15 +08:00
wyike
d08aa7d12c fix several issues (#3729) (#3735)
Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-22 17:29:01 +08:00
wyike
f9755a405f Fix: change systemInfo some fields (cp #3715) (#3723)
* Fix: change systemInfo some  fields (#3715)

* add some field an calculate workflow step

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

* fix the calculate job cannot start issue

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

* fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

* add suit test framework

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

* modify the go mod file

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix worry file name

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-22 16:42:54 +08:00
github-actions[bot]
d751d95bac Feat: change the webservice and config-image-registry definitions (#3733)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 300f0c5ace)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-04-22 16:34:28 +08:00
github-actions[bot]
e86eec07e0 specify staticcheck version (#3728)
Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix the workflow

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix

try to fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix make file

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix makefile

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 7b62664332)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-22 14:26:37 +08:00
github-actions[bot]
4abb5c6ced [Backport release-1.3] Fix: embed.FS filepath that follow the unix style file path when running on windows (#3720)
* fix: "builtin-apply-component.cue: file does not exist"

Signed-off-by: lei.chu <1062186165@qq.com>
(cherry picked from commit fba60a1af1)

* fix: "builtin-apply-component.cue: file does not exist"

Signed-off-by: lei.chu <1062186165@qq.com>
(cherry picked from commit 9e74023951)

Co-authored-by: lei.chu <1062186165@qq.com>
2022-04-21 14:32:30 +08:00
github-actions[bot]
32d9a9ec94 Fix: vela-core does not report error, when component depends on invalid component (#3712)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
(cherry picked from commit 01781bdc02)

Co-authored-by: StevenLeiZhang <zhangleiic@163.com>
2022-04-20 13:39:53 +08:00
github-actions[bot]
f6f9ef4ded Feat: support disable legacy gc upgrade operation (#3697)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 31ab3d859c)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-04-19 09:53:10 +08:00
github-actions[bot]
166c93d548 Fix: set provider name as the config name (#3695)
- For VelaUX, hidden a provider name (users don't need to manual set it). Used
the application/component name (config name) to be the provider name.
- Store description of a config to the annotation of the config application

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit e3feeeec24)

Co-authored-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-04-18 16:48:37 +08:00
github-actions[bot]
8f767068bf Fix: rt resource key compare mismatch local cluster (#3685)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit fa12bc1950)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-04-15 16:37:00 +08:00
github-actions[bot]
8d9e2a71e7 [Backport release-1.3] Fix: can not query the instance list for the app with apply once policy (#3684)
* Fix: can not query the instance list for the app with apply once policy

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit fbcba8da98)

* Fix: change the test case about ListResourcesInApp

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 91c45132b0)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-04-15 15:04:04 +08:00
wyike
58b3bca537 cherrypick 3665 and 3605 to release 1.3 (#3668)
Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-15 12:11:05 +08:00
github-actions[bot]
825f1aaa22 [Backport release-1.3] Fix: fix token invalid after the server restarted (#3662)
* Fix: fix token invalid after the server restarted

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 13c6f0c5a3)

* fix lint

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 96896d4956)

* Pending test temporary

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 33160fd199)

* Pending test temporary

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit c858b81d86)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-04-14 22:26:57 +08:00
github-actions[bot]
82075427e6 Fix: vela status tree show cluster alias & raw format (#3661)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 4ff0f53c04)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-04-14 19:40:14 +08:00
github-actions[bot]
f89cf673c0 Fix: add label from inner system in CR can prevent sync (#3660)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit dceb642ad6)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-04-14 19:33:35 +08:00
github-actions[bot]
ce53f6922f [Backport release-1.3] Fix: duplicately list pods in velaQL (#3656)
* Fix: duplicately list pods in velaQL

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 5917141b12)

* Fix: the create time of synced app is empty

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit d404c4b507)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-04-14 17:46:08 +08:00
github-actions[bot]
b6f70d9a3c Fix: failed to deploy application when no there is no avaiable (#3654)
When there are configs, but not in the project where the appliation
is about to deploy, the sync application will hit an issue. It will
lead to block the deploy of an application.

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 6cac625d53)

Co-authored-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-04-14 17:26:31 +08:00
Somefive
64d063ccfe [Backport release-1.3] vela status tree & controller flags fix (#3649)
* Feat: vela status --tree (#3609)

* Feat: vela status --tree

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Feat: support show not-deployed clusters

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Fix: add tests

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Fix: add multicluster e2e coverage

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Chore: minor fix

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Fix: cli default switch on feature flags (#3625)

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Feat: support alias in cluster (#3630)

* Feat: support alias in cluster

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Fix: add test for cluster alias

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Feat: rework vela up to support specified revision (#3634)

* Feat: rework vela up to support specified revision

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Fix: add legacy compatibility

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Feat: fix test

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Fix: enhance vela status tree print (#3639)

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
2022-04-14 16:36:35 +08:00
github-actions[bot]
a98278fb7a [Backport release-1.3] Fix: refine the config sync logic (#3647)
* Fix: refine config management

- Refine the config sync logics

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 97cc021d7a)

* address comments

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 3e0f9c07a8)

Co-authored-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-04-14 13:05:01 +08:00
wyike
0553d603e6 Chore: cherry-pick 3641 to release 1.3 (#3646)
* Fix: try to fix CVE (#3641)


* use santize

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-14 11:25:02 +08:00
github-actions[bot]
a36e99308f [Backport release-1.3] Fix: clear info when addon version cannot meet require (#3645)
* first

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

version miss match erro for addon

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

add log

(cherry picked from commit 14fda35867)

* add test for this

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

small fix

(cherry picked from commit 1e218b5732)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-14 10:03:42 +08:00
github-actions[bot]
947bac2d35 Fix: verify password valid (#3643)
Signed-off-by: Zhiyu Wang <zhiyuwang.newbis@gmail.com>
(cherry picked from commit b623976f1e)

Co-authored-by: Zhiyu Wang <zhiyuwang.newbis@gmail.com>
2022-04-13 19:40:05 +08:00
github-actions[bot]
7644cc59cb Feat: refine config creation and provide config list (#3640)
- Make the api of creation a config to be async
- In listing config page, show the status of a config

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 5314e5bf9e)

Co-authored-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-04-13 13:48:20 +08:00
github-actions[bot]
89a441b8ce [Backport release-1.3] Fix: fix dex login with existed email (#3633)
* Fix: fix dex login with existed email

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 15400df15e)

* add dex connector check

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit a7062e08e1)

* unset users' alias

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 1a818f4b8b)

* fix ut

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 1b3768ca73)

* fix ut

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit e54fc776b0)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-04-12 16:30:10 +08:00
github-actions[bot]
780572c68f Fix: flags for controller (#3632)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit e5a5916973)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-04-12 16:13:50 +08:00
github-actions[bot]
a13cab65b2 [Backport release-1.3] Feat: support basic auth private helm repo (#3631)
* support auth

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 54c05afb1a)

* add test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix check diff

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix test

fix

add comments

fix test

(cherry picked from commit a8961ec8cc)

* add tests

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix

add more test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 4f45a6af8e)

* add more test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit dee791aa51)

* extract set auth info as a global func

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit f8fb0137e3)

* return bcode

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 057a67d8b9)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-04-12 16:10:48 +08:00
Min Kim
e26104adcc bump cluster-gateway to 1.3.2 (#3620)
Signed-off-by: yue9944882 <291271447@qq.com>
2022-04-11 19:49:20 +08:00
github-actions[bot]
26ac584655 [Backport release-1.3] Feat: add api of listing configs for project when creating a target (#3626)
* Feat: add api of listing configs for project

In a project, list configs by its type

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 87aae26f3f)

* address comments

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 830cc79dcf)

* fix ci

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit bf10455f6b)

* add query parameter definition

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 73ff31382b)

* Update pkg/apiserver/rest/webservice/project.go

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit f0b346a1cb)

Co-authored-by: Zheng Xi Zhou <zzxwill@gmail.com>
2022-04-11 19:06:31 +08:00
github-actions[bot]
482976990d Fix: reuse chart values in vela install (#3617)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit 5bf0dd045f)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-04-11 09:52:03 +08:00
github-actions[bot]
bc4812a12e [Backport release-1.3] Fix: vela logs without specified resource name (#3608)
* Fix: vela logs without specified resource name

Signed-off-by: qiaozp <chivalry.pp@gmail.com>
(cherry picked from commit 43df60cb87)

* add unittest

Signed-off-by: qiaozp <chivalry.pp@gmail.com>
(cherry picked from commit daacb88601)

* reviewable

Signed-off-by: qiaozp <chivalry.pp@gmail.com>
(cherry picked from commit 195585b69f)

Co-authored-by: qiaozp <chivalry.pp@gmail.com>
2022-04-08 17:57:15 +08:00
github-actions[bot]
58c2208e2a add sorting for properties, outputs, writeSecretRefParameters in vela def doc-gen (#3604)
Signed-off-by: Nicola115 <2225992901@qq.com>
(cherry picked from commit f1f5fa563d)

Co-authored-by: Nicola115 <2225992901@qq.com>
2022-04-08 15:32:09 +08:00
github-actions[bot]
f83d88cfb0 [Backport release-1.3] Fix: add terraform aws provider without AWS_SESSION_TOKEN (#3594)
* Fix: add terraform aws provider without AWS_SESSION_TOKEN

Fix #3589 and refine prompts for cli

Signed-off-by: Zheng Xi Zhou <zzxwill@gmail.com>
(cherry picked from commit 904a72857b)
2022-04-07 17:03:35 +08:00
173 changed files with 9438 additions and 1428 deletions

.github/workflows/chart.yaml (new file, 89 lines)

@@ -0,0 +1,89 @@
name: Publish Chart
on:
  push:
    tags:
      - "v*"
  workflow_dispatch: { }
env:
  BUCKET: ${{ secrets.OSS_BUCKET }}
  ENDPOINT: ${{ secrets.OSS_ENDPOINT }}
  ACCESS_KEY: ${{ secrets.OSS_ACCESS_KEY }}
  ACCESS_KEY_SECRET: ${{ secrets.OSS_ACCESS_KEY_SECRET }}
  ARTIFACT_HUB_REPOSITORY_ID: ${{ secrets.ARTIFACT_HUB_REPOSITORY_ID }}
jobs:
  publish-charts:
    env:
      HELM_CHARTS_DIR: charts
      HELM_CHART: charts/vela-core
      MINIMAL_HELM_CHART: charts/vela-minimal
      LEGACY_HELM_CHART: legacy/charts/vela-core-legacy
      VELA_ROLLOUT_HELM_CHART: runtime/rollout/charts
      LOCAL_OSS_DIRECTORY: .oss/
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@master
      - name: Get git revision
        id: vars
        shell: bash
        run: |
          echo "::set-output name=git_revision::$(git rev-parse --short HEAD)"
      - name: Install Helm
        uses: azure/setup-helm@v1
        with:
          version: v3.4.0
      - name: Setup node
        uses: actions/setup-node@v2
        with:
          node-version: '14'
      - name: Generate helm doc
        run: |
          make helm-doc-gen
      - name: Prepare legacy chart
        run: |
          rsync -r $LEGACY_HELM_CHART $HELM_CHARTS_DIR
          rsync -r $HELM_CHART/* $LEGACY_HELM_CHART --exclude=Chart.yaml --exclude=crds
      - name: Prepare vela chart
        run: |
          rsync -r $VELA_ROLLOUT_HELM_CHART $HELM_CHARTS_DIR
      - name: Get the version
        id: get_version
        run: |
          VERSION=${GITHUB_REF#refs/tags/}
          echo ::set-output name=VERSION::${VERSION}
      - name: Tag helm chart image
        run: |
          image_tag=${{ steps.get_version.outputs.VERSION }}
          chart_version=${{ steps.get_version.outputs.VERSION }}
          sed -i "s/latest/${image_tag}/g" $HELM_CHART/values.yaml
          sed -i "s/latest/${image_tag}/g" $MINIMAL_HELM_CHART/values.yaml
          sed -i "s/latest/${image_tag}/g" $LEGACY_HELM_CHART/values.yaml
          sed -i "s/latest/${image_tag}/g" $VELA_ROLLOUT_HELM_CHART/values.yaml
          chart_smever=${chart_version#"v"}
          sed -i "s/0.1.0/$chart_smever/g" $HELM_CHART/Chart.yaml
          sed -i "s/0.1.0/$chart_smever/g" $MINIMAL_HELM_CHART/Chart.yaml
          sed -i "s/0.1.0/$chart_smever/g" $LEGACY_HELM_CHART/Chart.yaml
          sed -i "s/0.1.0/$chart_smever/g" $VELA_ROLLOUT_HELM_CHART/Chart.yaml
      - name: Install ossutil
        run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
      - name: Configure Alibaba Cloud OSSUTIL
        run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${ENDPOINT} -c .ossutilconfig
      - name: sync cloud to local
        run: ./ossutil --config-file .ossutilconfig sync oss://$BUCKET/core $LOCAL_OSS_DIRECTORY
      - name: add artifacthub stuff to the repo
        run: |
          rsync $HELM_CHART/README.md $LEGACY_HELM_CHART/README.md
          rsync $HELM_CHART/README.md $VELA_ROLLOUT_HELM_CHART/README.md
          sed -i "s/ARTIFACT_HUB_REPOSITORY_ID/$ARTIFACT_HUB_REPOSITORY_ID/g" hack/artifacthub/artifacthub-repo.yml
          rsync hack/artifacthub/artifacthub-repo.yml $LOCAL_OSS_DIRECTORY
      - name: Package helm charts
        run: |
          helm package $HELM_CHART --destination $LOCAL_OSS_DIRECTORY
          helm package $MINIMAL_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
          helm package $LEGACY_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
          helm package $VELA_ROLLOUT_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
          helm repo index --url https://$BUCKET.$ENDPOINT/core $LOCAL_OSS_DIRECTORY
      - name: sync local to cloud
        run: ./ossutil --config-file .ossutilconfig sync $LOCAL_OSS_DIRECTORY oss://$BUCKET/core -f -u
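The "Tag helm chart image" step above is the heart of the publishing flow: it stamps the release tag over the `latest` image tag in each chart's `values.yaml` and rewrites the `0.1.0` placeholder in each `Chart.yaml` with the semver (the tag minus its leading `v`). A minimal, self-contained sketch of that substitution; the `/tmp/chart-demo` paths and the `v1.3.6` tag are illustrative, not taken from the workflow:

```shell
set -e
# Build a throwaway chart layout with the same placeholders the workflow expects.
mkdir -p /tmp/chart-demo
printf 'image:\n  tag: latest\n' > /tmp/chart-demo/values.yaml
printf 'version: 0.1.0\n' > /tmp/chart-demo/Chart.yaml

image_tag="v1.3.6"                # in CI this comes from ${GITHUB_REF#refs/tags/}
chart_semver="${image_tag#v}"     # strip the leading "v" for the chart version

# Same in-place substitutions as the workflow's sed calls.
sed -i "s/latest/${image_tag}/g" /tmp/chart-demo/values.yaml
sed -i "s/0.1.0/${chart_semver}/g" /tmp/chart-demo/Chart.yaml

cat /tmp/chart-demo/values.yaml /tmp/chart-demo/Chart.yaml
```

Because the substitution is a plain global `sed`, it assumes `latest` and `0.1.0` appear only as the placeholder values, which holds for freshly templated charts.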


@@ -96,7 +96,7 @@ jobs:
         uses: codecov/codecov-action@v1
         with:
           token: ${{ secrets.CODECOV_TOKEN }}
-          files: /tmp/e2e-profile.out
+          files: /tmp/e2e-profile.out,/tmp/e2e_multicluster_test.out
           flags: e2e-multicluster-test
           name: codecov-umbrella


@@ -57,7 +57,7 @@ jobs:
           restore-keys: ${{ runner.os }}-pkg-
       - name: Install StaticCheck
-        run: GO111MODULE=off go get honnef.co/go/tools/cmd/staticcheck
+        run: GO111MODULE=on go get honnef.co/go/tools/cmd/staticcheck@v0.3.0
       - name: Static Check
         run: staticcheck ./...


@@ -8,14 +8,11 @@ on:
workflow_dispatch: {}
env:
BUCKET: ${{ secrets.OSS_BUCKET }}
ENDPOINT: ${{ secrets.OSS_ENDPOINT }}
ACCESS_KEY: ${{ secrets.OSS_ACCESS_KEY }}
ACCESS_KEY_SECRET: ${{ secrets.OSS_ACCESS_KEY_SECRET }}
ARTIFACT_HUB_REPOSITORY_ID: ${{ secrets.ARTIFACT_HUB_REPOSITORY_ID }}
jobs:
-  publish-images:
+  publish-core-images:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
@@ -47,20 +44,16 @@ jobs:
- name: Login Alibaba Cloud ACR
uses: docker/login-action@v1
with:
-        registry: kubevela-registry.cn-hangzhou.cr.aliyuncs.com
-        username: ${{ secrets.ACR_USERNAME }}@aliyun-inner.com
+        registry: ${{ secrets.ACR_DOMAIN }}
+        username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- uses: docker/setup-qemu-action@v1
- uses: docker/setup-buildx-action@v1
with:
driver-opts: image=moby/buildkit:master
-      - name: Build & Pushing vela-core for ACR
-        run: |
-          docker build --build-arg GOPROXY=https://proxy.golang.org --build-arg VERSION=${{ steps.get_version.outputs.VERSION }} --build-arg GITVERSION=git-${{ steps.vars.outputs.git_revision }} -t kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }} .
-          docker push kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
- uses: docker/build-push-action@v2
-        name: Build & Pushing vela-core for Dockerhub and GHCR
+        name: Build & Pushing vela-core for Dockerhub, GHCR and ACR
with:
context: .
file: Dockerfile
@@ -75,14 +68,51 @@ jobs:
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
-          ghcr.io/${{ github.repository }}/vela-core:${{ steps.get_version.outputs.VERSION }}
+          ghcr.io/${{ github.repository_owner }}/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
+          ${{ secrets.ACR_DOMAIN }}/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
-      - name: Build & Pushing vela-apiserver for ACR
-        run: |
-          docker build --build-arg GOPROXY=https://proxy.golang.org --build-arg VERSION=${{ steps.get_version.outputs.VERSION }} --build-arg GITVERSION=git-${{ steps.vars.outputs.git_revision }} -t kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }} -f Dockerfile.apiserver .
-          docker push kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
+  publish-addon-images:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@master
+      - name: Get the version
+        id: get_version
+        run: |
+          VERSION=${GITHUB_REF#refs/tags/}
+          if [[ ${GITHUB_REF} == "refs/heads/master" ]]; then
+            VERSION=latest
+          fi
+          echo ::set-output name=VERSION::${VERSION}
- name: Get git revision
id: vars
shell: bash
run: |
echo "::set-output name=git_revision::$(git rev-parse --short HEAD)"
- name: Login ghcr.io
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login docker.io
uses: docker/login-action@v1
with:
registry: docker.io
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login Alibaba Cloud ACR
uses: docker/login-action@v1
with:
registry: ${{ secrets.ACR_DOMAIN }}
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- uses: docker/setup-qemu-action@v1
- uses: docker/setup-buildx-action@v1
with:
driver-opts: image=moby/buildkit:master
- uses: docker/build-push-action@v2
name: Build & Pushing vela-apiserver for Dockerhub and GHCR
name: Build & Pushing vela-apiserver for Dockerhub, GHCR and ACR
with:
context: .
file: Dockerfile.apiserver
@@ -97,14 +127,10 @@ jobs:
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository }}/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
- name: Build & Pushing vela runtime rollout for ACR
run: |
docker build --build-arg GOPROXY=https://proxy.golang.org --build-arg VERSION=${{ steps.get_version.outputs.VERSION }} --build-arg GITVERSION=git-${{ steps.vars.outputs.git_revision }} -t kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }} .
docker push kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository_owner }}/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
${{ secrets.ACR_DOMAIN }}/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
- uses: docker/build-push-action@v2
name: Build & Pushing runtime rollout for Dockerhub and GHCR
name: Build & Pushing runtime rollout Dockerhub, GHCR and ACR
with:
context: .
file: runtime/rollout/Dockerfile
@@ -119,96 +145,8 @@ jobs:
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository }}/vela-rollout:${{ steps.get_version.outputs.VERSION }}
publish-charts:
env:
HELM_CHARTS_DIR: charts
HELM_CHART: charts/vela-core
MINIMAL_HELM_CHART: charts/vela-minimal
LEGACY_HELM_CHART: legacy/charts/vela-core-legacy
OAM_RUNTIME_HELM_CHART: charts/oam-runtime
VELA_ROLLOUT_HELM_CHART: runtime/rollout/charts
LOCAL_OSS_DIRECTORY: .oss/
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@master
- name: Get git revision
id: vars
shell: bash
run: |
echo "::set-output name=git_revision::$(git rev-parse --short HEAD)"
- name: Install Helm
uses: azure/setup-helm@v1
with:
version: v3.4.0
- name: Setup node
uses: actions/setup-node@v2
with:
node-version: '14'
- name: Generate helm doc
run: |
make helm-doc-gen
- name: Prepare legacy chart
run: |
rsync -r $LEGACY_HELM_CHART $HELM_CHARTS_DIR
rsync -r $HELM_CHART/* $LEGACY_HELM_CHART --exclude=Chart.yaml --exclude=crds
- name: Prepare vela chart
run: |
rsync -r $VELA_ROLLOUT_HELM_CHART $HELM_CHARTS_DIR
- uses: oprypin/find-latest-tag@v1
with:
repository: oam-dev/kubevela
releases-only: true
id: latest_tag
- name: Tag helm chart image
run: |
latest_repo_tag=${{ steps.latest_tag.outputs.tag }}
sub="."
major="$(cut -d"$sub" -f1 <<<"$latest_repo_tag")"
minor="$(cut -d"$sub" -f2 <<<"$latest_repo_tag")"
patch="0"
current_repo_tag="$major.$minor.$patch"
image_tag=${GITHUB_REF#refs/tags/}
chart_version=$latest_repo_tag
if [[ ${GITHUB_REF} == "refs/heads/master" ]]; then
image_tag=latest
chart_version=${current_repo_tag}-nightly-build
fi
sed -i "s/latest/${image_tag}/g" $HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $MINIMAL_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $LEGACY_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $OAM_RUNTIME_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $VELA_ROLLOUT_HELM_CHART/values.yaml
chart_semver=${chart_version#"v"}
sed -i "s/0.1.0/$chart_semver/g" $HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_semver/g" $MINIMAL_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_semver/g" $LEGACY_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_semver/g" $OAM_RUNTIME_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_semver/g" $VELA_ROLLOUT_HELM_CHART/Chart.yaml
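The nightly-build branch of the shell logic above (zero the patch of the latest release tag, strip the leading "v", append a suffix) can be sketched as a small Go helper. This is a hypothetical illustration, assuming tags follow the `vMAJOR.MINOR.PATCH` form; it does not cover the tagged-release branch of the step.

```go
package main

import (
	"fmt"
	"strings"
)

// nightlyChartVersion mirrors the workflow's master-branch case: take the
// latest release tag, reset the patch version to 0, drop the "v" prefix,
// and mark the chart as a nightly build.
func nightlyChartVersion(latestTag string) string {
	parts := strings.SplitN(latestTag, ".", 3) // ["v1", "4", "3"]
	current := parts[0] + "." + parts[1] + ".0" // v1.4.0
	return strings.TrimPrefix(current, "v") + "-nightly-build"
}

func main() {
	fmt.Println(nightlyChartVersion("v1.4.3")) // 1.4.0-nightly-build
}
```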
- name: Install ossutil
run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
- name: Configure Alibaba Cloud OSSUTIL
run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${ENDPOINT} -c .ossutilconfig
- name: sync cloud to local
run: ./ossutil --config-file .ossutilconfig sync oss://$BUCKET/core $LOCAL_OSS_DIRECTORY
- name: add artifacthub stuff to the repo
run: |
rsync $HELM_CHART/README.md $LEGACY_HELM_CHART/README.md
rsync $HELM_CHART/README.md $OAM_RUNTIME_HELM_CHART/README.md
rsync $HELM_CHART/README.md $VELA_ROLLOUT_HELM_CHART/README.md
sed -i "s/ARTIFACT_HUB_REPOSITORY_ID/$ARTIFACT_HUB_REPOSITORY_ID/g" hack/artifacthub/artifacthub-repo.yml
rsync hack/artifacthub/artifacthub-repo.yml $LOCAL_OSS_DIRECTORY
- name: Package helm charts
run: |
helm package $HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $MINIMAL_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $LEGACY_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $OAM_RUNTIME_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $VELA_ROLLOUT_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm repo index --url https://$BUCKET.$ENDPOINT/core $LOCAL_OSS_DIRECTORY
- name: sync local to cloud
run: ./ossutil --config-file .ossutilconfig sync $LOCAL_OSS_DIRECTORY oss://$BUCKET/core -f
ghcr.io/${{ github.repository_owner }}/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
${{ secrets.ACR_DOMAIN }}/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
publish-capabilities:
env:
@@ -227,4 +165,4 @@ jobs:
- name: rsync all capabilities
run: rsync vela-templates/registry/auto-gen/* $CAPABILITY_DIR
- name: sync local to cloud
run: ./ossutil --config-file .ossutilconfig sync $CAPABILITY_DIR oss://$CAPABILITY_BUCKET -f
run: ./ossutil --config-file .ossutilconfig sync $CAPABILITY_DIR oss://$CAPABILITY_BUCKET -f -u


@@ -8,6 +8,10 @@ on:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BUCKET: ${{ secrets.CLI_OSS_BUCKET }}
ENDPOINT: ${{ secrets.CLI_OSS_ENDPOINT }}
ACCESS_KEY: ${{ secrets.CLI_OSS_ACCESS_KEY }}
ACCESS_KEY_SECRET: ${{ secrets.CLI_OSS_ACCESS_KEY_SECRET }}
jobs:
build:
@@ -104,6 +108,23 @@ jobs:
name: sha256sums
path: ./_bin/sha256-${{ steps.get_matrix.outputs.OS }}-${{ steps.get_matrix.outputs.ARCH }}.txt
retention-days: 1
- name: clear the asset
run: |
rm -rf ./_bin/vela/${{ steps.get_matrix.outputs.OS }}-${{ steps.get_matrix.outputs.ARCH }}
mv ./_bin/vela/vela-${{ steps.get_matrix.outputs.OS }}-${{ steps.get_matrix.outputs.ARCH }}.tar.gz ./_bin/vela/vela-${{ env.VELA_VERSION }}-${{ steps.get_matrix.outputs.OS }}-${{ steps.get_matrix.outputs.ARCH }}.tar.gz
mv ./_bin/vela/vela-${{ steps.get_matrix.outputs.OS }}-${{ steps.get_matrix.outputs.ARCH }}.zip ./_bin/vela/vela-${{ env.VELA_VERSION }}-${{ steps.get_matrix.outputs.OS }}-${{ steps.get_matrix.outputs.ARCH }}.zip
- name: Install ossutil
run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
- name: Configure Alibaba Cloud OSSUTIL
run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${ENDPOINT} -c .ossutilconfig
- name: sync local to cloud
run: ./ossutil --config-file .ossutilconfig sync ./_bin/vela oss://$BUCKET/binary/vela/${{ env.VELA_VERSION }}
- name: sync the latest version file
run: |
echo ${{ env.VELA_VERSION }} > ./latest_version
./ossutil --config-file .ossutilconfig cp -u ./latest_version oss://$BUCKET/binary/vela/latest_version
upload-plugin-homebrew:
needs: build


@@ -31,6 +31,7 @@ import (
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/interfaces"
velatypes "github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/utils/errors"
)
@@ -121,7 +122,11 @@ func (in ManagedResource) NamespacedName() types.NamespacedName {
// ResourceKey computes the key for managed resource, resources with the same key points to the same resource
func (in ManagedResource) ResourceKey() string {
gv, kind := in.GroupVersionKind().ToAPIVersionAndKind()
return strings.Join([]string{gv, kind, in.Cluster, in.Namespace, in.Name}, "/")
cluster := in.Cluster
if cluster == "" {
cluster = velatypes.ClusterLocalName
}
return strings.Join([]string{gv, kind, cluster, in.Namespace, in.Name}, "/")
}
// ComponentKey computes the key for the component which managed resource belongs to
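The fix above can be exercised with a minimal stand-alone sketch: the stand-in `managedResource` type and `resourceKey` method below are hypothetical simplifications of the real `ManagedResource.ResourceKey`, showing that an empty cluster now defaults to `local` so that the same resource is no longer keyed two different ways.

```go
package main

import (
	"fmt"
	"strings"
)

// clusterLocalName mirrors velatypes.ClusterLocalName from the diff.
const clusterLocalName = "local"

// managedResource is a simplified stand-in for the ManagedResource type.
type managedResource struct {
	APIVersion, Kind, Cluster, Namespace, Name string
}

// resourceKey sketches the fixed ResourceKey: an empty cluster defaults
// to "local", so hub-local resources share a key regardless of whether
// the cluster field was set explicitly.
func (in managedResource) resourceKey() string {
	cluster := in.Cluster
	if cluster == "" {
		cluster = clusterLocalName
	}
	return strings.Join([]string{in.APIVersion, in.Kind, cluster, in.Namespace, in.Name}, "/")
}

func main() {
	r := managedResource{APIVersion: "apps/v1", Kind: "Deployment", Namespace: "default", Name: "web"}
	fmt.Println(r.resourceKey()) // apps/v1/Deployment/local/default/web
}
```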


@@ -16,9 +16,15 @@ limitations under the License.
package types
import "github.com/oam-dev/cluster-gateway/pkg/apis/cluster/v1alpha1"
import (
"github.com/oam-dev/cluster-gateway/pkg/apis/cluster/v1alpha1"
"github.com/oam-dev/cluster-gateway/pkg/config"
)
const (
// ClusterLocalName the name for the hub cluster
ClusterLocalName = "local"
// CredentialTypeInternal identifies the virtual cluster from internal kubevela system
CredentialTypeInternal v1alpha1.CredentialType = "Internal"
// CredentialTypeOCMManagedCluster identifies the virtual cluster from ocm
@@ -29,3 +35,8 @@ const (
// ClustersArg indicates the argument for specific clusters to install addon
ClustersArg = "clusters"
)
var (
// AnnotationClusterAlias the annotation key for cluster alias
AnnotationClusterAlias = config.MetaApiGroupName + "/cluster-alias"
)


@@ -71,6 +71,10 @@ const (
LabelConfigProject = "config.oam.dev/project"
// LabelConfigSyncToMultiCluster is the label to decide whether a config will be synchronized to multi-cluster
LabelConfigSyncToMultiCluster = "config.oam.dev/multi-cluster"
// LabelConfigIdentifier is the label for config identifier
LabelConfigIdentifier = "config.oam.dev/identifier"
// AnnotationConfigDescription is the annotation for config description
AnnotationConfigDescription = "config.oam.dev/description"
// AnnotationConfigAlias is the annotation for config alias
AnnotationConfigAlias = "config.oam.dev/alias"
)
@@ -139,4 +143,20 @@ const (
TerraformProvider = "terraform-provider"
// DexConnector is the config type for dex connector
DexConnector = "config-dex-connector"
// ImageRegistry is the config type for image registry
ImageRegistry = "config-image-registry"
// HelmRepository is the config type for Helm chart repository
HelmRepository = "config-helm-repository"
)
const (
// TerraformComponentPrefix is the prefix of component type of terraform-xxx
TerraformComponentPrefix = "terraform-"
// ProviderAppPrefix is the prefix of the application to create a Terraform Provider
ProviderAppPrefix = "config-terraform-provider"
// ProviderNamespace is the namespace of Terraform Cloud Provider
ProviderNamespace = "default"
// VelaCoreConfig is to mark application, config and its secret or Terraform provider belong to a KubeVela config
VelaCoreConfig = "velacore-config"
)


@@ -86,7 +86,7 @@ helm install --create-namespace -n vela-system kubevela kubevela/vela-core --wai
| `multicluster.clusterGateway.replicaCount` | ClusterGateway replica count | `1` |
| `multicluster.clusterGateway.port` | ClusterGateway port | `9443` |
| `multicluster.clusterGateway.image.repository` | ClusterGateway image repository | `oamdev/cluster-gateway` |
| `multicluster.clusterGateway.image.tag` | ClusterGateway image tag | `v1.3.0` |
| `multicluster.clusterGateway.image.tag` | ClusterGateway image tag | `v1.3.2` |
| `multicluster.clusterGateway.image.pullPolicy` | ClusterGateway image pull policy | `IfNotPresent` |
| `multicluster.clusterGateway.resources.limits.cpu` | ClusterGateway cpu limit | `100m` |
| `multicluster.clusterGateway.resources.limits.memory` | ClusterGateway memory limit | `200Mi` |

File diff suppressed because it is too large


@@ -106,7 +106,7 @@ spec:
}]
}
}
parameter: #PatchParams | close({
parameter: *#PatchParams | close({
// +usage=Specify the commands for multiple containers
containers: [...#PatchParams]
})


@@ -0,0 +1,73 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/config-image-registry.cue
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
annotations:
custom.definition.oam.dev/alias.config.oam.dev: Image Registry
definition.oam.dev/description: Config information to authenticate image registry
labels:
custom.definition.oam.dev/catalog.config.oam.dev: velacore-config
custom.definition.oam.dev/multi-cluster.config.oam.dev: "true"
custom.definition.oam.dev/type.config.oam.dev: image-registry
custom.definition.oam.dev/ui-hidden: "true"
name: config-image-registry
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"encoding/base64"
"encoding/json"
)
output: {
apiVersion: "v1"
kind: "Secret"
metadata: {
name: context.name
namespace: context.namespace
labels: {
"config.oam.dev/catalog": "velacore-config"
"config.oam.dev/type": "image-registry"
"config.oam.dev/multi-cluster": "true"
"config.oam.dev/identifier": parameter.registry
"config.oam.dev/sub-type": "auth"
}
}
if parameter.auth != _|_ {
type: "kubernetes.io/dockerconfigjson"
}
if parameter.auth == _|_ {
type: "Opaque"
}
if parameter.auth != _|_ {
stringData: ".dockerconfigjson": json.Marshal({
auths: "\(parameter.registry)": {
username: parameter.auth.username
password: parameter.auth.password
if parameter.auth.email != _|_ {
email: parameter.auth.email
}
auth: base64.Encode(null, (parameter.auth.username + ":" + parameter.auth.password))
}
})
}
}
parameter: {
// +usage=Image registry FQDN
registry: string
// +usage=Authenticate the image registry
auth?: {
// +usage=Private Image registry username
username: string
// +usage=Private Image registry password
password: string
// +usage=Private Image registry email
email?: string
}
}
workload:
type: autodetects.core.oam.dev


@@ -69,7 +69,7 @@ spec:
}]
}
}
parameter: #PatchParams | close({
parameter: *#PatchParams | close({
// +usage=Specify the container image for multiple containers
containers: [...#PatchParams]
})


@@ -46,7 +46,7 @@ spec:
}]
}
if _baseEnv != _|_ {
_baseEnvMap: {for envVar in _baseEnv {"\(envVar.name)": envVar.value}}
_baseEnvMap: {for envVar in _baseEnv {"\(envVar.name)": envVar}}
// +patchStrategy=replace
env: [ for envVar in _baseEnv if _delKeys[envVar.name] == _|_ && !_params.replace {
name: envVar.name
@@ -54,7 +54,12 @@ spec:
value: _params.env[envVar.name]
}
if _params.env[envVar.name] == _|_ {
value: envVar.value
if envVar.value != _|_ {
value: envVar.value
}
if envVar.valueFrom != _|_ {
valueFrom: envVar.valueFrom
}
}
}] + [ for k, v in _params.env if _delKeys[k] == _|_ && (_params.replace || _baseEnvMap[k] == _|_) {
name: k
@@ -92,7 +97,7 @@ spec:
}]
}
}
parameter: #PatchParams | close({
parameter: *#PatchParams | close({
// +usage=Specify the environment variables for multiple containers
containers: [...#PatchParams]
})
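The behavior the env-trait fix above restores (keeping the whole base entry, so `valueFrom` references survive a merge instead of being flattened to a missing `value`) can be sketched in Go. The `envVar` type, `mergeEnv` helper, and string-typed `ValueFrom` are simplifications for illustration; the real CUE also handles unset keys and a `replace` flag, which this sketch omits.

```go
package main

import "fmt"

// envVar is a hypothetical stand-in for a container env entry; normally
// exactly one of Value or ValueFrom is set.
type envVar struct {
	Name, Value, ValueFrom string
}

// mergeEnv applies overrides by name while preserving the full base entry
// (including ValueFrom) for anything not overridden - the essence of the
// fixed _baseEnvMap, which now maps name -> envVar instead of name -> value.
func mergeEnv(base []envVar, overrides map[string]string) []envVar {
	out := make([]envVar, 0, len(base))
	seen := map[string]bool{}
	for _, e := range base {
		if v, ok := overrides[e.Name]; ok {
			out = append(out, envVar{Name: e.Name, Value: v})
		} else {
			out = append(out, e) // keeps ValueFrom references intact
		}
		seen[e.Name] = true
	}
	for k, v := range overrides {
		if !seen[k] {
			out = append(out, envVar{Name: k, Value: v})
		}
	}
	return out
}

func main() {
	base := []envVar{
		{Name: "POD_IP", ValueFrom: "fieldRef:status.podIP"},
		{Name: "MODE", Value: "dev"},
	}
	fmt.Println(mergeEnv(base, map[string]string{"MODE": "prod"}))
}
```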


@@ -132,10 +132,9 @@ spec:
parameter.labels
}
if parameter.addRevisionLabel {
"app.oam.dev/appRevision": context.appRevision
"app.oam.dev/revision": context.revision
}
"app.oam.dev/component": context.name
"app.oam.dev/revision": context.revision
}
if parameter.annotations != _|_ {
annotations: parameter.annotations
@@ -333,7 +332,7 @@ spec:
exposeType: *"ClusterIP" | "NodePort" | "LoadBalancer" | "ExternalName"
// +ignore
// +usage=If addRevisionLabel is true, the appRevision label will be added to the underlying pods
// +usage=If addRevisionLabel is true, the revision label will be added to the underlying pods
addRevisionLabel: *false | bool
// +usage=Commands to run in the container
@@ -455,7 +454,7 @@ spec:
readinessProbe?: #HealthProbe
// +usage=Specify the hostAliases to add
hostAliases: [...{
hostAliases?: [...{
ip: string
hostnames: [...string]
}]


@@ -104,7 +104,7 @@ multicluster:
port: 9443
image:
repository: oamdev/cluster-gateway
tag: v1.3.0
tag: v1.3.2
pullPolicy: IfNotPresent
resources:
limits:


@@ -105,7 +105,7 @@ helm install --create-namespace -n vela-system kubevela kubevela/vela-minimal --
| `multicluster.clusterGateway.replicaCount` | ClusterGateway replica count | `1` |
| `multicluster.clusterGateway.port` | ClusterGateway port | `9443` |
| `multicluster.clusterGateway.image.repository` | ClusterGateway image repository | `oamdev/cluster-gateway` |
| `multicluster.clusterGateway.image.tag` | ClusterGateway image tag | `v1.3.0` |
| `multicluster.clusterGateway.image.tag` | ClusterGateway image tag | `v1.3.2` |
| `multicluster.clusterGateway.image.pullPolicy` | ClusterGateway image pull policy | `IfNotPresent` |
| `multicluster.clusterGateway.resources.limits.cpu` | ClusterGateway cpu limit | `100m` |
| `multicluster.clusterGateway.resources.limits.memory` | ClusterGateway memory limit | `200Mi` |

File diff suppressed because it is too large


@@ -106,7 +106,7 @@ spec:
}]
}
}
parameter: #PatchParams | close({
parameter: *#PatchParams | close({
// +usage=Specify the commands for multiple containers
containers: [...#PatchParams]
})


@@ -0,0 +1,73 @@
# Code generated by KubeVela templates. DO NOT EDIT. Please edit the original cue file.
# Definition source cue file: vela-templates/definitions/internal/config-image-registry.cue
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
annotations:
custom.definition.oam.dev/alias.config.oam.dev: Image Registry
definition.oam.dev/description: Config information to authenticate image registry
labels:
custom.definition.oam.dev/catalog.config.oam.dev: velacore-config
custom.definition.oam.dev/multi-cluster.config.oam.dev: "true"
custom.definition.oam.dev/type.config.oam.dev: image-registry
custom.definition.oam.dev/ui-hidden: "true"
name: config-image-registry
namespace: {{ include "systemDefinitionNamespace" . }}
spec:
schematic:
cue:
template: |
import (
"encoding/base64"
"encoding/json"
)
output: {
apiVersion: "v1"
kind: "Secret"
metadata: {
name: context.name
namespace: context.namespace
labels: {
"config.oam.dev/catalog": "velacore-config"
"config.oam.dev/type": "image-registry"
"config.oam.dev/multi-cluster": "true"
"config.oam.dev/identifier": parameter.registry
"config.oam.dev/sub-type": "auth"
}
}
if parameter.auth != _|_ {
type: "kubernetes.io/dockerconfigjson"
}
if parameter.auth == _|_ {
type: "Opaque"
}
if parameter.auth != _|_ {
stringData: ".dockerconfigjson": json.Marshal({
auths: "\(parameter.registry)": {
username: parameter.auth.username
password: parameter.auth.password
if parameter.auth.email != _|_ {
email: parameter.auth.email
}
auth: base64.Encode(null, (parameter.auth.username + ":" + parameter.auth.password))
}
})
}
}
parameter: {
// +usage=Image registry FQDN
registry: string
// +usage=Authenticate the image registry
auth?: {
// +usage=Private Image registry username
username: string
// +usage=Private Image registry password
password: string
// +usage=Private Image registry email
email?: string
}
}
workload:
type: autodetects.core.oam.dev


@@ -69,7 +69,7 @@ spec:
}]
}
}
parameter: #PatchParams | close({
parameter: *#PatchParams | close({
// +usage=Specify the container image for multiple containers
containers: [...#PatchParams]
})


@@ -46,7 +46,7 @@ spec:
}]
}
if _baseEnv != _|_ {
_baseEnvMap: {for envVar in _baseEnv {"\(envVar.name)": envVar.value}}
_baseEnvMap: {for envVar in _baseEnv {"\(envVar.name)": envVar}}
// +patchStrategy=replace
env: [ for envVar in _baseEnv if _delKeys[envVar.name] == _|_ && !_params.replace {
name: envVar.name
@@ -54,7 +54,12 @@ spec:
value: _params.env[envVar.name]
}
if _params.env[envVar.name] == _|_ {
value: envVar.value
if envVar.value != _|_ {
value: envVar.value
}
if envVar.valueFrom != _|_ {
valueFrom: envVar.valueFrom
}
}
}] + [ for k, v in _params.env if _delKeys[k] == _|_ && (_params.replace || _baseEnvMap[k] == _|_) {
name: k
@@ -92,7 +97,7 @@ spec:
}]
}
}
parameter: #PatchParams | close({
parameter: *#PatchParams | close({
// +usage=Specify the environment variables for multiple containers
containers: [...#PatchParams]
})


@@ -132,10 +132,9 @@ spec:
parameter.labels
}
if parameter.addRevisionLabel {
"app.oam.dev/appRevision": context.appRevision
"app.oam.dev/revision": context.revision
}
"app.oam.dev/component": context.name
"app.oam.dev/revision": context.revision
}
if parameter.annotations != _|_ {
annotations: parameter.annotations
@@ -333,7 +332,7 @@ spec:
exposeType: *"ClusterIP" | "NodePort" | "LoadBalancer" | "ExternalName"
// +ignore
// +usage=If addRevisionLabel is true, the appRevision label will be added to the underlying pods
// +usage=If addRevisionLabel is true, the revision label will be added to the underlying pods
addRevisionLabel: *false | bool
// +usage=Commands to run in the container
@@ -455,7 +454,7 @@ spec:
readinessProbe?: #HealthProbe
// +usage=Specify the hostAliases to add
hostAliases: [...{
hostAliases?: [...{
ip: string
hostnames: [...string]
}]


@@ -107,7 +107,7 @@ multicluster:
port: 9443
image:
repository: oamdev/cluster-gateway
tag: v1.3.0
tag: v1.3.2
pullPolicy: IfNotPresent
resources:
limits:


@@ -7,4 +7,5 @@ coverage:
default:
target: 70%
ignore:
- "**/zz_generated.deepcopy.go"
- "**/zz_generated.deepcopy.go"
- "references/"


@@ -3337,6 +3337,284 @@
}
}
},
"/api/v1/config_types": {
"get": {
"consumes": [
"application/xml",
"application/json"
],
"produces": [
"application/json",
"application/xml"
],
"tags": [
"config"
],
"summary": "list all config types",
"operationId": "listConfigTypes",
"parameters": [
{
"type": "string",
"description": "Fuzzy search based on name and description.",
"name": "query",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/v1.ConfigType"
}
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
}
}
}
},
"/api/v1/config_types/{configType}": {
"get": {
"consumes": [
"application/xml",
"application/json"
],
"produces": [
"application/json",
"application/xml"
],
"tags": [
"config"
],
"summary": "get a config type",
"operationId": "getConfigType",
"parameters": [
{
"type": "string",
"description": "identifier of the config type",
"name": "configType",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/v1.ConfigType"
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
}
}
},
"post": {
"consumes": [
"application/xml",
"application/json"
],
"produces": [
"application/json",
"application/xml"
],
"tags": [
"config"
],
"summary": "create or update a config",
"operationId": "createConfig",
"parameters": [
{
"type": "string",
"description": "identifier of the config type",
"name": "configType",
"in": "path",
"required": true
},
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1.CreateConfigRequest"
}
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/v1.EmptyResponse"
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
},
"404": {
"description": "Not Found",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
}
}
}
},
"/api/v1/config_types/{configType}/configs": {
"get": {
"consumes": [
"application/xml",
"application/json"
],
"produces": [
"application/json",
"application/xml"
],
"tags": [
"config"
],
"summary": "get configs from a config type",
"operationId": "getConfigs",
"parameters": [
{
"type": "string",
"description": "identifier of the config",
"name": "configType",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/*v1.Config"
}
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
}
}
}
},
"/api/v1/config_types/{configType}/configs/{name}": {
"get": {
"consumes": [
"application/xml",
"application/json"
],
"produces": [
"application/json",
"application/xml"
],
"tags": [
"config"
],
"summary": "get a config from a config type",
"operationId": "getConfig",
"parameters": [
{
"type": "string",
"description": "identifier of the config type",
"name": "configType",
"in": "path",
"required": true
},
{
"type": "string",
"description": "identifier of the config",
"name": "name",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/*v1.Config"
}
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
}
}
},
"delete": {
"consumes": [
"application/xml",
"application/json"
],
"produces": [
"application/json",
"application/xml"
],
"tags": [
"config"
],
"summary": "delete a config",
"operationId": "deleteConfig",
"parameters": [
{
"type": "string",
"description": "identifier of the config type",
"name": "configType",
"in": "path",
"required": true
},
{
"type": "string",
"description": "identifier of the config",
"name": "name",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/v1.EmptyResponse"
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
},
"404": {
"description": "Not Found",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
}
}
}
},
"/api/v1/definitions": {
"get": {
"consumes": [
@@ -3862,6 +4140,55 @@
}
}
},
"/api/v1/projects/{projectName}/configs": {
"get": {
"consumes": [
"application/xml",
"application/json"
],
"produces": [
"application/json",
"application/xml"
],
"tags": [
"project"
],
"summary": "get configs which are in a project",
"operationId": "getConfigs",
"parameters": [
{
"type": "string",
"description": "config type",
"name": "configType",
"in": "query"
},
{
"type": "string",
"description": "identifier of the project",
"name": "projectName",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/*v1.Config"
}
}
},
"400": {
"description": "Bad Request",
"schema": {
"$ref": "#/definitions/bcode.Bcode"
}
}
}
}
},
"/api/v1/projects/{projectName}/permissions": {
"get": {
"consumes": [
@@ -5302,6 +5629,7 @@
},
"definitions": {
"*v1.ApplicationTriggerBase": {},
"*v1.Config": {},
"*v1.EmptyResponse": {},
"addon.Dependency": {
"properties": {
@@ -5354,6 +5682,22 @@
}
}
},
"addon.GitlabAddonSource": {
"properties": {
"path": {
"type": "string"
},
"repo": {
"type": "string"
},
"token": {
"type": "string"
},
"url": {
"type": "string"
}
}
},
"addon.HelmSource": {
"properties": {
"url": {
@@ -6623,34 +6967,6 @@
}
}
},
"model.SystemInfo": {
"required": [
"createTime",
"updateTime",
"installID",
"enableCollection",
"loginType"
],
"properties": {
"createTime": {
"type": "string",
"format": "date-time"
},
"enableCollection": {
"type": "boolean"
},
"installID": {
"type": "string"
},
"loginType": {
"type": "string"
},
"updateTime": {
"type": "string",
"format": "date-time"
}
}
},
"model.WorkflowStepStatus": {
"required": [
"id",
@@ -7039,6 +7355,9 @@
"gitee": {
"$ref": "#/definitions/addon.GiteeAddonSource"
},
"gitlab": {
"$ref": "#/definitions/addon.GitlabAddonSource"
},
"helm": {
"$ref": "#/definitions/addon.HelmSource"
},
@@ -7209,12 +7528,12 @@
},
"v1.ApplicationDeployResponse": {
"required": [
"version",
"status",
"note",
"createTime",
"envName",
"triggerType"
"triggerType",
"createTime",
"version",
"status"
],
"properties": {
"codeInfo": {
@@ -7748,6 +8067,31 @@
}
}
},
"v1.ConfigType": {
"required": [
"definitions",
"alias",
"name",
"description"
],
"properties": {
"alias": {
"type": "string"
},
"definitions": {
"type": "array",
"items": {
"type": "string"
}
},
"description": {
"type": "string"
},
"name": {
"type": "string"
}
}
},
"v1.ConnectCloudClusterRequest": {
"required": [
"accessKeyID",
@@ -7797,6 +8141,9 @@
"gitee": {
"$ref": "#/definitions/addon.GiteeAddonSource"
},
"gitlab": {
"$ref": "#/definitions/addon.GitlabAddonSource"
},
"helm": {
"$ref": "#/definitions/addon.HelmSource"
},
@@ -8093,6 +8440,31 @@
}
}
},
"v1.CreateConfigRequest": {
"required": [
"name",
"alias",
"project",
"componentType"
],
"properties": {
"alias": {
"type": "string"
},
"componentType": {
"type": "string"
},
"name": {
"type": "string"
},
"project": {
"type": "string"
},
"properties": {
"type": "string"
}
}
},
"v1.CreateEnvRequest": {
"required": [
"name",
@@ -8304,11 +8676,11 @@
},
"v1.DetailAddonResponse": {
"required": [
"name",
"description",
"icon",
"version",
"description",
"invisible",
"name",
"icon",
"schema",
"uiSchema",
"definitions",
@@ -8388,12 +8760,12 @@
},
"v1.DetailApplicationResponse": {
"required": [
"alias",
"icon",
"project",
"description",
"icon",
"name",
"createTime",
"name",
"alias",
"updateTime",
"policies",
"envBindings",
@@ -8455,20 +8827,20 @@
},
"v1.DetailClusterResponse": {
"required": [
"status",
"apiServerURL",
"dashboardURL",
"kubeConfigSecret",
"icon",
"labels",
"kubeConfig",
"updateTime",
"reason",
"provider",
"name",
"alias",
"description",
"reason",
"icon",
"dashboardURL",
"createTime",
"provider",
"description",
"status",
"updateTime",
"apiServerURL",
"kubeConfig",
"labels",
"kubeConfigSecret",
"resourceInfo"
],
"properties": {
@@ -8526,14 +8898,14 @@
},
"v1.DetailComponentResponse": {
"required": [
"createTime",
"updateTime",
"appPrimaryKey",
"type",
"main",
"name",
"updateTime",
"createTime",
"creator",
"alias",
"type",
"main",
"appPrimaryKey",
"definition"
],
"properties": {
@@ -8635,13 +9007,13 @@
},
"v1.DetailPolicyResponse": {
"required": [
"name",
"type",
"description",
"creator",
"properties",
"createTime",
"updateTime",
"name"
"updateTime"
],
"properties": {
"createTime": {
@@ -8671,17 +9043,17 @@
},
"v1.DetailRevisionResponse": {
"required": [
"updateTime",
"triggerType",
"updateTime",
"version",
"deployUser",
"reason",
"createTime",
"status",
"deployUser",
"note",
"workflowName",
"envName",
"appPrimaryKey",
"reason",
"note"
"appPrimaryKey"
],
"properties": {
"appPrimaryKey": {
@@ -8737,8 +9109,8 @@
"required": [
"project",
"createTime",
"name",
"updateTime"
"updateTime",
"name"
],
"properties": {
"alias": {
@@ -8778,11 +9150,11 @@
},
"v1.DetailUserResponse": {
"required": [
"disabled",
"createTime",
"lastLoginTime",
"name",
"email",
"disabled",
"createTime",
"projects",
"roles"
],
@@ -8823,12 +9195,12 @@
},
"v1.DetailWorkflowRecordResponse": {
"required": [
"status",
"name",
"namespace",
"workflowName",
"workflowAlias",
"applicationRevision",
"status",
"deployTime",
"deployUser",
"note",
@@ -8880,14 +9252,14 @@
},
"v1.DetailWorkflowResponse": {
"required": [
"envName",
"createTime",
"updateTime",
"name",
"enable",
"default",
"description",
"name",
"alias"
"envName",
"createTime",
"alias",
"description"
],
"properties": {
"alias": {
@@ -9466,11 +9838,11 @@
},
"v1.LoginUserInfoResponse": {
"required": [
"disabled",
"createTime",
"lastLoginTime",
"name",
"email",
"disabled",
"projects",
"platformPermissions",
"projectPermissions"
@@ -9778,6 +10150,24 @@
}
}
},
"v1.SystemInfo": {
"required": [
"installID",
"enableCollection",
"loginType"
],
"properties": {
"enableCollection": {
"type": "boolean"
},
"installID": {
"type": "string"
},
"loginType": {
"type": "string"
}
}
},
"v1.SystemInfoRequest": {
"required": [
"enableCollection",
@@ -9789,6 +10179,9 @@
},
"loginType": {
"type": "string"
},
"velaAddress": {
"type": "string"
}
}
},
@@ -9797,15 +10190,9 @@
"installID",
"enableCollection",
"loginType",
"createTime",
"updateTime",
"systemVersion"
],
"properties": {
"createTime": {
"type": "string",
"format": "date-time"
},
"enableCollection": {
"type": "boolean"
},
@@ -9817,10 +10204,6 @@
},
"systemVersion": {
"$ref": "#/definitions/v1.SystemVersion"
},
"updateTime": {
"type": "string",
"format": "date-time"
}
}
},
@@ -9889,6 +10272,9 @@
"gitee": {
"$ref": "#/definitions/addon.GiteeAddonSource"
},
"gitlab": {
"$ref": "#/definitions/addon.GitlabAddonSource"
},
"helm": {
"$ref": "#/definitions/addon.HelmSource"
},


@@ -0,0 +1,23 @@
name: mock-addon
version: 1.0.0
description: Extended workload to do continuous and progressive delivery
icon: https://raw.githubusercontent.com/fluxcd/flux/master/docs/_files/weave-flux.png
url: https://fluxcd.io
tags:
  - extended_workload
  - gitops
  - only_example
deployTo:
  control_plane: true
  runtime_cluster: false
dependencies: []
#- name: addon_name
# Setting invisible means this addon won't be listed and will only be enabled
# when another addon depends on it. For example, terraform-alibaba depends on
# terraform, which is invisible: when terraform-alibaba is enabled, terraform
# is enabled automatically.
# default: false
invisible: false


@@ -0,0 +1,14 @@
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: mock-addon
  namespace: vela-system
spec:
  components:
    - name: ns-example-system
      type: raw
      properties:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: mock-system

go.mod

@@ -63,10 +63,11 @@ require (
github.com/wonderflow/cert-manager-api v1.0.3
go.mongodb.org/mongo-driver v1.5.1
go.uber.org/zap v1.18.1
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6 // indirect
golang.org/x/tools v0.1.6 // indirect
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211
golang.org/x/tools v0.1.11-0.20220316014157-77aa08bb151a // indirect
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
gopkg.in/gomail.v2 v2.0.0-20160411212932-81ebce5c23df
gopkg.in/src-d/go-git.v4 v4.13.1
@@ -96,6 +97,8 @@ require (
sigs.k8s.io/yaml v1.2.0
)
require github.com/robfig/cron/v3 v3.0.1
require (
cloud.google.com/go v0.81.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
@@ -105,7 +108,7 @@ require (
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/BurntSushi/toml v0.3.1 // indirect
github.com/BurntSushi/toml v0.4.1 // indirect
github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
@@ -248,11 +251,10 @@ require (
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
golang.org/x/mod v0.4.2 // indirect
golang.org/x/net v0.0.0-20211029224645-99673261e6eb // indirect
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 // indirect
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b // indirect
golang.org/x/text v0.3.6 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect

go.sum

@@ -110,8 +110,9 @@ github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZ
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v0.4.1 h1:GaI7EiDXDRfa8VshkTj7Fym7ha+y8/XxIgD2okUIjLw=
github.com/BurntSushi/toml v0.4.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DATA-DOG/go-sqlmock v1.3.3/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60=
@@ -1437,6 +1438,8 @@ github.com/rancher/wrangler v0.4.0/go.mod h1:1cR91WLhZgkZ+U4fV9nVuXqKurWbgXcIReU
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/remyoudompheng/bigfft v0.0.0-20170806203942-52369c62f446/go.mod h1:uYEyJGbgTkfkS4+E/PavXkNJcbFIpEtjt2B0KDQ5+9M=
github.com/retailnext/hllpp v1.0.1-0.20180308014038-101a6d2f8b52/go.mod h1:RDpi1RftBQPUCDRw6SmxeaREsAaRKnOclghuzp/WRzc=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
@@ -1654,7 +1657,7 @@ github.com/yuin/goldmark v1.1.30/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 h1:+lm10QQTNSBd8DVTNGHx7o/IKu9HYDvLMffDhbyLccI=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50 h1:hlE8//ciYMztlGpl/VA+Zm1AcTPHYkHJPbHqE6WJUXE=
@@ -1796,8 +1799,9 @@ golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWP
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97 h1:/UOmuWzQfxxo9UtlXMwuQU8CMgg1eZXqTRwkSQJWKOI=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519 h1:7I4JAnoQBe7ZtJcBaYHi5UtiO8tQHbUSXxL+pnGRANg=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -1841,8 +1845,9 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.1-0.20200828183125-ce943fd02449/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2 h1:Gz96sIWK3OalVv/I/qNygP42zyoKp3xptRVCWRFEBvo=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 h1:kQgndtyPBW/JIYERgdxfwMYh3AVStj88WQTlNDi2a+o=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1907,9 +1912,10 @@ golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLd
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211029224645-99673261e6eb h1:pirldcYWx7rx7kE5r+9WsOXPXK0+WH5+uZ7uPmJ44uM=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211029224645-99673261e6eb/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd h1:O7DYs+zxREGLKzKoMQrtrEacpb0ZVXA5rIwylE2Xchk=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1922,8 +1928,9 @@ golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602 h1:0Ja1LBD+yisY6RWM/BH7TJVXWsSjs2VwBSmvSX4HdBc=
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a h1:qfl7ob3DIEs3Ml9oLuPwY2N04gymzAW04WsUQHIClgM=
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -2046,14 +2053,15 @@ golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6 h1:foEbQz/B0Oz6YIqu/69kfXPYeFQAuuMYFkjaqXzl5Wo=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e h1:fLOSk5Q00efkSvAm+4xcoXD+RRmLmmulPn5I3Y9F2EM=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b h1:9zKuko04nR4gjZ4+DNjHqRlAJqbJETHwiNKDqTfOjfE=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -2064,8 +2072,9 @@ golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/time v0.0.0-20161028155119-f51c12702a4d/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -2200,8 +2209,8 @@ golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.6 h1:SIasE1FVIQOWz2GEAHFOmoW7xchJcqlucjSULTL0Ag4=
golang.org/x/tools v0.1.6/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
golang.org/x/tools v0.1.11-0.20220316014157-77aa08bb151a h1:ofrrl6c6NG5/IOSx/R1cyiQxxjqlur0h/TvbUhkH0II=
golang.org/x/tools v0.1.11-0.20220316014157-77aa08bb151a/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=


@@ -25,7 +25,7 @@ ifeq (, $(shell which staticcheck))
@{ \
set -e ;\
echo 'installing honnef.co/go/tools/cmd/staticcheck ' ;\
GO111MODULE=off go get honnef.co/go/tools/cmd/staticcheck ;\
GO111MODULE=on go get honnef.co/go/tools/cmd/staticcheck@v0.3.0 ;\
}
STATICCHECK=$(GOBIN)/staticcheck
else
@@ -56,7 +56,7 @@ else
CUE=$(shell which cue)
endif
KUSTOMIZE_VERSION ?= 3.8.2
KUSTOMIZE_VERSION ?= 4.5.4
.PHONY: kustomize
kustomize:


@@ -49,6 +49,7 @@ e2e-apiserver-test:
pkill vela_addon_mock_server || true
go run ./e2e/addon/mock/vela_addon_mock_server.go &
go test -v -coverpkg=./... -coverprofile=/tmp/e2e_apiserver_test.out ./test/e2e-apiserver-test
sleep 15
@$(OK) tests pass
.PHONY: e2e-test


@@ -195,6 +195,10 @@ func GetPatternFromItem(it Item, r AsyncReader, rootPath string) string {
if strings.HasPrefix(relativePath, strings.Join([]string{rootPath, p.Value}, "/")) {
return p.Value
}
if strings.HasPrefix(relativePath, filepath.Join(rootPath, p.Value)) {
// for enabling an addon from a local dir; compatible with both Linux and Windows path separators
return p.Value
}
}
return ""
}
@@ -357,7 +361,7 @@ func readResFile(a *InstallPackage, reader AsyncReader, readPath string) error {
if filename == "parameter.cue" {
return nil
}
file := ElementFile{Data: b, Name: path.Base(readPath)}
file := ElementFile{Data: b, Name: filepath.Base(readPath)}
switch filepath.Ext(filename) {
case ".cue":
a.CUETemplates = append(a.CUETemplates, file)
@@ -375,7 +379,7 @@ func readDefSchemaFile(a *InstallPackage, reader AsyncReader, readPath string) e
if err != nil {
return err
}
a.DefSchemas = append(a.DefSchemas, ElementFile{Data: b, Name: path.Base(readPath)})
a.DefSchemas = append(a.DefSchemas, ElementFile{Data: b, Name: filepath.Base(readPath)})
return nil
}
@@ -386,7 +390,7 @@ func readDefFile(a *UIData, reader AsyncReader, readPath string) error {
return err
}
filename := path.Base(readPath)
file := ElementFile{Data: b, Name: path.Base(readPath)}
file := ElementFile{Data: b, Name: filepath.Base(readPath)}
switch filepath.Ext(filename) {
case ".cue":
a.CUEDefinitions = append(a.CUEDefinitions, file)
@@ -1070,7 +1074,7 @@ func (h *Installer) enableAddon(addon *InstallPackage) error {
h.addon = addon
err = checkAddonVersionMeetRequired(h.ctx, addon.SystemRequirements, h.cli, h.dc)
if err != nil {
return ErrVersionMismatch
return VersionUnMatchError{addonName: addon.Name, err: err}
}
if err = h.installDependency(addon); err != nil {
@@ -1344,7 +1348,7 @@ func checkAddonVersionMeetRequired(ctx context.Context, require *SystemRequireme
return err
}
if !res {
return fmt.Errorf("vela cli/ux version: %s cannot meet requirement", version2.VelaVersion)
return fmt.Errorf("vela cli/ux version: %s require: %s", version2.VelaVersion, require.VelaVersion)
}
}
@@ -1361,7 +1365,7 @@ func checkAddonVersionMeetRequired(ctx context.Context, require *SystemRequireme
return err
}
if !res {
return fmt.Errorf("the vela core controller: %s cannot meet requirement ", imageVersion)
return fmt.Errorf("the vela core controller: %s require: %s", imageVersion, require.VelaVersion)
}
}
@@ -1382,7 +1386,7 @@ func checkAddonVersionMeetRequired(ctx context.Context, require *SystemRequireme
}
if !res {
return fmt.Errorf("the kubernetes version %s cannot meet requirement", k8sVersion.GitVersion)
return fmt.Errorf("the kubernetes version %s require: %s", k8sVersion.GitVersion, require.KubernetesVersion)
}
}


@@ -21,6 +21,8 @@ import (
"fmt"
"time"
"github.com/oam-dev/kubevela/pkg/utils/apply"
"github.com/oam-dev/cluster-gateway/pkg/apis/cluster/v1alpha1"
appsv1 "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -362,6 +364,21 @@ var _ = Describe("func addon update ", func() {
})
})
var _ = Describe("test enable addon in local dir", func() {
BeforeEach(func() {
app := v1beta1.Application{ObjectMeta: metav1.ObjectMeta{Namespace: "vela-system", Name: "addon-example"}}
Expect(k8sClient.Delete(ctx, &app)).Should(SatisfyAny(BeNil(), util.NotFoundMatcher{}))
})
It("test enable addon by local dir", func() {
ctx := context.Background()
err := EnableAddonByLocalDir(ctx, "example", "./testdata/example", k8sClient, dc, apply.NewAPIApplicator(k8sClient), cfg, map[string]interface{}{"example": "test"})
Expect(err).Should(BeNil())
app := v1beta1.Application{}
Expect(k8sClient.Get(ctx, types2.NamespacedName{Namespace: "vela-system", Name: "addon-example"}, &app)).Should(BeNil())
})
})
const (
appYaml = `apiVersion: core.oam.dev/v1beta1
kind: Application


@@ -20,6 +20,7 @@ import (
"context"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"net/http"
"net/http/httptest"
@@ -35,7 +36,7 @@ import (
v1alpha12 "github.com/oam-dev/cluster-gateway/pkg/apis/cluster/v1alpha1"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -303,7 +304,7 @@ func TestGetAddonStatus(t *testing.T) {
getFunc := test.MockGetFn(func(ctx context.Context, key client.ObjectKey, obj client.Object) error {
switch key.Name {
case "addon-disabled", "disabled":
return errors.NewNotFound(schema.GroupResource{Group: "apiVersion: core.oam.dev/v1beta1", Resource: "app"}, key.Name)
return kerrors.NewNotFound(schema.GroupResource{Group: "apiVersion: core.oam.dev/v1beta1", Resource: "app"}, key.Name)
case "addon-suspend":
o := obj.(*v1beta1.Application)
app := &v1beta1.Application{}
@@ -897,3 +898,11 @@ func TestRenderCUETemplate(t *testing.T) {
assert.True(t, component.Type == "raw")
assert.True(t, config["metadata"].(map[string]interface{})["labels"].(map[string]interface{})["version"] == "1.0.1")
}
func TestCheckEnableAddonErrorWhenMissMatch(t *testing.T) {
version2.VelaVersion = "v1.3.0"
i := InstallPackage{Meta: Meta{SystemRequirements: &SystemRequirements{VelaVersion: ">=1.4.0"}}}
installer := &Installer{}
err := installer.enableAddon(&i)
assert.Equal(t, errors.As(err, &VersionUnMatchError{}), true)
}


@@ -19,10 +19,12 @@ package addon
import (
"context"
"fmt"
"strings"
"sync"
"time"
"github.com/oam-dev/kubevela/pkg/apiserver/log"
"github.com/oam-dev/kubevela/pkg/utils"
)
// We have three addon layers here
@@ -106,7 +108,7 @@ func (u *Cache) GetUIData(r Registry, addonName, version string) (*UIData, error
versionedRegistry := BuildVersionedRegistry(r.Name, r.Helm.URL)
addon, err = versionedRegistry.GetAddonUIData(context.Background(), addonName, version)
if err != nil {
log.Logger.Errorf("fail to get addons from registry %s for cache updating, %v", r.Name, err)
log.Logger.Errorf("fail to get addons from registry %s for cache updating, %v", utils.Sanitize(r.Name), err)
return nil, err
}
}
@@ -117,28 +119,27 @@ func (u *Cache) GetUIData(r Registry, addonName, version string) (*UIData, error
// ListUIData will always list UIData from cache first, if not exist, read from source.
func (u *Cache) ListUIData(r Registry) ([]*UIData, error) {
var err error
listAddons := u.listCachedUIData(r.Name)
if listAddons != nil {
return listAddons, nil
}
var listAddons []*UIData
if !IsVersionRegistry(r) {
addonMeta, err := u.ListAddonMeta(r)
listAddons = u.listCachedUIData(r.Name)
if listAddons != nil {
return listAddons, nil
}
listAddons, err = u.listUIDataAndCache(r)
if err != nil {
return nil, err
}
listAddons, err = r.ListUIData(addonMeta, UIMetaOptions)
if err != nil {
return nil, fmt.Errorf("fail to get addons from registry %s, %w", r.Name, err)
}
} else {
versionedRegistry := BuildVersionedRegistry(r.Name, r.Helm.URL)
listAddons, err = versionedRegistry.ListAddon()
listAddons = u.listVersionRegistryCachedUIData(r.Name)
if listAddons != nil {
return listAddons, nil
}
listAddons, err = u.listVersionRegistryUIDataAndCache(r)
if err != nil {
log.Logger.Errorf("fail to get addons from registry %s for cache updating, %v", r.Name, err)
return nil, err
}
}
u.putAddonUIData2Cache(r.Name, listAddons)
return listAddons, nil
}
@@ -173,6 +174,27 @@ func (u *Cache) listCachedUIData(name string) []*UIData {
return d
}
// listVersionRegistryCachedUIData will get cached addons from specified VersionRegistry in cache
func (u *Cache) listVersionRegistryCachedUIData(name string) []*UIData {
if u == nil {
return nil
}
u.mutex.RLock()
defer u.mutex.RUnlock()
d, ok := u.versionedUIData[name]
if !ok {
return nil
}
var uiDatas []*UIData
for version, uiData := range d {
if !strings.Contains(version, "-latest") {
uiDatas = append(uiDatas, uiData)
}
}
return uiDatas
}
// getCachedAddonMeta will get cached registry meta from specified registry in cache
func (u *Cache) getCachedAddonMeta(name string) map[string]SourceMeta {
if u == nil {
@@ -258,35 +280,51 @@ func (u *Cache) discoverAndRefreshRegistry() {
for _, r := range registries {
if !IsVersionRegistry(r) {
registryMeta, err := r.ListAddonMeta()
_, err = u.listUIDataAndCache(r)
if err != nil {
log.Logger.Errorf("fail to list registry %s metadata, %v", r.Name, err)
continue
}
u.putAddonMeta2Cache(r.Name, registryMeta)
uiData, err := r.ListUIData(registryMeta, UIMetaOptions)
if err != nil {
log.Logger.Errorf("fail to get addons from registry %s for cache updating, %v", r.Name, err)
continue
}
u.putAddonUIData2Cache(r.Name, uiData)
} else {
versionedRegistry := BuildVersionedRegistry(r.Name, r.Helm.URL)
uiDatas, err := versionedRegistry.ListAddon()
_, err = u.listVersionRegistryUIDataAndCache(r)
if err != nil {
log.Logger.Errorf("fail to get addons from registry %s for cache updating, %v", r.Name, err)
continue
}
for _, addon := range uiDatas {
uiData, err := versionedRegistry.GetAddonUIData(context.Background(), addon.Name, addon.Version)
if err != nil {
log.Logger.Errorf("fail to get addon from registry %s, addon %s version %s for cache updating, %v", addon.Name, r.Name, err)
continue
}
u.putVersionedUIData2Cache(r.Name, addon.Name, addon.Version, uiData)
// also cache under the "latest" key: getting addon UIData without a version returns this value as the latest data.
u.putVersionedUIData2Cache(r.Name, addon.Name, "latest", uiData)
}
}
}
}
func (u *Cache) listUIDataAndCache(r Registry) ([]*UIData, error) {
registryMeta, err := r.ListAddonMeta()
if err != nil {
log.Logger.Errorf("fail to list registry %s metadata, %v", r.Name, err)
return nil, err
}
u.putAddonMeta2Cache(r.Name, registryMeta)
uiData, err := r.ListUIData(registryMeta, UIMetaOptions)
if err != nil {
log.Logger.Errorf("fail to get addons from registry %s for cache updating, %v", r.Name, err)
return nil, err
}
u.putAddonUIData2Cache(r.Name, uiData)
return uiData, nil
}
func (u *Cache) listVersionRegistryUIDataAndCache(r Registry) ([]*UIData, error) {
versionedRegistry := BuildVersionedRegistry(r.Name, r.Helm.URL)
uiDatas, err := versionedRegistry.ListAddon()
if err != nil {
log.Logger.Errorf("fail to get addons from registry %s for cache updating, %v", r.Name, err)
return nil, err
}
for _, addon := range uiDatas {
uiData, err := versionedRegistry.GetAddonUIData(context.Background(), addon.Name, addon.Version)
if err != nil {
log.Logger.Errorf("fail to get addon from versioned registry %s, addon %s version %s for cache updating, %v", r.Name, addon.Name, addon.Version, err)
continue
}
u.putVersionedUIData2Cache(r.Name, addon.Name, addon.Version, uiData)
// also cache under the "latest" key: getting addon UIData without a version returns this value as the latest data.
u.putVersionedUIData2Cache(r.Name, addon.Name, "latest", uiData)
}
return uiDatas, nil
}


@@ -31,3 +31,63 @@ func TestPutVersionedUIData2cache(t *testing.T) {
assert.NotEmpty(t, u.versionedUIData["helm-repo"]["fluxcd-1.0.0"])
assert.Equal(t, u.versionedUIData["helm-repo"]["fluxcd-1.0.0"].Name, "fluxcd")
}
func TestPutAddonUIData2Cache(t *testing.T) {
uiData := UIData{Meta: Meta{Name: "fluxcd", Icon: "test.com/fluxcd.png", Version: "1.0.0"}}
addons := []*UIData{&uiData}
name := "helm-repo"
u := NewCache(nil)
u.putAddonUIData2Cache(name, addons)
assert.NotEmpty(t, u.uiData)
assert.Equal(t, u.uiData[name], addons)
}
func TestListCachedUIData(t *testing.T) {
uiData := UIData{Meta: Meta{Name: "fluxcd", Icon: "test.com/fluxcd.png", Version: "1.0.0"}}
addons := []*UIData{&uiData}
name := "helm-repo"
u := NewCache(nil)
u.putAddonUIData2Cache(name, addons)
assert.Equal(t, u.listCachedUIData(name), addons)
}
func TestPutAddonMeta2Cache(t *testing.T) {
addonMeta := map[string]SourceMeta{
"fluxcd": {
Name: "fluxcd",
Items: []Item{
&OSSItem{
tp: FileType,
path: "fluxcd/definitions/helm-release.yaml",
name: "helm-release.yaml",
},
},
},
}
name := "helm-repo"
u := NewCache(nil)
u.putAddonMeta2Cache(name, addonMeta)
assert.NotEmpty(t, u.registryMeta)
assert.Equal(t, u.registryMeta[name], addonMeta)
}
func TestGetCachedAddonMeta(t *testing.T) {
addonMeta := map[string]SourceMeta{
"fluxcd": {
Name: "fluxcd",
Items: []Item{
&OSSItem{
tp: FileType,
path: "fluxcd/definitions/helm-release.yaml",
name: "helm-release.yaml",
},
},
},
}
name := "helm-repo"
u := NewCache(nil)
u.putAddonMeta2Cache(name, addonMeta)
assert.Equal(t, u.getCachedAddonMeta(name), addonMeta)
}


@@ -17,6 +17,8 @@ limitations under the License.
package addon
import (
"fmt"
"github.com/google/go-github/v32/github"
"github.com/pkg/errors"
)
@@ -35,9 +37,6 @@ var (
// ErrNotExist means addon not exists
ErrNotExist = NewAddonError("addon not exist")
// ErrVersionMismatch means addon version requirement mismatch
ErrVersionMismatch = NewAddonError("addon version requirements mismatch")
)
// WrapErrRateLimit return ErrRateLimit if is the situation, or return error directly
@@ -48,3 +47,13 @@ func WrapErrRateLimit(err error) error {
}
return err
}
// VersionUnMatchError means the addon cannot meet the system version requirement
type VersionUnMatchError struct {
err error
addonName string
}
func (v VersionUnMatchError) Error() string {
return fmt.Sprintf("addon %s system requirement mismatch: %v", v.addonName, v.err)
}


@@ -20,6 +20,7 @@ import (
"context"
"encoding/json"
"fmt"
"path/filepath"
"k8s.io/client-go/discovery"
"k8s.io/klog/v2"
@@ -92,7 +93,11 @@ func DisableAddon(ctx context.Context, cli client.Client, name string, config *r
// EnableAddonByLocalDir enable an addon from local dir
func EnableAddonByLocalDir(ctx context.Context, name string, dir string, cli client.Client, dc *discovery.DiscoveryClient, applicator apply.Applicator, config *rest.Config, args map[string]interface{}) error {
r := localReader{dir: dir, name: name}
absDir, err := filepath.Abs(dir)
if err != nil {
return err
}
r := localReader{dir: absDir, name: name}
metas, err := r.ListAddonMeta()
if err != nil {
return err


@@ -37,8 +37,10 @@ func (l localReader) ListAddonMeta() (map[string]SourceMeta, error) {
}
func (l localReader) ReadFile(path string) (string, error) {
file := strings.TrimPrefix(path, l.name+"/")
b, err := ioutil.ReadFile(filepath.Clean(filepath.Join(l.dir, file)))
path = strings.TrimPrefix(path, l.name+"/")
// for windows
path = strings.TrimPrefix(path, l.name+"\\")
b, err := ioutil.ReadFile(filepath.Clean(filepath.Join(l.dir, path)))
if err != nil {
return "", err
}
@@ -46,7 +48,8 @@ func (l localReader) ReadFile(path string) (string, error) {
}
func (l localReader) RelativePath(item Item) string {
return filepath.Join(l.name, strings.TrimPrefix(item.GetPath()+"/", l.dir))
file := strings.TrimPrefix(item.GetPath(), filepath.Clean(l.dir))
return filepath.Join(l.name, file)
}
func recursiveFetchFiles(path string, metas *SourceMeta) error {
@@ -60,7 +63,7 @@ func recursiveFetchFiles(path string, metas *SourceMeta) error {
return err
}
} else {
metas.Items = append(metas.Items, OSSItem{tp: "file", path: fmt.Sprintf("%s/%s", path, file.Name()), name: file.Name()})
metas.Items = append(metas.Items, OSSItem{tp: "file", path: filepath.Join(path, file.Name()), name: file.Name()})
}
}
return nil


@@ -68,6 +68,43 @@ type HelmSource struct {
URL string `json:"url,omitempty" validate:"required"`
}
// SafeCopier is an interface for copying a struct without its sensitive fields, such as Token, Username, Password
type SafeCopier interface {
SafeCopy() interface{}
}
// SafeCopy hides field Token
func (g *GitAddonSource) SafeCopy() *GitAddonSource {
if g == nil {
return nil
}
return &GitAddonSource{
URL: g.URL,
Path: g.Path,
}
}
// SafeCopy hides field Token
func (g *GiteeAddonSource) SafeCopy() *GiteeAddonSource {
if g == nil {
return nil
}
return &GiteeAddonSource{
URL: g.URL,
Path: g.Path,
}
}
// SafeCopy hides field Username, Password
func (h *HelmSource) SafeCopy() *HelmSource {
if h == nil {
return nil
}
return &HelmSource{
URL: h.URL,
}
}
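All three SafeCopy implementations follow the same nil-safe pointer-receiver pattern, which lets callers chain calls like `r.Git.SafeCopy()` without a nil check. A minimal sketch with a hypothetical `Source` type standing in for the addon source structs:

```go
package main

import "fmt"

// Source is a hypothetical registry source with a sensitive Token field.
type Source struct {
	URL   string
	Token string
}

// SafeCopy returns a copy without the Token. The pointer receiver is
// nil-safe, so a nil source simply yields nil instead of panicking.
func (s *Source) SafeCopy() *Source {
	if s == nil {
		return nil
	}
	return &Source{URL: s.URL}
}

func main() {
	var missing *Source
	fmt.Println(missing.SafeCopy() == nil) // true

	src := &Source{URL: "https://example.com/addons", Token: "secret"}
	fmt.Println(src.SafeCopy().Token == "") // true: token redacted
}
```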
// Item is a partial interface for github.RepositoryContent
type Item interface {
// GetType return "dir" or "file"


@@ -115,3 +115,30 @@ func TestConvert2OssItem(t *testing.T) {
assert.Equal(t, expectItemCase, addonMetas)
}
func TestSafeCopy(t *testing.T) {
var git *GitAddonSource
sgit := git.SafeCopy()
assert.Nil(t, sgit)
git = &GitAddonSource{URL: "http://github.com/kubevela", Path: "addons", Token: "123456"}
sgit = git.SafeCopy()
assert.Empty(t, sgit.Token)
assert.Equal(t, "http://github.com/kubevela", sgit.URL)
assert.Equal(t, "addons", sgit.Path)
var gitee *GiteeAddonSource
sgitee := gitee.SafeCopy()
assert.Nil(t, sgitee)
gitee = &GiteeAddonSource{URL: "http://gitee.com/kubevela", Path: "addons", Token: "123456"}
sgitee = gitee.SafeCopy()
assert.Empty(t, sgitee.Token)
assert.Equal(t, "http://gitee.com/kubevela", sgitee.URL)
assert.Equal(t, "addons", sgitee.Path)
var helm *HelmSource
shelm := helm.SafeCopy()
assert.Nil(t, shelm)
helm = &HelmSource{URL: "https://hub.vela.com/chartrepo/addons"}
shelm = helm.SafeCopy()
assert.Equal(t, "https://hub.vela.com/chartrepo/addons", shelm.URL)
}


@@ -52,7 +52,7 @@ type versionedRegistry struct {
}
func (i *versionedRegistry) ListAddon() ([]*UIData, error) {
-chartIndex, err := i.h.GetIndexInfo(i.url, false)
+chartIndex, err := i.h.GetIndexInfo(i.url, false, nil)
if err != nil {
return nil, err
}
@@ -107,7 +107,7 @@ func (i *versionedRegistry) resolveAddonListFromIndex(repoName string, index *re
}
func (i versionedRegistry) loadAddon(ctx context.Context, name, version string) (*WholeAddonPackage, error) {
-versions, err := i.h.ListVersions(i.url, name, false)
+versions, err := i.h.ListVersions(i.url, name, false, nil)
if err != nil {
return nil, err
}
@@ -131,7 +131,7 @@ func (i versionedRegistry) loadAddon(ctx context.Context, name, version string)
return nil, fmt.Errorf("specified version %s does not exist", version)
}
for _, chartURL := range addonVersion.URLs {
-archive, err := common.HTTPGet(ctx, chartURL)
+archive, err := common.HTTPGetWithOption(ctx, chartURL, nil)
if err != nil {
continue
}


@@ -0,0 +1,100 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package collect
import (
"context"
"fmt"
"math/rand"
"testing"
"time"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"k8s.io/client-go/rest"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/envtest"
"github.com/oam-dev/kubevela/pkg/apiserver/clients"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore/kubeapi"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore/mongodb"
"github.com/oam-dev/kubevela/pkg/utils/common"
)
var cfg *rest.Config
var k8sClient client.Client
var testEnv *envtest.Environment
func TestCalculateJob(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Calculate systemInfo cronJob")
}
var _ = BeforeSuite(func(done Done) {
rand.Seed(time.Now().UnixNano())
By("bootstrapping test environment")
testEnv = &envtest.Environment{
ControlPlaneStartTimeout: time.Minute * 3,
ControlPlaneStopTimeout: time.Minute,
UseExistingCluster: pointer.BoolPtr(false),
CRDDirectoryPaths: []string{"../../../charts/vela-core/crds"},
}
By("start kube test env")
var err error
cfg, err = testEnv.Start()
Expect(err).Should(BeNil())
Expect(cfg).ToNot(BeNil())
By("new kube client")
cfg.Timeout = time.Minute * 2
k8sClient, err = client.New(cfg, client.Options{Scheme: common.Scheme})
Expect(err).Should(BeNil())
Expect(k8sClient).ToNot(BeNil())
By("new kube client success")
clients.SetKubeClient(k8sClient)
Expect(err).Should(BeNil())
close(done)
}, 240)
var _ = AfterSuite(func() {
By("tearing down the test environment")
err := testEnv.Stop()
Expect(err).ToNot(HaveOccurred())
})
func NewDatastore(cfg datastore.Config) (ds datastore.DataStore, err error) {
switch cfg.Type {
case "mongodb":
ds, err = mongodb.New(context.Background(), cfg)
if err != nil {
return nil, fmt.Errorf("create mongodb datastore instance failure %w", err)
}
case "kubeapi":
ds, err = kubeapi.New(context.Background(), cfg)
if err != nil {
return nil, fmt.Errorf("create kubeapi datastore instance failure %w", err)
}
default:
return nil, fmt.Errorf("unsupported datastore type %s", cfg.Type)
}
return ds, nil
}


@@ -0,0 +1,331 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package collect
import (
"context"
"sort"
"time"
client2 "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/apiserver/clients"
"github.com/oam-dev/kubevela/pkg/multicluster"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore"
"github.com/oam-dev/kubevela/pkg/apiserver/log"
"github.com/oam-dev/kubevela/pkg/apiserver/model"
"github.com/robfig/cron/v3"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/util/retry"
)
// TopKFrequent is the number of most-frequent component or trait definitions to report
var TopKFrequent = 5
// CrontabSpec the cron spec of job running
var CrontabSpec = "0 0 * * *"
// maximum retries is 5, initial duration is 1 minute
var waitBackOff = wait.Backoff{
Steps: 5,
Duration: 1 * time.Minute,
Factor: 5.0,
Jitter: 0.1,
}
// InfoCalculateCronJob is the cronJob to calculate the system info store in db
type InfoCalculateCronJob struct {
ds datastore.DataStore
}
// StartCalculatingInfoCronJob will start the system info calculating job.
func StartCalculatingInfoCronJob(ds datastore.DataStore) {
i := InfoCalculateCronJob{
ds: ds,
}
// run calculate job in 0:00 of every day
i.start(CrontabSpec)
}
func (i InfoCalculateCronJob) start(cronSpec string) {
c := cron.New(cron.WithChain(
// don't let job panic crash whole api-server process
cron.Recover(cron.DefaultLogger),
))
// ignore the entry ID and error: the cron spec is hard-coded, so it cannot fail to parse
_, _ = c.AddFunc(cronSpec, func() {
// ExponentialBackoff retry this job
err := retry.OnError(waitBackOff, func(err error) bool {
// always retry
return true
}, func() error {
if err := i.run(); err != nil {
log.Logger.Errorf("Failed to calculate systemInfo, will try again in several minutes, error: %v", err)
return err
}
log.Logger.Info("Successfully calculated systemInfo")
return nil
})
if err != nil {
log.Logger.Errorf("After 5 tries the calculating cronJob failed: %v", err)
}
})
c.Start()
}
func (i InfoCalculateCronJob) run() error {
ctx := context.Background()
systemInfo := model.SystemInfo{}
e, err := i.ds.List(ctx, &systemInfo, &datastore.ListOptions{})
if err != nil {
return err
}
// no systemInfo means velaux has not sent a get request yet, so skip the calculate job
if len(e) == 0 {
return nil
}
info, ok := e[0].(*model.SystemInfo)
if !ok {
return nil
}
// if collection is disabled, skip the calculate job
if !info.EnableCollection {
return nil
}
if err := i.calculateAndUpdate(ctx, *info); err != nil {
return err
}
return nil
}
func (i InfoCalculateCronJob) calculateAndUpdate(ctx context.Context, systemInfo model.SystemInfo) error {
appCount, topKComp, topKTrait, topWorkflowStep, topKPolicy, err := i.calculateAppInfo(ctx)
if err != nil {
return err
}
enabledAddon, err := i.calculateAddonInfo(ctx)
if err != nil {
return err
}
clusterCount, err := i.calculateClusterInfo(ctx)
if err != nil {
return err
}
statisticInfo := model.StatisticInfo{
AppCount: genCountInfo(appCount),
TopKCompDef: topKComp,
TopKTraitDef: topKTrait,
TopKWorkflowStepDef: topWorkflowStep,
TopKPolicyDef: topKPolicy,
ClusterCount: genClusterCountInfo(clusterCount),
EnabledAddon: enabledAddon,
UpdateTime: time.Now(),
}
systemInfo.StatisticInfo = statisticInfo
if err := i.ds.Put(ctx, &systemInfo); err != nil {
return err
}
return nil
}
func (i InfoCalculateCronJob) calculateAppInfo(ctx context.Context) (int, []string, []string, []string, []string, error) {
var err error
var appCount int
compDef := map[string]int{}
traitDef := map[string]int{}
workflowDef := map[string]int{}
policyDef := map[string]int{}
var app = model.Application{}
entities, err := i.ds.List(ctx, &app, &datastore.ListOptions{})
if err != nil {
return 0, nil, nil, nil, nil, err
}
for _, entity := range entities {
appModel, ok := entity.(*model.Application)
if !ok {
continue
}
appCount++
comp := model.ApplicationComponent{
AppPrimaryKey: appModel.Name,
}
comps, err := i.ds.List(ctx, &comp, &datastore.ListOptions{})
if err != nil {
return 0, nil, nil, nil, nil, err
}
for _, e := range comps {
c, ok := e.(*model.ApplicationComponent)
if !ok {
continue
}
compDef[c.Type]++
for _, t := range c.Traits {
traitDef[t.Type]++
}
}
workflow := model.Workflow{
AppPrimaryKey: app.PrimaryKey(),
}
workflows, err := i.ds.List(ctx, &workflow, &datastore.ListOptions{})
if err != nil {
return 0, nil, nil, nil, nil, err
}
for _, e := range workflows {
w, ok := e.(*model.Workflow)
if !ok {
continue
}
for _, step := range w.Steps {
workflowDef[step.Type]++
}
}
policy := model.ApplicationPolicy{
AppPrimaryKey: app.PrimaryKey(),
}
policies, err := i.ds.List(ctx, &policy, &datastore.ListOptions{})
if err != nil {
return 0, nil, nil, nil, nil, err
}
for _, e := range policies {
p, ok := e.(*model.ApplicationPolicy)
if !ok {
continue
}
policyDef[p.Type]++
}
}
return appCount, topKFrequent(compDef, TopKFrequent), topKFrequent(traitDef, TopKFrequent), topKFrequent(workflowDef, TopKFrequent), topKFrequent(policyDef, TopKFrequent), nil
}
func (i InfoCalculateCronJob) calculateAddonInfo(ctx context.Context) (map[string]string, error) {
client, err := clients.GetKubeClient()
if err != nil {
return nil, err
}
apps := &v1beta1.ApplicationList{}
if err := client.List(ctx, apps, client2.InNamespace(types.DefaultKubeVelaNS), client2.HasLabels{oam.LabelAddonName}); err != nil {
return nil, err
}
res := map[string]string{}
for _, application := range apps.Items {
if addonName := application.Labels[oam.LabelAddonName]; addonName != "" {
var status string
switch application.Status.Phase {
case common.ApplicationRunning:
status = "enabled"
case common.ApplicationDeleting:
status = "disabling"
default:
status = "enabling"
}
res[addonName] = status
}
}
return res, nil
}
func (i InfoCalculateCronJob) calculateClusterInfo(ctx context.Context) (int, error) {
client, err := clients.GetKubeClient()
if err != nil {
return 0, err
}
cs, err := multicluster.ListVirtualClusters(ctx, client)
if err != nil {
return 0, err
}
return len(cs), nil
}
type defPair struct {
name string
count int
}
func topKFrequent(defs map[string]int, k int) []string {
var pairs []defPair
var res []string
for name, num := range defs {
pairs = append(pairs, defPair{name: name, count: num})
}
sort.Slice(pairs, func(i, j int) bool {
return pairs[i].count > pairs[j].count
})
i := 0
for _, pair := range pairs {
res = append(res, pair.name)
i++
if i == k {
break
}
}
return res
}
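The selection above sorts all definition/count pairs and truncates to the first k names. A self-contained sketch of the same idea (`topK` is an illustrative name); note it uses a strict `>` comparator, which is the valid form of ordering for `sort.Slice`, and that ties are nondeterministic because Go map iteration order is random:

```go
package main

import (
	"fmt"
	"sort"
)

// topK returns the k most frequent keys of counts, most frequent first.
func topK(counts map[string]int, k int) []string {
	type pair struct {
		name  string
		count int
	}
	pairs := make([]pair, 0, len(counts))
	for name, n := range counts {
		pairs = append(pairs, pair{name, n})
	}
	// strict ">" keeps the comparator a proper ordering for sort.Slice
	sort.Slice(pairs, func(i, j int) bool { return pairs[i].count > pairs[j].count })
	res := make([]string, 0, k)
	for _, p := range pairs {
		if len(res) == k {
			break
		}
		res = append(res, p.name)
	}
	return res
}

func main() {
	defs := map[string]int{"rollout": 4, "patch": 3, "expose": 6}
	fmt.Println(topK(defs, 2)) // [expose rollout]
}
```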
func genCountInfo(num int) string {
switch {
case num < 10:
return "<10"
case num < 50:
return "<50"
case num < 100:
return "<100"
case num < 500:
return "<500"
case num < 2000:
return "<2000"
case num < 5000:
return "<5000"
case num < 10000:
return "<10000"
default:
return ">=10000"
}
}
func genClusterCountInfo(num int) string {
switch {
case num < 3:
return "<3"
case num < 10:
return "<10"
case num < 50:
return "<50"
default:
return ">=50"
}
}
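`genCountInfo` and `genClusterCountInfo` coarsen raw counts into range labels so the stored statistics never reveal exact numbers. The same logic can be written table-driven; `bucket` and the limits slice are illustrative names, with thresholds copied from `genCountInfo` above:

```go
package main

import "fmt"

// bucket maps a raw count onto a coarse range label; the last limit
// doubles as the ">=" boundary for everything above it.
func bucket(n int, limits []int) string {
	for _, lim := range limits {
		if n < lim {
			return fmt.Sprintf("<%d", lim)
		}
	}
	return fmt.Sprintf(">=%d", limits[len(limits)-1])
}

func main() {
	appLimits := []int{10, 50, 100, 500, 2000, 5000, 10000}
	fmt.Println(bucket(3, appLimits))     // <10
	fmt.Println(bucket(350, appLimits))   // <500
	fmt.Println(bucket(30000, appLimits)) // >=10000
}
```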


@@ -0,0 +1,273 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package collect
import (
"context"
"errors"
"testing"
"time"
"github.com/onsi/gomega/format"
"gotest.tools/assert"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore"
"github.com/oam-dev/kubevela/pkg/apiserver/model"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/oam/util"
)
var _ = Describe("Test calculate cronJob", func() {
var (
ds datastore.DataStore
testProject string
i InfoCalculateCronJob
ctx = context.Background()
)
mockDataInDs := func() {
app1 := model.Application{BaseModel: model.BaseModel{CreateTime: time.Now()}, Name: "app1", Project: testProject}
app2 := model.Application{BaseModel: model.BaseModel{CreateTime: time.Now()}, Name: "app2", Project: testProject}
trait1 := model.ApplicationTrait{Type: "rollout"}
trait2 := model.ApplicationTrait{Type: "expose"}
trait3 := model.ApplicationTrait{Type: "rollout"}
trait4 := model.ApplicationTrait{Type: "patch"}
trait5 := model.ApplicationTrait{Type: "patch"}
trait6 := model.ApplicationTrait{Type: "rollout"}
appComp1 := model.ApplicationComponent{AppPrimaryKey: app1.PrimaryKey(), Name: "comp1", Type: "helm", Traits: []model.ApplicationTrait{trait1, trait4}}
appComp2 := model.ApplicationComponent{AppPrimaryKey: app2.PrimaryKey(), Name: "comp2", Type: "webservice", Traits: []model.ApplicationTrait{trait3}}
appComp3 := model.ApplicationComponent{AppPrimaryKey: app2.PrimaryKey(), Name: "comp3", Type: "webservice", Traits: []model.ApplicationTrait{trait2, trait5, trait6}}
Expect(ds.Add(ctx, &app1)).Should(SatisfyAny(BeNil(), DataExistMatcher{}))
Expect(ds.Add(ctx, &app2)).Should(SatisfyAny(BeNil(), DataExistMatcher{}))
Expect(ds.Add(ctx, &appComp1)).Should(SatisfyAny(BeNil(), DataExistMatcher{}))
Expect(ds.Add(ctx, &appComp2)).Should(SatisfyAny(BeNil(), DataExistMatcher{}))
Expect(ds.Add(ctx, &appComp3)).Should(SatisfyAny(BeNil(), DataExistMatcher{}))
Expect(k8sClient.Create(ctx, &v1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "vela-system"}})).Should(SatisfyAny(BeNil(), util.AlreadyExistMatcher{}))
Expect(k8sClient.Create(ctx, &v1beta1.Application{ObjectMeta: metav1.ObjectMeta{Namespace: "vela-system", Name: "addon-fluxcd", Labels: map[string]string{oam.LabelAddonName: "fluxcd"}}, Spec: v1beta1.ApplicationSpec{
Components: []common.ApplicationComponent{},
}})).Should(SatisfyAny(BeNil(), util.AlreadyExistMatcher{}))
Expect(k8sClient.Create(ctx, &v1beta1.Application{ObjectMeta: metav1.ObjectMeta{Namespace: "vela-system", Name: "addon-rollout", Labels: map[string]string{oam.LabelAddonName: "rollout"}}, Spec: v1beta1.ApplicationSpec{
Components: []common.ApplicationComponent{},
}})).Should(SatisfyAny(BeNil(), util.AlreadyExistMatcher{}))
}
BeforeEach(func() {
var err error
ds, err = NewDatastore(datastore.Config{Type: "kubeapi", Database: "target-test-kubevela"})
Expect(ds).ShouldNot(BeNil())
Expect(err).Should(BeNil())
testProject = "test-cronjob-project"
mockDataInDs()
i = InfoCalculateCronJob{
ds: ds,
}
systemInfo := model.SystemInfo{InstallID: "test-id", EnableCollection: true}
Expect(ds.Add(ctx, &systemInfo)).Should(SatisfyAny(BeNil(), DataExistMatcher{}))
})
It("Test calculate app Info", func() {
appNum, topKCom, topKTrait, _, _, err := i.calculateAppInfo(ctx)
Expect(err).Should(BeNil())
Expect(appNum).Should(BeEquivalentTo(2))
Expect(topKCom).Should(BeEquivalentTo([]string{"webservice", "helm"}))
Expect(topKTrait).Should(BeEquivalentTo([]string{"rollout", "patch", "expose"}))
})
It("Test calculate addon Info", func() {
enabledAddon, err := i.calculateAddonInfo(ctx)
Expect(err).Should(BeNil())
Expect(enabledAddon).Should(BeEquivalentTo(map[string]string{
"fluxcd": "enabling",
"rollout": "enabling",
}))
})
It("Test calculate cluster Info", func() {
clusterNum, err := i.calculateClusterInfo(ctx)
Expect(err).Should(BeNil())
Expect(clusterNum).Should(BeEquivalentTo(1))
})
It("Test calculateAndUpdate func", func() {
systemInfo := model.SystemInfo{}
es, err := ds.List(ctx, &systemInfo, &datastore.ListOptions{})
Expect(err).Should(BeNil())
Expect(len(es)).Should(BeEquivalentTo(1))
info, ok := es[0].(*model.SystemInfo)
Expect(ok).Should(BeTrue())
Expect(info.InstallID).Should(BeEquivalentTo("test-id"))
Expect(i.calculateAndUpdate(ctx, *info)).Should(BeNil())
systemInfo = model.SystemInfo{}
es, err = ds.List(ctx, &systemInfo, &datastore.ListOptions{})
Expect(err).Should(BeNil())
Expect(len(es)).Should(BeEquivalentTo(1))
info, ok = es[0].(*model.SystemInfo)
Expect(ok).Should(BeTrue())
Expect(info.InstallID).Should(BeEquivalentTo("test-id"))
Expect(info.StatisticInfo.AppCount).Should(BeEquivalentTo("<10"))
Expect(info.StatisticInfo.ClusterCount).Should(BeEquivalentTo("<3"))
Expect(info.StatisticInfo.TopKCompDef).Should(BeEquivalentTo([]string{"webservice", "helm"}))
Expect(info.StatisticInfo.TopKTraitDef).Should(BeEquivalentTo([]string{"rollout", "patch", "expose"}))
Expect(info.StatisticInfo.EnabledAddon).Should(BeEquivalentTo(map[string]string{
"fluxcd": "enabling",
"rollout": "enabling",
}))
})
It("Test run func", func() {
app3 := model.Application{BaseModel: model.BaseModel{CreateTime: time.Now()}, Name: "app3", Project: testProject}
Expect(ds.Add(ctx, &app3)).Should(BeNil())
systemInfo := model.SystemInfo{InstallID: "test-id", EnableCollection: false}
Expect(ds.Put(ctx, &systemInfo)).Should(BeNil())
Expect(i.run()).Should(BeNil())
})
})
func TestGenCountInfo(t *testing.T) {
testcases := []struct {
count int
res string
}{
{
count: 3,
res: "<10",
},
{
count: 14,
res: "<50",
},
{
count: 80,
res: "<100",
},
{
count: 350,
res: "<500",
},
{
count: 1800,
res: "<2000",
},
{
count: 4000,
res: "<5000",
},
{
count: 9000,
res: "<10000",
},
{
count: 30000,
res: ">=10000",
},
}
for _, testcase := range testcases {
assert.Equal(t, genCountInfo(testcase.count), testcase.res)
}
}
func TestGenClusterCountInfo(t *testing.T) {
testcases := []struct {
count int
res string
}{
{
count: 2,
res: "<3",
},
{
count: 7,
res: "<10",
},
{
count: 34,
res: "<50",
},
{
count: 100,
res: ">=50",
},
}
for _, testcase := range testcases {
assert.Equal(t, genClusterCountInfo(testcase.count), testcase.res)
}
}
func TestTopKFrequent(t *testing.T) {
testCases := []struct {
def map[string]int
k int
res []string
}{
{
def: map[string]int{
"rollout": 4,
"patch": 3,
"expose": 6,
},
k: 3,
res: []string{"expose", "rollout", "patch"},
},
{
// just return top2
def: map[string]int{
"rollout": 4,
"patch": 3,
"expose": 6,
},
k: 2,
res: []string{"expose", "rollout"},
},
}
for _, testCase := range testCases {
assert.DeepEqual(t, topKFrequent(testCase.def, testCase.k), testCase.res)
}
}
type DataExistMatcher struct{}
// Match matches error.
func (matcher DataExistMatcher) Match(actual interface{}) (success bool, err error) {
if actual == nil {
return false, nil
}
actualError := actual.(error)
return errors.Is(actualError, datastore.ErrRecordExist), nil
}
// FailureMessage builds an error message.
func (matcher DataExistMatcher) FailureMessage(actual interface{}) (message string) {
return format.Message(actual, "to be already exist")
}
// NegatedFailureMessage builds an error message.
func (matcher DataExistMatcher) NegatedFailureMessage(actual interface{}) (message string) {
return format.Message(actual, "not to be already exist")
}


@@ -16,6 +16,8 @@ limitations under the License.
package model
import "time"
func init() {
RegisterModel(&SystemInfo{})
}
@@ -30,10 +32,11 @@ const (
// SystemInfo systemInfo model
type SystemInfo struct {
BaseModel
-InstallID string `json:"installID"`
-EnableCollection bool `json:"enableCollection"`
-LoginType string `json:"loginType"`
-DexConfig DexConfig `json:"dexConfig,omitempty"`
+InstallID string `json:"installID"`
+EnableCollection bool `json:"enableCollection"`
+LoginType string `json:"loginType"`
+DexConfig DexConfig `json:"dexConfig,omitempty"`
+StatisticInfo StatisticInfo `json:"statisticInfo,omitempty"`
}
// DexConfig dex config
@@ -46,6 +49,18 @@ type DexConfig struct {
EnablePasswordDB bool `json:"enablePasswordDB"`
}
// StatisticInfo the system statistic info
type StatisticInfo struct {
ClusterCount string `json:"clusterCount,omitempty"`
AppCount string `json:"appCount,omitempty"`
EnabledAddon map[string]string `json:"enabledAddon,omitempty"`
TopKCompDef []string `json:"topKCompDef,omitempty"`
TopKTraitDef []string `json:"topKTraitDef,omitempty"`
TopKWorkflowStepDef []string `json:"topKWorkflowStepDef,omitempty"`
TopKPolicyDef []string `json:"topKPolicyDef,omitempty"`
UpdateTime time.Time `json:"updateTime,omitempty"`
}
// DexStorage dex storage
type DexStorage struct {
Type string `json:"type"`


@@ -194,13 +194,17 @@ type ConfigType struct {
// Config define the metadata of a config
type Config struct {
-ConfigType string `json:"configType"`
-Name string `json:"name"`
-Project string `json:"project"`
-Identifier string `json:"identifier"`
-Description string `json:"description"`
-CreatedTime *time.Time `json:"createdTime"`
-UpdatedTime *time.Time `json:"updatedTime"`
+ConfigType string `json:"configType"`
+ConfigTypeAlias string `json:"configTypeAlias"`
+Name string `json:"name"`
+Project string `json:"project"`
+Identifier string `json:"identifier"`
+Alias string `json:"alias"`
+Description string `json:"description"`
+CreatedTime *time.Time `json:"createdTime"`
+UpdatedTime *time.Time `json:"updatedTime"`
+ApplicationStatus common.ApplicationPhase `json:"applicationStatus"`
+Status string `json:"status"`
}
// AccessKeyRequest request parameters to access cloud provider
@@ -405,6 +409,7 @@ type CreateApplicationRequest struct {
type CreateConfigRequest struct {
Name string `json:"name" validate:"checkname"`
Alias string `json:"alias"`
Description string `json:"description"`
Project string `json:"project"`
ComponentType string `json:"componentType" validate:"checkname"`
Properties string `json:"properties,omitempty"`
@@ -1111,13 +1116,27 @@ type DetailRevisionResponse struct {
type SystemInfoResponse struct {
SystemInfo
SystemVersion SystemVersion `json:"systemVersion"`
StatisticInfo StatisticInfo `json:"statisticInfo,omitempty"`
}
// SystemInfo system info
type SystemInfo struct {
-InstallID string `json:"installID"`
-EnableCollection bool `json:"enableCollection"`
-LoginType string `json:"loginType"`
+PlatformID string `json:"platformID"`
+EnableCollection bool `json:"enableCollection"`
+LoginType string `json:"loginType"`
+InstallTime time.Time `json:"installTime,omitempty"`
}
// StatisticInfo generated by cronJob running in backend
type StatisticInfo struct {
ClusterCount string `json:"clusterCount,omitempty"`
AppCount string `json:"appCount,omitempty"`
EnableAddonList map[string]string `json:"enableAddonList,omitempty"`
ComponentDefinitionTopList []string `json:"componentDefinitionTopList,omitempty"`
TraitDefinitionTopList []string `json:"traitDefinitionTopList,omitempty"`
WorkflowDefinitionTopList []string `json:"workflowDefinitionTopList,omitempty"`
PolicyDefinitionTopList []string `json:"policyDefinitionTopList,omitempty"`
UpdateTime time.Time `json:"updateTime,omitempty"`
}
// SystemInfoRequest request by update SystemInfo


@@ -23,6 +23,8 @@ import (
"os"
"time"
"github.com/oam-dev/kubevela/pkg/apiserver/collect"
restfulspec "github.com/emicklei/go-restful-openapi/v2"
"github.com/emicklei/go-restful/v3"
"github.com/go-openapi/spec"
@@ -60,6 +62,9 @@ type Config struct {
// AddonCacheTime is how long between two cache operations
AddonCacheTime time.Duration
// DisableStatisticCronJob close the calculate system info cronJob
DisableStatisticCronJob bool
}
type leaderConfig struct {
@@ -141,6 +146,10 @@ func (s *restServer) setupLeaderElection() (*leaderelection.LeaderElectionConfig
Callbacks: leaderelection.LeaderCallbacks{
OnStartedLeading: func(ctx context.Context) {
go velasync.Start(ctx, s.dataStore, restCfg, s.usecases)
if !s.cfg.DisableStatisticCronJob {
collect.StartCalculatingInfoCronJob(s.dataStore)
}
// this process would block the whole process, any other handler should start before this func
s.runWorkflowRecordSync(ctx, s.cfg.LeaderConfig.Duration)
},
OnStoppedLeading: func() {


@@ -314,10 +314,10 @@ func (u *defaultAddonHandler) CreateAddonRegistry(ctx context.Context, req apis.
func convertAddonRegistry(r pkgaddon.Registry) *apis.AddonRegistry {
return &apis.AddonRegistry{
Name: r.Name,
-Git: r.Git,
-Gitee: r.Gitee,
+Git: r.Git.SafeCopy(),
+Gitee: r.Gitee.SafeCopy(),
OSS: r.OSS,
-Helm: r.Helm,
+Helm: r.Helm.SafeCopy(),
}
}
@@ -398,7 +398,8 @@ func (u *defaultAddonHandler) EnableAddon(ctx context.Context, name string, args
}
// wrap this error with special bcode
-if errors.Is(err, pkgaddon.ErrVersionMismatch) {
+if errors.As(err, &pkgaddon.VersionUnMatchError{}) {
log.Logger.Error(err)
return bcode.ErrAddonSystemVersionMismatch
}
// except `addon not found`, other errors should return directly
@@ -464,7 +465,7 @@ func (u *defaultAddonHandler) UpdateAddon(ctx context.Context, name string, args
}
// wrap this error with special bcode
-if errors.Is(err, pkgaddon.ErrVersionMismatch) {
+if errors.As(err, &pkgaddon.VersionUnMatchError{}) {
return bcode.ErrAddonSystemVersionMismatch
}
// except `addon not found`, other errors should return directly


@@ -33,18 +33,18 @@ import (
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/apimachinery/pkg/types"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/yaml"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
velatypes "github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/apiserver/clients"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore"
"github.com/oam-dev/kubevela/pkg/apiserver/log"
"github.com/oam-dev/kubevela/pkg/apiserver/model"
velatypes "github.com/oam-dev/kubevela/apis/types"
apisv1 "github.com/oam-dev/kubevela/pkg/apiserver/rest/apis/v1"
"github.com/oam-dev/kubevela/pkg/apiserver/rest/utils"
"github.com/oam-dev/kubevela/pkg/apiserver/rest/utils/bcode"
@@ -705,21 +705,9 @@ func (c *applicationUsecaseImpl) Deploy(ctx context.Context, app *model.Applicat
}
// sync configs to clusters
// TODO(zzxwill) need to check the type of the componentDefinition, if it is `Cloud`, skip the sync
-targets, err := listTarget(ctx, c.ds, app.Project, nil)
-if err != nil {
+if err := c.syncConfigs4Application(ctx, oamApp, app.Project, workflow.EnvName); err != nil {
return nil, err
}
-var clusterTargets []*model.ClusterTarget
-for i, t := range targets {
-if t.Cluster != nil {
-clusterTargets = append(clusterTargets, targets[i].Cluster)
-}
-}
-if err := SyncConfigs(ctx, c.kubeClient, app.Project, clusterTargets); err != nil {
-return nil, fmt.Errorf("sync config failure %w", err)
-}
// step2: check and create deploy event
if !req.Force {
@@ -809,6 +797,44 @@ func (c *applicationUsecaseImpl) Deploy(ctx context.Context, app *model.Applicat
}, nil
}
// sync configs to clusters
func (c *applicationUsecaseImpl) syncConfigs4Application(ctx context.Context, app *v1beta1.Application, projectName, envName string) error {
var areTerraformComponents = true
for _, m := range app.Spec.Components {
d := &v1beta1.ComponentDefinition{}
if err := c.kubeClient.Get(ctx, client.ObjectKey{Namespace: velatypes.DefaultKubeVelaNS, Name: m.Type}, d); err != nil {
klog.ErrorS(err, "failed to get config type", "ComponentDefinition", m.Type)
}
// check the type of the componentDefinition is Terraform
if d.Spec.Schematic != nil && d.Spec.Schematic.Terraform == nil {
areTerraformComponents = false
}
}
// skip configs sync
if areTerraformComponents {
return nil
}
env, err := c.envUsecase.GetEnv(ctx, envName)
if err != nil {
return err
}
var clusterTargets []*model.ClusterTarget
for _, t := range env.Targets {
target, err := c.targetUsecase.GetTarget(ctx, t)
if err != nil {
return err
}
if target.Cluster != nil {
clusterTargets = append(clusterTargets, target.Cluster)
}
}
if err := SyncConfigs(ctx, c.kubeClient, projectName, clusterTargets); err != nil {
return fmt.Errorf("sync config failure %w", err)
}
return nil
}
func (c *applicationUsecaseImpl) renderOAMApplication(ctx context.Context, appModel *model.Application, reqWorkflowName, version string) (*v1beta1.Application, error) {
// Priority 1: use the requested workflow for the release.
// Priority 2: use the default workflow for the release.


@@ -94,7 +94,6 @@ type authHandler interface {
}
type dexHandlerImpl struct {
-token *oauth2.Token
idToken *oidc.IDToken
ds datastore.DataStore
}
@@ -135,7 +134,6 @@ func (a *authenticationUsecaseImpl) newDexHandler(ctx context.Context, req apisv
return nil, err
}
return &dexHandlerImpl{
-token: token,
idToken: idToken,
ds: a.ds,
}, nil
@@ -183,11 +181,11 @@ func (a *authenticationUsecaseImpl) Login(ctx context.Context, loginReq apisv1.L
if userBase.Disabled {
return nil, bcode.ErrUserAlreadyDisabled
}
-accessToken, err := a.generateJWTToken(ctx, userBase.Name, GrantTypeAccess, time.Hour)
+accessToken, err := a.generateJWTToken(userBase.Name, GrantTypeAccess, time.Hour)
if err != nil {
return nil, err
}
-refreshToken, err := a.generateJWTToken(ctx, userBase.Name, GrantTypeRefresh, time.Hour*24)
+refreshToken, err := a.generateJWTToken(userBase.Name, GrantTypeRefresh, time.Hour*24)
if err != nil {
return nil, err
}
@@ -198,7 +196,7 @@ func (a *authenticationUsecaseImpl) Login(ctx context.Context, loginReq apisv1.L
}, nil
}
-func (a *authenticationUsecaseImpl) generateJWTToken(ctx context.Context, username, grantType string, expireDuration time.Duration) (string, error) {
+func (a *authenticationUsecaseImpl) generateJWTToken(username, grantType string, expireDuration time.Duration) (string, error) {
expire := time.Now().Add(expireDuration)
claims := model.CustomClaims{
StandardClaims: jwt.StandardClaims{
@@ -210,24 +208,7 @@ func (a *authenticationUsecaseImpl) generateJWTToken(ctx context.Context, userna
GrantType: grantType,
}
token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
-signed, err := a.getSignedKey(ctx)
-if err != nil {
-return "", err
-}
-return token.SignedString([]byte(signed))
-}
-func (a *authenticationUsecaseImpl) getSignedKey(ctx context.Context) (string, error) {
-if signedKey != "" {
-return signedKey, nil
-}
-info, err := a.sysUsecase.Get(ctx)
-if err != nil {
-return "", err
-}
-signedKey = info.InstallID
-return signedKey, nil
+return token.SignedString([]byte(signedKey))
}
func (a *authenticationUsecaseImpl) RefreshToken(ctx context.Context, refreshToken string) (*apisv1.RefreshTokenResponse, error) {
@@ -239,7 +220,7 @@ func (a *authenticationUsecaseImpl) RefreshToken(ctx context.Context, refreshTok
return nil, err
}
if claim.GrantType == GrantTypeRefresh {
accessToken, err := a.generateJWTToken(ctx, claim.Username, GrantTypeAccess, time.Hour)
accessToken, err := a.generateJWTToken(claim.Username, GrantTypeAccess, time.Hour)
if err != nil {
return nil, err
}
@@ -429,19 +410,18 @@ func (d *dexHandlerImpl) login(ctx context.Context) (*apisv1.UserBase, error) {
}
user := &model.User{Email: claims.Email}
userBase := &apisv1.UserBase{Email: claims.Email, Name: claims.Name}
users, err := d.ds.List(ctx, user, &datastore.ListOptions{})
if err != nil {
return nil, err
}
if len(users) > 0 {
u := users[0].(*model.User)
if u.Name != claims.Name {
u.Name = claims.Name
}
u.LastLoginTime = time.Now()
if err := d.ds.Put(ctx, u); err != nil {
return nil, err
}
userBase.Name = u.Name
} else if err := d.ds.Add(ctx, &model.User{
Email: claims.Email,
Name: claims.Name,
@@ -450,10 +430,7 @@ func (d *dexHandlerImpl) login(ctx context.Context) (*apisv1.UserBase, error) {
return nil, err
}
return &apisv1.UserBase{
Name: claims.Name,
Email: claims.Email,
}, nil
return userBase, nil
}
func (l *localHandlerImpl) login(ctx context.Context) (*apisv1.UserBase, error) {
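The hunks above drop the on-demand `getSignedKey` lookup and sign tokens directly with the preloaded `signedKey`. What `jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(key)` produces can be sketched with the standard library alone — a minimal sketch with hypothetical header/claims JSON (the real code serializes `model.CustomClaims`):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// signHS256 sketches the HS256 compact-JWT signing step:
// base64url(header) + "." + base64url(claims), HMAC-SHA256'd with the key.
func signHS256(headerJSON, claimsJSON, key []byte) string {
	enc := base64.RawURLEncoding
	signingInput := enc.EncodeToString(headerJSON) + "." + enc.EncodeToString(claimsJSON)
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(signingInput))
	return signingInput + "." + enc.EncodeToString(mac.Sum(nil))
}

func main() {
	// Hypothetical claims; the production code carries username and grantType
	// inside model.CustomClaims and keys on the install ID.
	header := []byte(`{"alg":"HS256","typ":"JWT"}`)
	claims := []byte(`{"username":"admin","grantType":"access"}`)
	fmt.Println(signHS256(header, claims, []byte("install-id-signed-key")))
}
```

Since the key is now read from the package-level `signedKey` rather than fetched per call, the `ctx` parameter on `generateJWTToken` becomes dead weight, which is why the signature shrinks in the diff.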

@@ -28,7 +28,6 @@ import (
"github.com/coreos/go-oidc"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"golang.org/x/oauth2"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -68,10 +67,6 @@ var _ = Describe("Test authentication usecase functions", func() {
})
defer patch.Reset()
dexHandler := dexHandlerImpl{
token: &oauth2.Token{
AccessToken: "access-token",
RefreshToken: "refresh-token",
},
idToken: testIDToken,
ds: ds,
}
@@ -86,6 +81,20 @@ var _ = Describe("Test authentication usecase functions", func() {
err = ds.Get(context.Background(), user)
Expect(err).Should(BeNil())
Expect(user.Email).Should(Equal("test@test.com"))
existUser := &model.User{
Name: "test",
}
err = ds.Delete(context.Background(), existUser)
Expect(err).Should(BeNil())
existUser.Name = "exist-user"
existUser.Email = "test@test.com"
err = ds.Add(context.Background(), existUser)
Expect(err).Should(BeNil())
resp, err = dexHandler.login(context.Background())
Expect(err).Should(BeNil())
Expect(resp.Email).Should(Equal("test@test.com"))
Expect(resp.Name).Should(Equal("exist-user"))
})
It("Test local login", func() {
@@ -175,6 +184,8 @@ var _ = Describe("Test authentication usecase functions", func() {
It("Test get dex config", func() {
_, err := authUsecase.GetDexConfig(context.Background())
Expect(err).Should(Equal(bcode.ErrInvalidDexConfig))
err = ds.Add(context.Background(), &model.User{Name: "admin", Email: "test@test.com"})
Expect(err).Should(BeNil())
_, err = sysUsecase.UpdateSystemInfo(context.Background(), apisv1.SystemInfoRequest{
LoginType: model.LoginTypeDex,
VelaAddress: "http://velaux.com",

@@ -20,15 +20,16 @@ import (
"context"
"encoding/json"
"fmt"
"time"
"strings"
set "github.com/deckarep/golang-set"
terraformtypes "github.com/oam-dev/terraform-controller/api/types"
"github.com/pkg/errors"
v1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"sigs.k8s.io/yaml"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
@@ -38,13 +39,17 @@ import (
"github.com/oam-dev/kubevela/pkg/apiserver/model"
apis "github.com/oam-dev/kubevela/pkg/apiserver/rest/apis/v1"
"github.com/oam-dev/kubevela/pkg/definition"
"github.com/oam-dev/kubevela/pkg/utils/config"
)
const (
definitionAlias = definition.UserPrefix + "alias.config.oam.dev"
definitionType = definition.UserPrefix + "type.config.oam.dev"
velaCoreConfig = "velacore-config"
configIsReady = "Ready"
configIsNotReady = "Not ready"
terraformProviderAlias = "Terraform Cloud Provider"
configSyncProjectPrefix = "config-sync"
)
// ConfigHandler handle CRUD of configs
@@ -78,7 +83,7 @@ type configUseCaseImpl struct {
func (u *configUseCaseImpl) ListConfigTypes(ctx context.Context, query string) ([]*apis.ConfigType, error) {
defs := &v1beta1.ComponentDefinitionList{}
if err := u.kubeClient.List(ctx, defs, client.InNamespace(types.DefaultKubeVelaNS),
client.MatchingLabels{definition.UserPrefix + "catalog.config.oam.dev": velaCoreConfig}); err != nil {
client.MatchingLabels{definition.UserPrefix + "catalog.config.oam.dev": types.VelaCoreConfig}); err != nil {
return nil, err
}
@@ -98,18 +103,21 @@ func (u *configUseCaseImpl) ListConfigTypes(ctx context.Context, query string) (
})
}
tfType := &apis.ConfigType{
Alias: "Terraform Cloud Provider",
Name: types.TerraformProvider,
}
definitions := make([]string, len(tfDefs))
if len(tfDefs) > 0 {
tfType := &apis.ConfigType{
Alias: terraformProviderAlias,
Name: types.TerraformProvider,
}
definitions := make([]string, len(tfDefs))
for i, tf := range tfDefs {
definitions[i] = tf.Name
}
tfType.Definitions = definitions
for i, tf := range tfDefs {
definitions[i] = tf.Name
}
tfType.Definitions = definitions
return append(configTypes, tfType), nil
return append(configTypes, tfType), nil
}
return configTypes, nil
}
// GetConfigType returns a config type
@@ -128,79 +136,59 @@ func (u *configUseCaseImpl) GetConfigType(ctx context.Context, configType string
}
func (u *configUseCaseImpl) CreateConfig(ctx context.Context, req apis.CreateConfigRequest) error {
app := v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: req.Name,
Namespace: types.DefaultKubeVelaNS,
Annotations: map[string]string{
types.AnnotationConfigAlias: req.Alias,
},
Labels: map[string]string{
model.LabelSourceOfTruth: model.FromInner,
types.LabelConfigCatalog: velaCoreConfig,
types.LabelConfigType: req.ComponentType,
types.LabelConfigProject: req.Project,
},
},
Spec: v1beta1.ApplicationSpec{
Components: []common.ApplicationComponent{
{
Name: req.Name,
Type: req.ComponentType,
Properties: &runtime.RawExtension{Raw: []byte(req.Properties)},
},
},
},
}
if err := u.kubeClient.Create(ctx, &app); err != nil {
return err
}
// try to check whether the underlying config secrets is successfully created
var succeeded bool
var configApp v1beta1.Application
for i := 0; i < 100; i++ {
if err := u.kubeClient.Get(ctx, client.ObjectKey{Namespace: types.DefaultKubeVelaNS, Name: req.Name}, &configApp); err == nil {
if configApp.Status.Phase == common.ApplicationRunning {
succeeded = true
break
}
p := req.Properties
// If the component is Terraform type, set the provider name same as the application name and the component name
if strings.HasPrefix(req.ComponentType, types.TerraformComponentPrefix) {
var properties map[string]interface{}
if err := json.Unmarshal([]byte(p), &properties); err != nil {
return errors.Wrapf(err, "unable to process the properties of %s", req.ComponentType)
}
time.Sleep(time.Second)
}
// clean up failed application
if !succeeded {
if err := u.kubeClient.Delete(ctx, &app); err != nil {
return err
properties["name"] = req.Name
tmp, err := json.Marshal(properties)
if err != nil {
return errors.Wrapf(err, "unable to process the properties of %s", req.ComponentType)
}
return errors.New("failed to create config")
p = string(tmp)
}
if succeeded && req.ComponentType == types.DexConnector {
return u.authenticationUseCase.UpdateDexConfig(ctx)
ui := config.UIParam{
Alias: req.Alias,
Description: req.Description,
Project: req.Project,
}
return nil
return config.CreateApplication(ctx, u.kubeClient, req.Name, req.ComponentType, p, ui)
}
func (u *configUseCaseImpl) GetConfigs(ctx context.Context, configType string) ([]*apis.Config, error) {
switch configType {
case types.TerraformProvider:
defs := &v1beta1.ComponentDefinitionList{}
if err := u.kubeClient.List(ctx, defs, client.InNamespace(types.DefaultKubeVelaNS),
client.MatchingLabels{
definition.UserPrefix + "catalog.config.oam.dev": velaCoreConfig,
definition.UserPrefix + "type.config.oam.dev": types.TerraformProvider,
}); err != nil {
providers, err := config.ListTerraformProviders(ctx, u.kubeClient)
if err != nil {
return nil, err
}
var configs []*apis.Config
for _, d := range defs.Items {
subConfigs, err := u.getConfigsByConfigType(ctx, d.Name)
if err != nil {
configs := make([]*apis.Config, len(providers))
for i, p := range providers {
var a v1beta1.Application
if err := u.kubeClient.Get(ctx, client.ObjectKey{Namespace: types.DefaultKubeVelaNS, Name: p.Name}, &a); err != nil {
if kerrors.IsNotFound(err) {
t := p.CreationTimestamp.Time
configs[i] = &apis.Config{
Name: p.Name,
CreatedTime: &t,
}
if p.Status.State == terraformtypes.ProviderIsReady {
configs[i].Status = configIsReady
} else {
configs[i].Status = configIsNotReady
}
continue
}
return nil, err
}
configs = append(configs, subConfigs...)
// If the application doesn't have any components, skip it as something wrong happened.
if !strings.HasPrefix(a.Labels[types.LabelConfigType], types.TerraformComponentPrefix) {
continue
}
configs[i] = retrieveConfigFromApplication(a, a.Labels[types.LabelConfigProject])
}
return configs, nil
@@ -215,7 +203,7 @@ func (u *configUseCaseImpl) getConfigsByConfigType(ctx context.Context, configTy
if err := u.kubeClient.List(ctx, apps, client.InNamespace(types.DefaultKubeVelaNS),
client.MatchingLabels{
model.LabelSourceOfTruth: model.FromInner,
types.LabelConfigCatalog: velaCoreConfig,
types.LabelConfigCatalog: types.VelaCoreConfig,
types.LabelConfigType: configType,
}); err != nil {
return nil, err
@@ -223,12 +211,7 @@ func (u *configUseCaseImpl) getConfigsByConfigType(ctx context.Context, configTy
configs := make([]*apis.Config, len(apps.Items))
for i, a := range apps.Items {
configs[i] = &apis.Config{
ConfigType: a.Labels[types.LabelConfigType],
Name: a.Name,
Project: a.Labels[types.LabelConfigProject],
CreatedTime: &(a.CreationTimestamp.Time),
}
configs[i] = retrieveConfigFromApplication(a, a.Labels[types.LabelConfigProject])
}
return configs, nil
}
@@ -250,11 +233,11 @@ func (u *configUseCaseImpl) GetConfig(ctx context.Context, configType, name stri
}
func (u *configUseCaseImpl) DeleteConfig(ctx context.Context, configType, name string) error {
var a = &v1beta1.Application{}
if err := u.kubeClient.Get(ctx, client.ObjectKey{Namespace: types.DefaultKubeVelaNS, Name: name}, a); err != nil {
return err
var isTerraformProvider bool
if strings.HasPrefix(configType, types.TerraformComponentPrefix) {
isTerraformProvider = true
}
return u.kubeClient.Delete(ctx, a)
return config.DeleteApplication(ctx, u.kubeClient, name, isTerraformProvider)
}
// ApplicationDeployTarget is the struct of application deploy target
@@ -265,13 +248,12 @@ type ApplicationDeployTarget struct {
// SyncConfigs will sync configs to working clusters
func SyncConfigs(ctx context.Context, k8sClient client.Client, project string, targets []*model.ClusterTarget) error {
name := fmt.Sprintf("config-sync-%s", project)
name := fmt.Sprintf("%s-%s", configSyncProjectPrefix, project)
// get all configs which can be synced to working clusters in the project
var secrets v1.SecretList
if err := k8sClient.List(ctx, &secrets, client.InNamespace(types.DefaultKubeVelaNS),
client.MatchingLabels{
types.LabelConfigCatalog: velaCoreConfig,
types.LabelConfigProject: project,
types.LabelConfigCatalog: types.VelaCoreConfig,
types.LabelConfigSyncToMultiCluster: "true",
}); err != nil {
return err
@@ -279,13 +261,19 @@ func SyncConfigs(ctx context.Context, k8sClient client.Client, project string, t
if len(secrets.Items) == 0 {
return nil
}
objects := make([]map[string]string, len(secrets.Items))
for i, s := range secrets.Items {
objects[i] = map[string]string{
"name": s.Name,
"resource": "secret",
var objects []map[string]string
for _, s := range secrets.Items {
if s.Labels[types.LabelConfigProject] == "" || s.Labels[types.LabelConfigProject] == project {
objects = append(objects, map[string]string{
"name": s.Name,
"resource": "secret",
})
}
}
if len(objects) == 0 {
klog.InfoS("no configs need to sync to working clusters", "project", project)
return nil
}
objectsBytes, err := json.Marshal(map[string][]map[string]string{"objects": objects})
if err != nil {
return err
@@ -297,6 +285,26 @@ func SyncConfigs(ctx context.Context, k8sClient client.Client, project string, t
return err
}
// config sync application doesn't exist, create one
clusterTargets := convertClusterTargets(targets)
if len(clusterTargets) == 0 {
errMsg := "no policy (no targets found) to sync configs"
klog.InfoS(errMsg, "project", project)
return errors.New(errMsg)
}
policies := make([]v1beta1.AppPolicy, len(clusterTargets))
for i, t := range clusterTargets {
properties, err := json.Marshal(t)
if err != nil {
return err
}
policies[i] = v1beta1.AppPolicy{
Type: "topology",
Name: t.Namespace,
Properties: &runtime.RawExtension{
Raw: properties,
},
}
}
scratch := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
@@ -304,7 +312,7 @@ func SyncConfigs(ctx context.Context, k8sClient client.Client, project string, t
Namespace: types.DefaultKubeVelaNS,
Labels: map[string]string{
model.LabelSourceOfTruth: model.FromInner,
types.LabelConfigCatalog: velaCoreConfig,
types.LabelConfigCatalog: types.VelaCoreConfig,
types.LabelConfigProject: project,
},
},
@@ -316,11 +324,10 @@ func SyncConfigs(ctx context.Context, k8sClient client.Client, project string, t
Properties: &runtime.RawExtension{Raw: objectsBytes},
},
},
Policies: policies,
},
}
if err := k8sClient.Create(ctx, scratch); err != nil {
return err
}
return k8sClient.Create(ctx, scratch)
}
// config sync application exists, update it
app.Spec.Components = []common.ApplicationComponent{
@@ -340,6 +347,11 @@ func SyncConfigs(ctx context.Context, k8sClient client.Client, project string, t
}
mergedTarget := mergeTargets(currentTargets, targets)
if len(mergedTarget) == 0 {
errMsg := "no policy (no targets found) to sync configs"
klog.InfoS(errMsg, "project", project)
return errors.New(errMsg)
}
mergedPolicies := make([]v1beta1.AppPolicy, len(mergedTarget))
for i, t := range mergedTarget {
properties, err := json.Marshal(t)
@@ -355,21 +367,27 @@ func SyncConfigs(ctx context.Context, k8sClient client.Client, project string, t
}
}
app.Spec.Policies = mergedPolicies
out, _ := yaml.Marshal(app)
fmt.Println(string(out))
return k8sClient.Update(ctx, app)
}
func mergeTargets(currentTargets []ApplicationDeployTarget, targets []*model.ClusterTarget) []ApplicationDeployTarget {
var mergedTargets []ApplicationDeployTarget
var (
mergedTargets []ApplicationDeployTarget
// make sure the clusters of target with same namespace are merged
clusterTargets = convertClusterTargets(targets)
)
for _, c := range currentTargets {
var hasSameNamespace bool
for _, t := range targets {
for _, t := range clusterTargets {
if c.Namespace == t.Namespace {
hasSameNamespace = true
clusters := append(c.Clusters, t.ClusterName)
mergedTargets = append(mergedTargets, ApplicationDeployTarget{Namespace: c.Namespace, Clusters: clusters})
clusters := set.NewSetFromSlice(stringToInterfaceSlice(t.Clusters))
for _, cluster := range c.Clusters {
clusters.Add(cluster)
}
mergedTargets = append(mergedTargets, ApplicationDeployTarget{Namespace: c.Namespace,
Clusters: interfaceToStringSlice(clusters.ToSlice())})
}
}
if !hasSameNamespace {
@@ -377,7 +395,7 @@ func mergeTargets(currentTargets []ApplicationDeployTarget, targets []*model.Clu
}
}
for _, t := range targets {
for _, t := range clusterTargets {
var hasSameNamespace bool
for _, c := range currentTargets {
if c.Namespace == t.Namespace {
@@ -385,9 +403,75 @@ func mergeTargets(currentTargets []ApplicationDeployTarget, targets []*model.Clu
}
}
if !hasSameNamespace {
mergedTargets = append(mergedTargets, ApplicationDeployTarget{Namespace: t.Namespace, Clusters: []string{t.ClusterName}})
mergedTargets = append(mergedTargets, t)
}
}
return mergedTargets
}
func convertClusterTargets(targets []*model.ClusterTarget) []ApplicationDeployTarget {
type Target struct {
Namespace string `json:"namespace"`
Clusters []interface{} `json:"clusters"`
}
var (
clusterTargets []Target
namespaceSet = set.NewSet()
)
for i := 0; i < len(targets); i++ {
clusters := set.NewSet(targets[i].ClusterName)
for j := i + 1; j < len(targets); j++ {
if targets[i].Namespace == targets[j].Namespace {
clusters.Add(targets[j].ClusterName)
}
}
if namespaceSet.Contains(targets[i].Namespace) {
continue
}
clusterTargets = append(clusterTargets, Target{
Namespace: targets[i].Namespace,
Clusters: clusters.ToSlice(),
})
namespaceSet.Add(targets[i].Namespace)
}
t := make([]ApplicationDeployTarget, len(clusterTargets))
for i, ct := range clusterTargets {
t[i] = ApplicationDeployTarget{
Namespace: ct.Namespace,
Clusters: interfaceToStringSlice(ct.Clusters),
}
}
return t
}
func interfaceToStringSlice(i []interface{}) []string {
var s []string
for _, v := range i {
s = append(s, v.(string))
}
return s
}
func stringToInterfaceSlice(i []string) []interface{} {
var s []interface{}
for _, v := range i {
s = append(s, v)
}
return s
}
// destroySyncConfigsApp will delete the application which is used to sync configs
func destroySyncConfigsApp(ctx context.Context, k8sClient client.Client, project string) error {
name := fmt.Sprintf("%s-%s", configSyncProjectPrefix, project)
var app = &v1beta1.Application{}
if err := k8sClient.Get(ctx, client.ObjectKey{Namespace: types.DefaultKubeVelaNS, Name: name}, app); err != nil {
if !kerrors.IsNotFound(err) {
return err
}
}
return k8sClient.Delete(ctx, app)
}
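`convertClusterTargets` above deduplicates cluster names per namespace through the `golang-set` dependency plus the `interfaceToStringSlice`/`stringToInterfaceSlice` helpers. The same grouping can be sketched with a plain map used as a set — `ClusterTarget` here is a hypothetical local stand-in for `model.ClusterTarget`:

```go
package main

import (
	"fmt"
	"sort"
)

// ClusterTarget is a hypothetical copy of model.ClusterTarget's shape.
type ClusterTarget struct {
	ClusterName string
	Namespace   string
}

// groupByNamespace collapses targets to one entry per namespace, with the
// cluster names deduplicated and sorted for deterministic output.
func groupByNamespace(targets []ClusterTarget) map[string][]string {
	seen := map[string]map[string]struct{}{}
	for _, t := range targets {
		if seen[t.Namespace] == nil {
			seen[t.Namespace] = map[string]struct{}{}
		}
		seen[t.Namespace][t.ClusterName] = struct{}{}
	}
	out := make(map[string][]string, len(seen))
	for ns, clusters := range seen {
		for c := range clusters {
			out[ns] = append(out[ns], c)
		}
		sort.Strings(out[ns])
	}
	return out
}

func main() {
	fmt.Println(groupByNamespace([]ClusterTarget{
		{Namespace: "n3", ClusterName: "c4"},
		{Namespace: "n1", ClusterName: "c5"},
		{Namespace: "n3", ClusterName: "c5"},
	}))
}
```

Using `map[string]struct{}` avoids the `[]interface{}` round-trips the diff needs for `golang-set`, at the cost of sorting explicitly since set iteration order is undefined either way.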

@@ -18,20 +18,27 @@ package usecase
import (
"context"
"encoding/json"
"sort"
"strings"
"testing"
corev1 "k8s.io/api/core/v1"
"time"
. "github.com/agiledragon/gomonkey/v2"
terraformtypes "github.com/oam-dev/terraform-controller/api/types"
terraformapi "github.com/oam-dev/terraform-controller/api/v1beta1"
"gotest.tools/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/apiserver/model"
apis "github.com/oam-dev/kubevela/pkg/apiserver/rest/apis/v1"
"github.com/oam-dev/kubevela/pkg/definition"
"github.com/oam-dev/kubevela/pkg/multicluster"
@@ -50,7 +57,7 @@ func TestListConfigTypes(t *testing.T) {
Name: "def1",
Namespace: types.DefaultKubeVelaNS,
Labels: map[string]string{
definition.UserPrefix + "catalog.config.oam.dev": velaCoreConfig,
definition.UserPrefix + "catalog.config.oam.dev": types.VelaCoreConfig,
definitionType: types.TerraformProvider,
},
},
@@ -67,7 +74,7 @@ func TestListConfigTypes(t *testing.T) {
definitionAlias: "Def2",
},
Labels: map[string]string{
definition.UserPrefix + "catalog.config.oam.dev": velaCoreConfig,
definition.UserPrefix + "catalog.config.oam.dev": types.VelaCoreConfig,
},
},
}
@@ -147,7 +154,7 @@ func TestGetConfigType(t *testing.T) {
definitionAlias: "Def2",
},
Labels: map[string]string{
definition.UserPrefix + "catalog.config.oam.dev": velaCoreConfig,
definition.UserPrefix + "catalog.config.oam.dev": types.VelaCoreConfig,
},
},
}
@@ -233,6 +240,11 @@ func TestCreateConfig(t *testing.T) {
ctx := context.Background()
properties, err := json.Marshal(map[string]interface{}{
"name": "default",
})
assert.NilError(t, err)
testcases := []struct {
name string
args args
@@ -249,6 +261,18 @@ func TestCreateConfig(t *testing.T) {
},
},
},
{
name: "create terraform-alibaba config",
args: args{
h: h,
req: apis.CreateConfigRequest{
Name: "n1",
ComponentType: "terraform-alibaba",
Project: "p1",
Properties: string(properties),
},
},
},
}
for _, tc := range testcases {
@@ -265,37 +289,53 @@ func TestGetConfigs(t *testing.T) {
s := runtime.NewScheme()
v1beta1.AddToScheme(s)
corev1.AddToScheme(s)
def1 := &v1beta1.ComponentDefinition{
TypeMeta: metav1.TypeMeta{
Kind: "ComponentDefinition",
APIVersion: "core.oam.dev/v1beta1",
},
terraformapi.AddToScheme(s)
createdTime, _ := time.Parse(time.UnixDate, "Wed Apr 7 11:06:39 PST 2022")
provider1 := &terraformapi.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: "def1",
Namespace: types.DefaultKubeVelaNS,
Labels: map[string]string{
definition.UserPrefix + "catalog.config.oam.dev": velaCoreConfig,
definitionType: types.TerraformProvider,
},
Name: "provider1",
Namespace: "default",
CreationTimestamp: metav1.NewTime(createdTime),
},
Status: terraformapi.ProviderStatus{
State: terraformtypes.ProviderIsReady,
},
}
def2 := &v1beta1.ComponentDefinition{
TypeMeta: metav1.TypeMeta{
Kind: "ComponentDefinition",
APIVersion: "core.oam.dev/v1beta1",
},
provider2 := &terraformapi.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: "def2",
Namespace: types.DefaultKubeVelaNS,
Annotations: map[string]string{
definitionAlias: "Def2",
},
Labels: map[string]string{
definition.UserPrefix + "catalog.config.oam.dev": velaCoreConfig,
},
Name: "provider2",
Namespace: "default",
CreationTimestamp: metav1.NewTime(createdTime),
},
Status: terraformapi.ProviderStatus{
State: terraformtypes.ProviderIsNotReady,
},
}
k8sClient := fake.NewClientBuilder().WithScheme(s).WithObjects(def1, def2).Build()
provider3 := &terraformapi.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: "provider3",
Namespace: "default",
},
}
app1 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "provider3",
Namespace: types.DefaultKubeVelaNS,
Labels: map[string]string{
types.LabelConfigType: "terraform-alibaba",
},
CreationTimestamp: metav1.NewTime(createdTime),
},
Status: common.AppStatus{
Phase: common.ApplicationRendering,
},
}
k8sClient := fake.NewClientBuilder().WithScheme(s).WithObjects(provider1, provider2, provider3, app1).Build()
h := &configUseCaseImpl{kubeClient: k8sClient}
@@ -323,7 +363,25 @@ func TestGetConfigs(t *testing.T) {
h: h,
},
want: want{
configs: nil,
configs: []*apis.Config{
{
Name: "provider1",
CreatedTime: &createdTime,
Status: "Ready",
},
{
Name: "provider2",
CreatedTime: &createdTime,
Status: "Not ready",
},
{
Name: "provider3",
CreatedTime: &createdTime,
Status: "Not ready",
ConfigType: "terraform-alibaba",
ApplicationStatus: common.ApplicationRendering,
},
},
},
},
}
@@ -338,3 +396,375 @@ func TestGetConfigs(t *testing.T) {
})
}
}
func TestMergeTargets(t *testing.T) {
currentTargets := []ApplicationDeployTarget{
{
Namespace: "n1",
Clusters: []string{"c1", "c2"},
}, {
Namespace: "n2",
Clusters: []string{"c3"},
},
}
targets := []*model.ClusterTarget{
{
Namespace: "n3",
ClusterName: "c4",
}, {
Namespace: "n1",
ClusterName: "c5",
},
{
Namespace: "n2",
ClusterName: "c3",
},
}
expected := []ApplicationDeployTarget{
{
Namespace: "n1",
Clusters: []string{"c1", "c2", "c5"},
}, {
Namespace: "n2",
Clusters: []string{"c3"},
}, {
Namespace: "n3",
Clusters: []string{"c4"},
},
}
got := mergeTargets(currentTargets, targets)
for i, g := range got {
clusters := g.Clusters
sort.SliceStable(clusters, func(i, j int) bool {
return clusters[i] < clusters[j]
})
got[i].Clusters = clusters
}
assert.DeepEqual(t, expected, got)
}
func TestConvert(t *testing.T) {
targets := []*model.ClusterTarget{
{
Namespace: "n3",
ClusterName: "c4",
}, {
Namespace: "n1",
ClusterName: "c5",
},
{
Namespace: "n2",
ClusterName: "c3",
},
{
Namespace: "n3",
ClusterName: "c5",
},
}
expected := []ApplicationDeployTarget{
{
Namespace: "n3",
Clusters: []string{"c4", "c5"},
},
{
Namespace: "n1",
Clusters: []string{"c5"},
}, {
Namespace: "n2",
Clusters: []string{"c3"},
},
}
got := convertClusterTargets(targets)
for i, g := range got {
clusters := g.Clusters
sort.SliceStable(clusters, func(i, j int) bool {
return clusters[i] < clusters[j]
})
got[i].Clusters = clusters
}
assert.DeepEqual(t, expected, got)
}
func TestDestroySyncConfigsApp(t *testing.T) {
s := runtime.NewScheme()
v1beta1.AddToScheme(s)
corev1.AddToScheme(s)
app1 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "config-sync-p1",
Namespace: types.DefaultKubeVelaNS,
},
}
k8sClient1 := fake.NewClientBuilder().WithScheme(s).WithObjects(app1).Build()
k8sClient2 := fake.NewClientBuilder().Build()
type args struct {
project string
k8sClient client.Client
}
type want struct {
errMsg string
}
ctx := context.Background()
testcases := map[string]struct {
args args
want want
}{
"found": {
args: args{
project: "p1",
k8sClient: k8sClient1,
},
},
"not found": {
args: args{
project: "p1",
k8sClient: k8sClient2,
},
want: want{
errMsg: "no kind is registered for the type v1beta1.Application",
},
},
}
for name, tc := range testcases {
t.Run(name, func(t *testing.T) {
err := destroySyncConfigsApp(ctx, tc.args.k8sClient, tc.args.project)
if err != nil || tc.want.errMsg != "" {
if !strings.Contains(err.Error(), tc.want.errMsg) {
assert.ErrorContains(t, err, tc.want.errMsg)
}
}
})
}
}
func TestSyncConfigs(t *testing.T) {
s := runtime.NewScheme()
v1beta1.AddToScheme(s)
corev1.AddToScheme(s)
secret1 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "s1",
Namespace: types.DefaultKubeVelaNS,
Labels: map[string]string{
types.LabelConfigCatalog: types.VelaCoreConfig,
types.LabelConfigProject: "p1",
types.LabelConfigSyncToMultiCluster: "true",
},
},
}
policies := []ApplicationDeployTarget{{
Namespace: "n9",
Clusters: []string{"c19"},
}}
properties, _ := json.Marshal(policies)
app1 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "config-sync-p2",
Namespace: types.DefaultKubeVelaNS,
},
Spec: v1beta1.ApplicationSpec{
Policies: []v1beta1.AppPolicy{{
Name: "c19",
Type: "topology",
Properties: &runtime.RawExtension{Raw: properties},
}},
},
}
k8sClient := fake.NewClientBuilder().WithScheme(s).WithObjects(secret1, app1).Build()
type args struct {
project string
targets []*model.ClusterTarget
}
type want struct {
errMsg string
}
ctx := context.Background()
testcases := []struct {
name string
args args
want want
}{
{
name: "create",
args: args{
project: "p1",
targets: []*model.ClusterTarget{{
ClusterName: "c1",
Namespace: "n1",
}},
},
},
{
name: "update",
args: args{
project: "p2",
targets: []*model.ClusterTarget{{
ClusterName: "c1",
Namespace: "n1",
}},
},
},
{
name: "skip config sync",
args: args{
project: "p3",
targets: []*model.ClusterTarget{{
ClusterName: "c1",
Namespace: "n1",
}},
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
err := SyncConfigs(ctx, k8sClient, tc.args.project, tc.args.targets)
if tc.want.errMsg != "" || err != nil {
assert.ErrorContains(t, err, tc.want.errMsg)
}
})
}
}
func TestDeleteConfig(t *testing.T) {
s := runtime.NewScheme()
v1beta1.AddToScheme(s)
corev1.AddToScheme(s)
terraformapi.AddToScheme(s)
provider1 := &terraformapi.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: "p1",
Namespace: "default",
},
}
provider2 := &terraformapi.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: "p2",
Namespace: "default",
},
}
provider3 := &terraformapi.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: "p3",
Namespace: "default",
},
}
app1 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "config-terraform-provider-p1",
Namespace: types.DefaultKubeVelaNS,
Labels: map[string]string{
types.LabelConfigType: "terraform-alibaba",
},
},
}
app2 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "p2",
Namespace: types.DefaultKubeVelaNS,
Labels: map[string]string{
types.LabelConfigType: "terraform-alibaba",
},
},
}
normalApp := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "a9",
Namespace: types.DefaultKubeVelaNS,
},
}
k8sClient := fake.NewClientBuilder().WithScheme(s).WithObjects(provider1, provider2, provider3, app1, app2, normalApp).Build()
h := &configUseCaseImpl{kubeClient: k8sClient}
type args struct {
configType string
name string
h ConfigHandler
}
type want struct {
errMsg string
}
ctx := context.Background()
testcases := []struct {
name string
args args
want want
}{
{
name: "delete a legacy terraform provider",
args: args{
configType: "terraform-alibaba",
name: "p1",
h: h,
},
want: want{},
},
{
name: "delete a terraform provider",
args: args{
configType: "terraform-alibaba",
name: "p2",
h: h,
},
want: want{},
},
{
name: "delete a terraform provider, but its application not found",
args: args{
configType: "terraform-alibaba",
name: "p3",
h: h,
},
want: want{
errMsg: "could not be disabled because it was created by enabling a Terraform provider or was manually created",
},
},
{
name: "delete a normal config, but failed",
args: args{
configType: "config-image-registry",
name: "a10",
h: h,
},
want: want{
errMsg: "not found",
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
err := tc.args.h.DeleteConfig(ctx, tc.args.configType, tc.args.name)
if tc.want.errMsg != "" || err != nil {
assert.ErrorContains(t, err, tc.want.errMsg)
}
})
}
}
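`TestMergeTargets` above expects a per-namespace union of the current deploy targets and the incoming ones (`n1` gains `c5`, `n2` stays `[c3]`, `n3` is added). That union can be sketched with plain namespace→clusters maps — hypothetical shapes standing in for `ApplicationDeployTarget` and `model.ClusterTarget`:

```go
package main

import (
	"fmt"
	"sort"
)

// mergeClusters unions cluster names per namespace across two target maps,
// mirroring what mergeTargets does after convertClusterTargets has grouped
// the incoming []*model.ClusterTarget.
func mergeClusters(current, incoming map[string][]string) map[string][]string {
	merged := map[string]map[string]struct{}{}
	add := func(src map[string][]string) {
		for ns, clusters := range src {
			if merged[ns] == nil {
				merged[ns] = map[string]struct{}{}
			}
			for _, c := range clusters {
				merged[ns][c] = struct{}{}
			}
		}
	}
	add(current)
	add(incoming)
	out := make(map[string][]string, len(merged))
	for ns, set := range merged {
		for c := range set {
			out[ns] = append(out[ns], c)
		}
		sort.Strings(out[ns]) // deterministic order, as the test sorts before comparing
	}
	return out
}

func main() {
	current := map[string][]string{"n1": {"c1", "c2"}, "n2": {"c3"}}
	incoming := map[string][]string{"n1": {"c5"}, "n2": {"c3"}, "n3": {"c4"}}
	fmt.Println(mergeClusters(current, incoming))
}
```

This is why the diff switches `mergeTargets` from appending `t.ClusterName` directly to unioning via sets: without the set, a target repeated in both inputs (like `n2`/`c3`) would duplicate cluster entries in the policy.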

@@ -20,16 +20,20 @@ import (
"context"
"strconv"
"github.com/oam-dev/kubevela/pkg/utils/config"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/apiserver/clients"
"github.com/oam-dev/kubevela/pkg/apiserver/log"
v1 "github.com/oam-dev/kubevela/pkg/apiserver/rest/apis/v1"
"github.com/oam-dev/kubevela/pkg/apiserver/rest/utils/bcode"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/utils"
"github.com/oam-dev/kubevela/pkg/utils/common"
"github.com/oam-dev/kubevela/pkg/utils/helm"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types2 "k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"helm.sh/helm/v3/pkg/repo"
@@ -60,33 +64,65 @@ type defaultHelmHandler struct {
k8sClient client.Client
}
func (d defaultHelmHandler) ListChartNames(ctx context.Context, url string, secretName string, skipCache bool) ([]string, error) {
// TODO(wangyikewxgm): support authority helm repo
charts, err := d.helper.ListChartsFromRepo(url, skipCache)
func (d defaultHelmHandler) ListChartNames(ctx context.Context, repoURL string, secretName string, skipCache bool) ([]string, error) {
if !utils.IsValidURL(repoURL) {
return nil, bcode.ErrRepoInvalidURL
}
var opts *common.HTTPOption
var err error
if len(secretName) != 0 {
opts, err = helm.SetBasicAuthInfo(ctx, d.k8sClient, types2.NamespacedName{Namespace: types.DefaultKubeVelaNS, Name: secretName})
if err != nil {
return nil, bcode.ErrRepoBasicAuth
}
}
charts, err := d.helper.ListChartsFromRepo(repoURL, skipCache, opts)
if err != nil {
log.Logger.Errorf("cannot fetch charts repo: %s, error: %s", url, err.Error())
log.Logger.Errorf("cannot fetch charts repo: %s, error: %s", utils.Sanitize(repoURL), err.Error())
return nil, bcode.ErrListHelmChart
}
return charts, nil
}
func (d defaultHelmHandler) ListChartVersions(ctx context.Context, url string, chartName string, secretName string, skipCache bool) (repo.ChartVersions, error) {
chartVersions, err := d.helper.ListVersions(url, chartName, skipCache)
func (d defaultHelmHandler) ListChartVersions(ctx context.Context, repoURL string, chartName string, secretName string, skipCache bool) (repo.ChartVersions, error) {
if !utils.IsValidURL(repoURL) {
return nil, bcode.ErrRepoInvalidURL
}
var opts *common.HTTPOption
var err error
if len(secretName) != 0 {
opts, err = helm.SetBasicAuthInfo(ctx, d.k8sClient, types2.NamespacedName{Namespace: types.DefaultKubeVelaNS, Name: secretName})
if err != nil {
return nil, bcode.ErrRepoBasicAuth
}
}
chartVersions, err := d.helper.ListVersions(repoURL, chartName, skipCache, opts)
if err != nil {
log.Logger.Errorf("cannot fetch chart versions repo: %s, chart: %s error: %s", url, chartName, err.Error())
log.Logger.Errorf("cannot fetch chart versions repo: %s, chart: %s error: %s", utils.Sanitize(repoURL), utils.Sanitize(chartName), err.Error())
return nil, bcode.ErrListHelmVersions
}
if len(chartVersions) == 0 {
log.Logger.Errorf("cannot fetch chart versions repo: %s, chart: %s", url, chartName)
log.Logger.Errorf("cannot fetch chart versions repo: %s, chart: %s", utils.Sanitize(repoURL), utils.Sanitize(chartName))
return nil, bcode.ErrChartNotExist
}
return chartVersions, nil
}
func (d defaultHelmHandler) GetChartValues(ctx context.Context, repoURL string, chartName string, version string, secretName string, skipCache bool) (map[string]interface{}, error) {
if !utils.IsValidURL(repoURL) {
return nil, bcode.ErrRepoInvalidURL
}
var opts *common.HTTPOption
var err error
if len(secretName) != 0 {
opts, err = helm.SetBasicAuthInfo(ctx, d.k8sClient, types2.NamespacedName{Namespace: types.DefaultKubeVelaNS, Name: secretName})
if err != nil {
return nil, bcode.ErrRepoBasicAuth
}
}
v, err := d.helper.GetValuesFromChart(repoURL, chartName, version, skipCache, opts)
if err != nil {
log.Logger.Errorf("cannot fetch chart values repo: %s, chart: %s, version: %s, error: %s", utils.Sanitize(repoURL), utils.Sanitize(chartName), utils.Sanitize(version), err.Error())
return nil, bcode.ErrGetChartValues
}
res := make(map[string]interface{}, len(v))
@@ -98,41 +134,20 @@ func (d defaultHelmHandler) ListChartRepo(ctx context.Context, projectName strin
var res []*v1.ChartRepoResponse
var err error
projectSecrets := corev1.SecretList{}
opts := []client.ListOption{
client.MatchingLabels{oam.LabelConfigType: "config-helm-repository"},
client.InNamespace(types.DefaultKubeVelaNS),
}
err = d.k8sClient.List(ctx, &projectSecrets, opts...)
if err != nil {
return nil, err
}
for _, item := range projectSecrets.Items {
if config.ProjectMatched(item.DeepCopy(), projectName) {
res = append(res, &v1.ChartRepoResponse{URL: string(item.Data["url"]), SecretName: item.Name})
}
}
return &v1.ChartRepoResponseList{ChartRepoResponse: res}, nil
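The rewritten `ListChartRepo` replaces two list calls (one project-scoped, one with a `DoesNotExist` selector for global repos) with a single list filtered in memory by `config.ProjectMatched`. The matching rule — a secret is visible when its project label is absent/empty (global) or equals the requested project — can be sketched as follows (`projectMatched` is a hypothetical stand-in for KubeVela's `config.ProjectMatched`):

```go
package main

import "fmt"

const labelConfigProject = "config.oam.dev/project"

// projectMatched reports whether a config secret's labels make it
// visible to the given project; configs without a project label are
// global and match every project.
func projectMatched(labels map[string]string, project string) bool {
	p, ok := labels[labelConfigProject]
	return !ok || p == "" || p == project
}

func main() {
	fmt.Println(projectMatched(map[string]string{}, "p1"))                         // global secret
	fmt.Println(projectMatched(map[string]string{labelConfigProject: "p1"}, "p1")) // same project
	fmt.Println(projectMatched(map[string]string{labelConfigProject: "p2"}, "p1")) // other project
}
```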


@@ -19,6 +19,10 @@ package usecase
import (
"context"
"encoding/json"
"io/ioutil"
"net/http"
"net/http/httptest"
"strings"
"testing"
. "github.com/onsi/ginkgo"
@@ -104,6 +108,90 @@ var _ = Describe("Test helm repo list", func() {
})
})
var _ = Describe("test helm usecase", func() {
ctx := context.Background()
var repoSec v1.Secret
BeforeEach(func() {
Expect(k8sClient.Create(ctx, &v1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "vela-system"}})).Should(SatisfyAny(BeNil(), util.AlreadyExistMatcher{}))
repoSec = v1.Secret{}
Expect(yaml.Unmarshal([]byte(repoSecret), &repoSec)).Should(BeNil())
Expect(k8sClient.Create(ctx, &repoSec)).Should(BeNil())
})
AfterEach(func() {
Expect(k8sClient.Delete(ctx, &repoSec)).Should(BeNil())
})
It("helm associated usecase interface test", func() {
var mockServer *httptest.Server
handler := http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
u, p, ok := request.BasicAuth()
if !ok || u != "admin" || p != "admin" {
writer.WriteHeader(401)
return
}
switch {
case request.URL.Path == "/index.yaml":
index, err := ioutil.ReadFile("./testdata/helm/index.yaml")
if err != nil {
writer.Write([]byte(err.Error()))
return
}
indexFile := strings.ReplaceAll(string(index), "server-url", mockServer.URL)
writer.Write([]byte(indexFile))
return
case strings.Contains(request.URL.Path, "mysql-8.8.23.tgz"):
pkg, err := ioutil.ReadFile("./testdata/helm/mysql-8.8.23.tgz")
if err != nil {
writer.Write([]byte(err.Error()))
return
}
writer.Write(pkg)
return
default:
writer.Write([]byte("404 page not found"))
}
})
mockServer = httptest.NewServer(handler)
defer mockServer.Close()
u := NewHelmUsecase()
charts, err := u.ListChartNames(ctx, mockServer.URL, "repo-secret", false)
Expect(err).Should(BeNil())
Expect(len(charts)).Should(BeEquivalentTo(1))
Expect(charts[0]).Should(BeEquivalentTo("mysql"))
versions, err := u.ListChartVersions(ctx, mockServer.URL, "mysql", "repo-secret", false)
Expect(err).Should(BeNil())
Expect(len(versions)).Should(BeEquivalentTo(1))
Expect(versions[0].Version).Should(BeEquivalentTo("8.8.23"))
values, err := u.GetChartValues(ctx, mockServer.URL, "mysql", "8.8.23", "repo-secret", false)
Expect(err).Should(BeNil())
Expect(values).ShouldNot(BeNil())
Expect(len(values)).ShouldNot(BeEquivalentTo(0))
})
It("covers the secret-not-exist error path", func() {
u := NewHelmUsecase()
_, err := u.ListChartNames(ctx, "http://127.0.0.1:8080", "repo-secret-notExist", false)
Expect(err).ShouldNot(BeNil())
_, err = u.ListChartVersions(ctx, "http://127.0.0.1:8080", "mysql", "repo-secret-notExist", false)
Expect(err).ShouldNot(BeNil())
_, err = u.GetChartValues(ctx, "http://127.0.0.1:8080", "mysql", "8.8.23", "repo-secret-notExist", false)
Expect(err).ShouldNot(BeNil())
})
})
var (
src = `{
"OAMSpecVer":"v0.2",
@@ -250,6 +338,7 @@ kind: Secret
metadata:
labels:
config.oam.dev/type: config-helm-repository
config.oam.dev/project: ""
name: global-helm-repo
namespace: vela-system
type: Opaque
@@ -266,5 +355,19 @@ metadata:
stringData:
url: https://kedacore.github.io/charts
type: Opaque
`
repoSecret = `
apiVersion: v1
kind: Secret
metadata:
name: repo-secret
namespace: vela-system
labels:
config.oam.dev/type: config-helm-repository
config.oam.dev/project: my-project-2
stringData:
username: admin
password: admin
type: Opaque
`
)


@@ -20,9 +20,16 @@ import (
"context"
"errors"
"fmt"
"strings"
terraformtypes "github.com/oam-dev/terraform-controller/api/types"
terraformapi "github.com/oam-dev/terraform-controller/api/v1beta1"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/apiserver/clients"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore"
"github.com/oam-dev/kubevela/pkg/apiserver/log"
@@ -46,6 +53,7 @@ type ProjectUsecase interface {
DeleteProjectUser(ctx context.Context, projectName string, userName string) error
UpdateProjectUser(ctx context.Context, projectName string, userName string, req apisv1.UpdateProjectUserRequest) (*apisv1.ProjectUserBase, error)
Init(ctx context.Context) error
GetConfigs(ctx context.Context, projectName, configType string) ([]*apisv1.Config, error)
}
type projectUsecaseImpl struct {
@@ -290,7 +298,11 @@ func (p *projectUsecaseImpl) DeleteProject(ctx context.Context, name string) err
return err
}
}
if err := p.ds.Delete(ctx, &model.Project{Name: name}); err != nil {
return err
}
// delete config-sync application
return destroySyncConfigsApp(ctx, p.k8sClient, name)
}
// CreateProject create project
@@ -466,6 +478,93 @@ func (p *projectUsecaseImpl) UpdateProjectUser(ctx context.Context, projectName
return ConvertProjectUserModel2Base(&projectUser), nil
}
func (p *projectUsecaseImpl) GetConfigs(ctx context.Context, projectName, configType string) ([]*apisv1.Config, error) {
var (
configs []*apisv1.Config
legacyTerraformProviders []*apisv1.Config
apps = &v1beta1.ApplicationList{}
)
if err := p.k8sClient.List(ctx, apps, client.InNamespace(types.DefaultKubeVelaNS),
client.MatchingLabels{
model.LabelSourceOfTruth: model.FromInner,
types.LabelConfigCatalog: types.VelaCoreConfig,
}); err != nil {
return nil, err
}
if configType == types.TerraformProvider || configType == "" {
// legacy providers
var providers = &terraformapi.ProviderList{}
if err := p.k8sClient.List(ctx, providers, client.InNamespace(types.DefaultAppNamespace)); err != nil {
return nil, err
}
for _, p := range providers.Items {
if p.Labels[types.LabelConfigCatalog] == types.VelaCoreConfig {
continue
}
t := p.CreationTimestamp.Time
var status = configIsNotReady
if p.Status.State == terraformtypes.ProviderIsReady {
status = configIsReady
}
legacyTerraformProviders = append(legacyTerraformProviders, &apisv1.Config{
Name: p.Name,
CreatedTime: &t,
Status: status,
})
}
}
switch configType {
case types.TerraformProvider:
for _, a := range apps.Items {
appProject := a.Labels[types.LabelConfigProject]
if a.Status.Phase != common.ApplicationRunning || (appProject != "" && appProject != projectName) ||
!strings.Contains(a.Labels[types.LabelConfigType], types.TerraformComponentPrefix) {
continue
}
configs = append(configs, retrieveConfigFromApplication(a, appProject))
}
configs = append(configs, legacyTerraformProviders...)
case "":
for _, a := range apps.Items {
appProject := a.Labels[types.LabelConfigProject]
if appProject != "" && appProject != projectName {
continue
}
configs = append(configs, retrieveConfigFromApplication(a, appProject))
}
configs = append(configs, legacyTerraformProviders...)
case types.DexConnector, types.HelmRepository, types.ImageRegistry:
t := strings.ReplaceAll(configType, "config-", "")
for _, a := range apps.Items {
appProject := a.Labels[types.LabelConfigProject]
if a.Status.Phase != common.ApplicationRunning || (appProject != "" && appProject != projectName) {
continue
}
if a.Labels[types.LabelConfigType] == t {
configs = append(configs, retrieveConfigFromApplication(a, appProject))
}
}
default:
return nil, errors.New("unsupported config type")
}
for i, c := range configs {
if c.ConfigType != "" {
d := &v1beta1.ComponentDefinition{}
err := p.k8sClient.Get(ctx, client.ObjectKey{Namespace: types.DefaultKubeVelaNS, Name: c.ConfigType}, d)
if err != nil {
klog.InfoS("failed to get component definition", "ComponentDefinition", c.ConfigType, "err", err)
} else {
configs[i].ConfigTypeAlias = d.Annotations[definitionAlias]
}
}
}
return configs, nil
}
// ConvertProjectModel2Base convert project model to base struct
func ConvertProjectModel2Base(project *model.Project, owner *model.User) *apisv1.ProjectBase {
base := &apisv1.ProjectBase{
@@ -492,3 +591,25 @@ func ConvertProjectUserModel2Base(user *model.ProjectUser) *apisv1.ProjectUserBa
}
return base
}
func retrieveConfigFromApplication(a v1beta1.Application, project string) *apisv1.Config {
var (
applicationStatus = a.Status.Phase
status string
)
if applicationStatus == common.ApplicationRunning {
status = configIsReady
} else {
status = configIsNotReady
}
return &apisv1.Config{
ConfigType: a.Labels[types.LabelConfigType],
Name: a.Name,
Project: project,
CreatedTime: &(a.CreationTimestamp.Time),
ApplicationStatus: applicationStatus,
Status: status,
Alias: a.Annotations[types.AnnotationConfigAlias],
Description: a.Annotations[types.AnnotationConfigDescription],
}
}


@@ -18,14 +18,23 @@ package usecase
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
terraformtypes "github.com/oam-dev/terraform-controller/api/types"
terraformapi "github.com/oam-dev/terraform-controller/api/v1beta1"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"gotest.tools/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
velatypes "github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore"
"github.com/oam-dev/kubevela/pkg/apiserver/model"
@@ -155,6 +164,18 @@ var _ = Describe("Test project usecase functions", func() {
Name: "test-project",
Description: "this is a project description",
}
app1 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "config-sync-test-project",
Namespace: "vela-system",
},
Spec: v1beta1.ApplicationSpec{
Components: []common.ApplicationComponent{{
Type: "aaa",
}},
},
}
Expect(k8sClient.Create(context.TODO(), app1)).Should(BeNil())
_, err := projectUsecase.CreateProject(context.TODO(), req)
Expect(err).Should(BeNil())
@@ -223,6 +244,19 @@ var _ = Describe("Test project usecase functions", func() {
Name: "test-project",
Description: "this is a project description",
}
app1 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "config-sync-test-project",
Namespace: "vela-system",
},
Spec: v1beta1.ApplicationSpec{
Components: []common.ApplicationComponent{{
Type: "aaa",
}},
},
}
Expect(k8sClient.Create(context.TODO(), app1)).Should(BeNil())
_, err := projectUsecase.CreateProject(context.TODO(), req)
Expect(err).Should(BeNil())
@@ -244,3 +278,222 @@ var _ = Describe("Test project usecase functions", func() {
Expect(roles.Total).Should(BeEquivalentTo(0))
})
})
func TestProjectGetConfigs(t *testing.T) {
s := runtime.NewScheme()
v1beta1.AddToScheme(s)
corev1.AddToScheme(s)
terraformapi.AddToScheme(s)
createdTime, _ := time.Parse(time.UnixDate, "Wed Apr 7 11:06:39 PST 2022")
app1 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "a1",
Namespace: velatypes.DefaultKubeVelaNS,
Labels: map[string]string{
model.LabelSourceOfTruth: model.FromInner,
velatypes.LabelConfigCatalog: velatypes.VelaCoreConfig,
velatypes.LabelConfigType: "terraform-provider",
"config.oam.dev/project": "p1",
},
CreationTimestamp: metav1.NewTime(createdTime),
},
Status: common.AppStatus{Phase: common.ApplicationRunning},
}
app2 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "a2",
Namespace: velatypes.DefaultKubeVelaNS,
Labels: map[string]string{
model.LabelSourceOfTruth: model.FromInner,
velatypes.LabelConfigCatalog: velatypes.VelaCoreConfig,
velatypes.LabelConfigType: "terraform-provider",
},
CreationTimestamp: metav1.NewTime(createdTime),
},
Status: common.AppStatus{Phase: common.ApplicationRunning},
}
app3 := &v1beta1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "a3",
Namespace: velatypes.DefaultKubeVelaNS,
Labels: map[string]string{
model.LabelSourceOfTruth: model.FromInner,
velatypes.LabelConfigCatalog: velatypes.VelaCoreConfig,
velatypes.LabelConfigType: "dex-connector",
"config.oam.dev/project": "p3",
},
CreationTimestamp: metav1.NewTime(createdTime),
},
Status: common.AppStatus{Phase: common.ApplicationRunning},
}
provider1 := &terraformapi.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: "provider1",
Namespace: "default",
CreationTimestamp: metav1.NewTime(createdTime),
},
Status: terraformapi.ProviderStatus{
State: terraformtypes.ProviderIsReady,
},
}
provider2 := &terraformapi.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: "provider2",
Namespace: "default",
Labels: map[string]string{
velatypes.LabelConfigCatalog: velatypes.VelaCoreConfig,
},
},
Status: terraformapi.ProviderStatus{
State: terraformtypes.ProviderIsNotReady,
},
}
k8sClient := fake.NewClientBuilder().WithScheme(s).WithObjects(app1, app2, app3, provider1, provider2).Build()
h := &projectUsecaseImpl{k8sClient: k8sClient}
type args struct {
projectName string
configType string
h ProjectUsecase
}
type want struct {
configs []*apisv1.Config
errMsg string
}
ctx := context.Background()
testcases := []struct {
name string
args args
want want
}{
{
name: "project is matched",
args: args{
projectName: "p1",
configType: "terraform-provider",
h: h,
},
want: want{
configs: []*apisv1.Config{{
ConfigType: "terraform-provider",
Name: "a1",
Project: "p1",
CreatedTime: &createdTime,
ApplicationStatus: "running",
Status: "Ready",
}, {
ConfigType: "terraform-provider",
Name: "a2",
Project: "",
CreatedTime: &createdTime,
ApplicationStatus: "running",
Status: "Ready",
}, {
Name: "provider1",
CreatedTime: &createdTime,
Status: "Ready",
}},
},
},
{
name: "project is not matched",
args: args{
projectName: "p999",
configType: "terraform-provider",
h: h,
},
want: want{
configs: []*apisv1.Config{{
ConfigType: "terraform-provider",
Name: "a2",
Project: "",
CreatedTime: &createdTime,
ApplicationStatus: "running",
Status: "Ready",
}, {
Name: "provider1",
CreatedTime: &createdTime,
Status: "Ready",
}},
},
},
{
name: "config type is empty",
args: args{
projectName: "p3",
configType: "",
h: h,
},
want: want{
configs: []*apisv1.Config{{
ConfigType: "terraform-provider",
Name: "a2",
Project: "",
CreatedTime: &createdTime,
ApplicationStatus: "running",
Status: "Ready",
}, {
ConfigType: "dex-connector",
Name: "a3",
Project: "p3",
CreatedTime: &createdTime,
ApplicationStatus: "running",
Status: "Ready",
}, {
Name: "provider1",
CreatedTime: &createdTime,
Status: "Ready",
}},
},
},
{
name: "config type is dex",
args: args{
projectName: "p3",
configType: "config-dex-connector",
h: h,
},
want: want{
configs: []*apisv1.Config{{
ConfigType: "dex-connector",
Name: "a3",
Project: "p3",
CreatedTime: &createdTime,
ApplicationStatus: "running",
Status: "Ready",
}},
},
},
{
name: "config type is invalid",
args: args{
configType: "xxx",
h: h,
},
want: want{
errMsg: "unsupported config type",
},
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
got, err := tc.args.h.GetConfigs(ctx, tc.args.projectName, tc.args.configType)
if tc.want.errMsg != "" || err != nil {
assert.ErrorContains(t, err, tc.want.errMsg)
}
assert.DeepEqual(t, got, tc.want.configs)
})
}
}


@@ -182,6 +182,7 @@ var ResourceMaps = map[string]resourceMetadata{
pathName: "userName",
},
"applicationTemplate": {},
"configs": {},
},
pathName: "projectName",
},


@@ -21,6 +21,7 @@ import (
"errors"
"fmt"
"reflect"
"time"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
@@ -81,6 +82,7 @@ func (u systemInfoUsecaseImpl) Get(ctx context.Context) (*model.SystemInfo, erro
info.InstallID = installID
info.EnableCollection = true
info.LoginType = model.LoginTypeLocal
info.BaseModel = model.BaseModel{CreateTime: time.Now()}
err = u.ds.Add(ctx, info)
if err != nil {
return nil, err
@@ -100,6 +102,16 @@ func (u systemInfoUsecaseImpl) GetSystemInfo(ctx context.Context) (*v1.SystemInf
VelaVersion: version.VelaVersion,
GitVersion: version.GitRevision,
},
StatisticInfo: v1.StatisticInfo{
AppCount: info.StatisticInfo.AppCount,
ClusterCount: info.StatisticInfo.ClusterCount,
EnableAddonList: info.StatisticInfo.EnabledAddon,
ComponentDefinitionTopList: info.StatisticInfo.TopKCompDef,
TraitDefinitionTopList: info.StatisticInfo.TopKTraitDef,
WorkflowDefinitionTopList: info.StatisticInfo.TopKWorkflowStepDef,
PolicyDefinitionTopList: info.StatisticInfo.TopKPolicyDef,
UpdateTime: info.StatisticInfo.UpdateTime,
},
}, nil
}
@@ -114,10 +126,19 @@ func (u systemInfoUsecaseImpl) UpdateSystemInfo(ctx context.Context, sysInfo v1.
LoginType: sysInfo.LoginType,
BaseModel: model.BaseModel{
CreateTime: info.CreateTime,
UpdateTime: time.Now(),
},
StatisticInfo: info.StatisticInfo,
}
if sysInfo.LoginType == model.LoginTypeDex {
admin := &model.User{Name: model.DefaultAdminUserName}
if err := u.ds.Get(ctx, admin); err != nil {
return nil, err
}
if admin.Email == "" {
return nil, bcode.ErrEmptyAdminEmail
}
if err := generateDexConfig(ctx, u.kubeClient, sysInfo.VelaAddress, &modifiedInfo); err != nil {
return nil, err
}
@@ -128,24 +149,32 @@ func (u systemInfoUsecaseImpl) UpdateSystemInfo(ctx context.Context, sysInfo v1.
}
return &v1.SystemInfoResponse{
SystemInfo: v1.SystemInfo{
PlatformID: modifiedInfo.InstallID,
EnableCollection: modifiedInfo.EnableCollection,
LoginType: modifiedInfo.LoginType,
// always use the initial createTime as system's installTime
InstallTime: info.CreateTime,
},
SystemVersion: v1.SystemVersion{VelaVersion: version.VelaVersion, GitVersion: version.GitRevision},
}, nil
}
func (u systemInfoUsecaseImpl) Init(ctx context.Context) error {
info, err := u.Get(ctx)
if err != nil {
return err
}
signedKey = info.InstallID
_, err = initDexConfig(ctx, u.kubeClient, "http://velaux.com", &model.SystemInfo{})
return err
}
func convertInfoToBase(info *model.SystemInfo) v1.SystemInfo {
return v1.SystemInfo{
PlatformID: info.InstallID,
EnableCollection: info.EnableCollection,
LoginType: info.LoginType,
InstallTime: info.CreateTime,
}
}
@@ -158,6 +187,9 @@ func generateDexConfig(ctx context.Context, kubeClient client.Client, velaAddres
if err != nil {
return err
}
if len(connectors) < 1 {
return bcode.ErrNoDexConnector
}
config, err := model.NewJSONStructByStruct(info.DexConfig)
if err != nil {
return err


@@ -0,0 +1,36 @@
apiVersion: v1
entries:
mysql:
- annotations:
category: Database
apiVersion: v2
appVersion: 8.0.28
created: "2022-04-07T03:26:37.378966939Z"
dependencies:
- name: common
repository: https://charts.bitnami.com/bitnami
tags:
- bitnami-common
version: 1.x.x
description: Chart to create a Highly available MySQL cluster
digest: 96f79c6daba90fb40fc698979fab33f7a60987b1d23cd5080bc885129568a423
home: https://github.com/bitnami/charts/tree/master/bitnami/mysql
icon: https://bitnami.com/assets/stacks/mysql/img/mysql-stack-220x234.png
keywords:
- mysql
- database
- sql
- cluster
- high availability
maintainers:
- email: containers@bitnami.com
name: Bitnami
name: mysql
sources:
- https://github.com/bitnami/bitnami-docker-mysql
- https://mysql.com
urls:
- server-url/mysql-8.8.23.tgz
version: 8.8.23
generated: "2022-04-07T03:26:37Z"
serverInfo: {}

Binary file not shown.


@@ -19,6 +19,8 @@ package usecase
import (
"context"
"errors"
"math/rand"
"strconv"
"golang.org/x/crypto/bcrypt"
"helm.sh/helm/v3/pkg/time"
@@ -82,7 +84,15 @@ func (u *userUsecaseImpl) Init(ctx context.Context) error {
Name: admin,
}); err != nil {
if errors.Is(err, datastore.ErrRecordNotExist) {
pwd := func() string {
p := utils2.RandomString(8)
p += strconv.Itoa(rand.Intn(9)) // #nosec
r := append([]rune(p), 'a'+rune(rand.Intn(26))) // #nosec
rand.Shuffle(len(r), func(i, j int) { r[i], r[j] = r[j], r[i] })
p = string(r)
return p
}()
encrypted, err := GeneratePasswordHash(pwd)
if err != nil {
return err
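The admin-password bootstrap above appends one random digit and one lowercase letter to the random string, so the generated password always satisfies a letters-plus-digits policy, then shuffles so the guaranteed characters don't sit at fixed positions. A standalone sketch of the same idea, using `math/rand` as the diff does (`randomString` is a stand-in assumption for the project's `utils2.RandomString`):

```go
package main

import (
	"fmt"
	"math/rand"
	"strconv"
)

// randomString is a stand-in for utils2.RandomString: n random lowercase letters.
func randomString(n int) string {
	r := make([]rune, n)
	for i := range r {
		r[i] = 'a' + rune(rand.Intn(26))
	}
	return string(r)
}

// genPassword guarantees at least one digit and one letter, then shuffles
// so the guaranteed characters are not always at the end.
func genPassword() string {
	p := randomString(8)
	p += strconv.Itoa(rand.Intn(9))                 // guaranteed digit
	r := append([]rune(p), 'a'+rune(rand.Intn(26))) // guaranteed letter
	rand.Shuffle(len(r), func(i, j int) { r[i], r[j] = r[j], r[i] })
	return string(r)
}

func main() {
	// 8 random letters + 1 digit + 1 letter.
	fmt.Println(len(genPassword()))
}
```

For a production credential, `crypto/rand` would be the safer source of randomness; the sketch mirrors the diff's use of `math/rand`.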
@@ -184,7 +194,7 @@ func (u *userUsecaseImpl) DeleteUser(ctx context.Context, username string) error
}
}
if err := u.ds.Delete(ctx, &model.User{Name: username}); err != nil {
log.Logger.Errorf("failed to delete user %s %v", utils2.Sanitize(username), err.Error())
return err
}
return nil


@@ -37,4 +37,6 @@ var (
ErrInvalidDexConfig = NewBcode(400, 12009, "the dex config is invalid")
// ErrRefreshTokenExpired is the error of refresh token expired
ErrRefreshTokenExpired = NewBcode(400, 12010, "the refresh token is expired")
// ErrNoDexConnector is the error of no dex connector
ErrNoDexConnector = NewBcode(400, 12011, "there is no dex connector")
)


@@ -30,3 +30,9 @@ var ErrChartNotExist = NewBcode(200, 13004, "this chart not exist in the reposit
// ErrSkipCacheParameter means the skip cache parameter miss config
var ErrSkipCacheParameter = NewBcode(400, 13005, "skip cache parameter miss config, the value only can be true or false")
// ErrRepoBasicAuth means extract repo auth info from secret error
var ErrRepoBasicAuth = NewBcode(400, 13006, "extract repo auth info from secret error")
// ErrRepoInvalidURL means user input url is invalid
var ErrRepoInvalidURL = NewBcode(400, 13007, "user input repository url is invalid")


@@ -35,4 +35,6 @@ var (
ErrUsernameNotExist = NewBcode(401, 14008, "the username is not exist")
// ErrDexNotFound is the error of dex not found
ErrDexNotFound = NewBcode(200, 14009, "the dex is not found")
// ErrEmptyAdminEmail is the error of empty admin email
ErrEmptyAdminEmail = NewBcode(400, 14010, "the admin email is empty, please set the admin email before using sso login")
)


@@ -296,7 +296,7 @@ func (s *enabledAddonWebService) GetWebService() *restful.WebService {
Filter(s.rbacUsecase.CheckPerm("addon", "list")).
Param(ws.QueryParameter("registry", "filter addons from given registry").DataType("string")).
Param(ws.QueryParameter("query", "Fuzzy search based on name and description.").DataType("string")).
Returns(200, "OK", apis.ListEnabledAddonResponse{}).
Returns(400, "Bad Request", bcode.Bcode{}).
Writes(apis.ListAddonResponse{}))


@@ -20,7 +20,6 @@ import (
restfulspec "github.com/emicklei/go-restful-openapi/v2"
"github.com/emicklei/go-restful/v3"
"github.com/oam-dev/kubevela/pkg/apiserver/log"
apis "github.com/oam-dev/kubevela/pkg/apiserver/rest/apis/v1"
"github.com/oam-dev/kubevela/pkg/apiserver/rest/usecase"
"github.com/oam-dev/kubevela/pkg/apiserver/rest/utils/bcode"
@@ -151,7 +150,10 @@ func (s *configWebService) createConfig(req *restful.Request, res *restful.Respo
err := s.handler.CreateConfig(req.Request.Context(), createReq)
if err != nil {
log.Logger.Errorf("failed to create config: %s", err.Error())
bcode.ReturnError(req, res, err)
return
}
if err := res.WriteEntity(apis.EmptyResponse{}); err != nil {
bcode.ReturnError(req, res, err)
return
}


@@ -176,6 +176,16 @@ func (n *projectWebService) GetWebService() *restful.WebService {
Returns(200, "OK", []apis.PermissionBase{}).
Writes([]apis.PermissionBase{}))
ws.Route(ws.GET("/{projectName}/configs").To(n.getConfigs).
Doc("get configs which are in a project").
Metadata(restfulspec.KeyOpenAPITags, tags).
Filter(n.rbacUsecase.CheckPerm("project", "list")).
Param(ws.QueryParameter("configType", "config type").DataType("string")).
Param(ws.PathParameter("projectName", "identifier of the project").DataType("string")).
Returns(200, "OK", []*apis.Config{}).
Returns(400, "Bad Request", bcode.Bcode{}).
Writes([]*apis.Config{}))
ws.Filter(authCheckFilter)
return ws
}
@@ -503,3 +513,23 @@ func (n *projectWebService) listProjectPermissions(req *restful.Request, res *re
return
}
}
func (n *projectWebService) getConfigs(req *restful.Request, res *restful.Response) {
configs, err := n.projectUsecase.GetConfigs(req.Request.Context(), req.PathParameter("projectName"), req.QueryParameter("configType"))
if err != nil {
bcode.ReturnError(req, res, err)
return
}
if configs == nil {
if err := res.WriteEntity(apis.EmptyResponse{}); err != nil {
bcode.ReturnError(req, res, err)
return
}
return
}
err = res.WriteEntity(configs)
if err != nil {
bcode.ReturnError(req, res, err)
return
}
}


@@ -66,6 +66,11 @@ func (c *CR2UX) initCache(ctx context.Context) error {
}
func (c *CR2UX) shouldSync(ctx context.Context, targetApp *v1beta1.Application, del bool) bool {
if targetApp != nil && targetApp.Labels != nil && targetApp.Labels[model.LabelSourceOfTruth] == model.FromInner {
return false
}
key := formatAppComposedName(targetApp.Name, targetApp.Namespace)
cachedData, ok := c.cache.Load(key)
if ok {
@@ -85,9 +90,11 @@ func (c *CR2UX) shouldSync(ctx context.Context, targetApp *v1beta1.Application,
// This is a double check to make sure the app not be converted and un-deployed
sot := c.CheckSoTFromAppMeta(ctx, targetApp.Name, targetApp.Namespace, CheckSoTFromCR(targetApp))
switch sot {
case model.FromUX:
// we don't sync if the application is not created from CR
return false
case model.FromInner:
// we don't sync if the application is not created from CR
return false
case model.FromCR:


@@ -25,6 +25,7 @@ import (
corev1 "k8s.io/api/core/v1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/pkg/apiserver/datastore"
"github.com/oam-dev/kubevela/pkg/apiserver/model"
@@ -95,5 +96,28 @@ var _ = Describe("Test Cache", func() {
Expect(cr2ux.shouldSync(ctx, app1, false)).Should(BeEquivalentTo(true))
})
It("Test don't cache with from inner system label", func() {
dbNamespace := "cache-db-ns2-test"
ds, err := NewDatastore(datastore.Config{Type: "kubeapi", Database: dbNamespace})
Expect(ds).ToNot(BeNil())
Expect(err).Should(BeNil())
var ns = corev1.Namespace{}
ns.Name = dbNamespace
err = k8sClient.Create(context.TODO(), &ns)
Expect(err).Should(SatisfyAny(BeNil(), &util.AlreadyExistMatcher{}))
cr2ux := CR2UX{ds: ds, cli: k8sClient, cache: sync.Map{}}
ctx := context.Background()
app1 := &v1beta1.Application{}
app1.Name = "app1"
app1.Namespace = dbNamespace
app1.Generation = 1
app1.Spec.Components = []common.ApplicationComponent{}
app1.Labels = make(map[string]string)
app1.Labels[model.LabelSourceOfTruth] = model.FromInner
Expect(k8sClient.Create(ctx, app1)).Should(BeNil())
Expect(cr2ux.shouldSync(ctx, app1, false)).Should(BeEquivalentTo(false))
})
})


@@ -20,6 +20,7 @@ import (
"context"
"strconv"
"strings"
"time"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/pkg/apiserver/model"
@@ -49,7 +50,8 @@ func (c *CR2UX) ConvertApp2DatastoreApp(ctx context.Context, targetApp *v1beta1.
model.LabelSourceOfTruth: model.FromCR,
},
}
appMeta.CreateTime = targetApp.CreationTimestamp.Time
appMeta.UpdateTime = time.Now()
// 1. convert app meta and env
dsApp := &model.DataStoreApp{
AppMeta: appMeta,


@@ -27,7 +27,7 @@ import (
"cuelang.org/go/cue/format"
json2cue "cuelang.org/go/encoding/json"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
terraformapi "github.com/oam-dev/terraform-controller/api/v1beta2"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
@@ -63,7 +63,9 @@ const (
// WriteConnectionSecretToRefKey is used to create a secret for cloud resource connection
WriteConnectionSecretToRefKey = "writeConnectionSecretToRef"
// RegionKey is the region of a Cloud Provider
// It's used to override the region of a Cloud Provider
// Refer to https://github.com/oam-dev/terraform-controller/blob/master/api/v1beta2/configuration_types.go#L66 for details
RegionKey = "customRegion"
// ProviderRefKey is the reference of a Provider
ProviderRefKey = "providerRef"
)
@@ -666,7 +668,7 @@ func generateTerraformConfigurationWorkload(wl *Workload, ns string) (*unstructu
}
configuration := terraformapi.Configuration{
TypeMeta: metav1.TypeMeta{APIVersion: "terraform.core.oam.dev/v1beta2", Kind: "Configuration"},
ObjectMeta: metav1.ObjectMeta{
Name: wl.Name,
Namespace: ns,
@@ -679,8 +681,6 @@ func generateTerraformConfigurationWorkload(wl *Workload, ns string) (*unstructu
switch wl.FullTemplate.Terraform.Type {
case "hcl":
configuration.Spec.HCL = wl.FullTemplate.Terraform.Configuration
case "json":
configuration.Spec.JSON = wl.FullTemplate.Terraform.Configuration
case "remote":
configuration.Spec.Remote = wl.FullTemplate.Terraform.Configuration
configuration.Spec.Path = wl.FullTemplate.Terraform.Path


@@ -30,7 +30,7 @@ import (
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/google/go-cmp/cmp"
terraformtypes "github.com/oam-dev/terraform-controller/api/types/crossplane-runtime"
terraformapi "github.com/oam-dev/terraform-controller/api/v1beta2"
"github.com/pkg/errors"
"gotest.tools/assert"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -584,10 +584,6 @@ variable "password" {
raw.Raw = data
workload := terraformapi.Configuration{
TypeMeta: metav1.TypeMeta{
APIVersion: "terraform.core.oam.dev/v1beta1",
Kind: "Configuration",
},
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app.oam.dev/appRevision": "v1",
@@ -902,7 +898,6 @@ func TestGenerateTerraformConfigurationWorkload(t *testing.T) {
type args struct {
writeConnectionSecretToRef *terraformtypes.SecretReference
json string
hcl string
remote string
params map[string]interface{}
@@ -917,16 +912,6 @@ func TestGenerateTerraformConfigurationWorkload(t *testing.T) {
args args
want want
}{
"json workload with secret": {
args: args{
json: "abc",
params: map[string]interface{}{"acl": "private",
"writeConnectionSecretToRef": map[string]interface{}{"name": "oss", "namespace": ""}},
writeConnectionSecretToRef: &terraformtypes.SecretReference{Name: "oss", Namespace: "default"},
},
want: want{err: nil}},
"valid hcl workload": {
args: args{
hcl: "abc",
@@ -999,19 +984,6 @@ func TestGenerateTerraformConfigurationWorkload(t *testing.T) {
}
configSpec.WriteConnectionSecretToReference = tc.args.writeConnectionSecretToRef
}
if tc.args.json != "" {
template = &Template{
Terraform: &common.Terraform{
Configuration: tc.args.json,
Type: "json",
},
}
configSpec = terraformapi.ConfigurationSpec{
JSON: tc.args.json,
Variable: raw,
}
configSpec.WriteConnectionSecretToReference = tc.args.writeConnectionSecretToRef
}
if tc.args.remote != "" {
template = &Template{
Terraform: &common.Terraform{
@@ -1025,7 +997,7 @@ func TestGenerateTerraformConfigurationWorkload(t *testing.T) {
}
configSpec.WriteConnectionSecretToReference = tc.args.writeConnectionSecretToRef
}
if tc.args.hcl == "" && tc.args.json == "" && tc.args.remote == "" {
if tc.args.hcl == "" && tc.args.remote == "" {
template = &Template{
Terraform: &common.Terraform{},
}
@@ -1061,7 +1033,7 @@ func TestGenerateTerraformConfigurationWorkload(t *testing.T) {
if err == nil {
tfConfiguration := terraformapi.Configuration{
TypeMeta: metav1.TypeMeta{APIVersion: "terraform.core.oam.dev/v1beta1", Kind: "Configuration"},
TypeMeta: metav1.TypeMeta{APIVersion: "terraform.core.oam.dev/v1beta2", Kind: "Configuration"},
ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
Spec: configSpec,
}
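With the `json` case removed above, only the `hcl` and `remote` configuration types remain. A minimal sketch of the remaining dispatch, with hypothetical reduced types in place of `common.Terraform` and the v1beta2 `ConfigurationSpec`:

```go
package main

import (
	"errors"
	"fmt"
)

// terraform is a simplified stand-in for common.Terraform.
type terraform struct {
	Type          string
	Configuration string
	Path          string
}

// spec is a reduced stand-in for the v1beta2 ConfigurationSpec.
type spec struct {
	HCL    string
	Remote string
	Path   string
}

// fillSpec mirrors the surviving switch arms: "json" is no longer
// accepted after the v1beta2 migration.
func fillSpec(tf terraform) (spec, error) {
	var s spec
	switch tf.Type {
	case "hcl":
		s.HCL = tf.Configuration
	case "remote":
		s.Remote = tf.Configuration
		s.Path = tf.Path
	default:
		return s, errors.New("unsupported terraform configuration type: " + tf.Type)
	}
	return s, nil
}

func main() {
	s, err := fillSpec(terraform{Type: "hcl", Configuration: "abc"})
	fmt.Println(s.HCL, err) // prints abc <nil>
}
```
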


@@ -288,6 +288,30 @@ func (p *Parser) GenerateAppFileFromRevision(appRev *v1beta1.ApplicationRevision
for k, v := range appRev.Spec.WorkflowStepDefinitions {
appfile.RelatedWorkflowStepDefinitions[k] = v.DeepCopy()
}
// compatibility code for upgrading to v1.3: workflow step definitions were not recorded before v1.2
if len(appfile.RelatedWorkflowStepDefinitions) == 0 && len(appfile.WorkflowSteps) > 0 {
ctx := context.Background()
for _, workflowStep := range appfile.WorkflowSteps {
if wftypes.IsBuiltinWorkflowStepType(workflowStep.Type) {
continue
}
if _, found := appfile.RelatedWorkflowStepDefinitions[workflowStep.Type]; found {
continue
}
def := &v1beta1.WorkflowStepDefinition{}
if err := util.GetCapabilityDefinition(ctx, p.client, def, workflowStep.Type); err != nil {
return nil, errors.Wrapf(err, "failed to get workflow step definition %s", workflowStep.Type)
}
appfile.RelatedWorkflowStepDefinitions[workflowStep.Type] = def
}
appRev.Spec.WorkflowStepDefinitions = make(map[string]v1beta1.WorkflowStepDefinition)
for name, def := range appfile.RelatedWorkflowStepDefinitions {
appRev.Spec.WorkflowStepDefinitions[name] = *def
}
}
for k, v := range appRev.Spec.ScopeDefinitions {
appfile.RelatedScopeDefinitions[k] = v.DeepCopy()
}
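The backfill above only fetches definitions for step types that are neither builtin nor already recorded in the revision. The filtering can be sketched as follows (a hypothetical builtin set stands in for `wftypes.IsBuiltinWorkflowStepType`, and plain strings stand in for the definition objects):

```go
package main

import "fmt"

// builtinSteps is an assumed stand-in for wftypes.IsBuiltinWorkflowStepType;
// the real check covers the full builtin catalog.
var builtinSteps = map[string]bool{
	"suspend":         true,
	"apply-component": true,
}

// stepTypesToBackfill returns the workflow step types whose definitions
// must be fetched: non-builtin types not already recorded, deduplicated.
func stepTypesToBackfill(stepTypes []string, recorded map[string]bool) []string {
	var missing []string
	seen := map[string]bool{}
	for _, t := range stepTypes {
		if builtinSteps[t] || recorded[t] || seen[t] {
			continue
		}
		seen[t] = true
		missing = append(missing, t)
	}
	return missing
}

func main() {
	got := stepTypesToBackfill(
		[]string{"apply-component", "apply-application", "apply-application"},
		map[string]bool{},
	)
	fmt.Println(got) // prints [apply-application]
}
```

This matches the test fixtures below, where `apply-component` is a builtin step while `apply-application` is a custom step whose definition must be looked up.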


@@ -22,6 +22,8 @@ import (
"reflect"
"strings"
common2 "github.com/oam-dev/kubevela/pkg/utils/common"
"github.com/google/go-cmp/cmp"
"github.com/crossplane/crossplane-runtime/pkg/test"
@@ -506,3 +508,202 @@ var _ = Describe("Test appFile parser", func() {
})
})
var _ = Describe("Test application parser", func() {
var app v1beta1.Application
var apprev v1beta1.ApplicationRevision
var wsd v1beta1.WorkflowStepDefinition
var expectedExceptAppfile *Appfile
var mockClient test.MockClient
BeforeEach(func() {
// prepare WorkflowStepDefinition
Expect(common2.ReadYamlToObject("testdata/backport-1-2/wsd.yaml", &wsd)).Should(BeNil())
// prepare verify data
expectedExceptAppfile = &Appfile{
Name: "backport-1-2-test-demo",
Workloads: []*Workload{
{
Name: "backport-1-2-test-demo",
Type: "webservice",
Params: map[string]interface{}{
"image": "nginx",
},
FullTemplate: &Template{
TemplateStr: `
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
selector:
matchLabels:
"app.oam.dev/component": context.name
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
cmd?: [...string]
}`,
},
Traits: []*Trait{
{
Name: "scaler",
Params: map[string]interface{}{
"replicas": float64(1),
},
Template: `
parameter: {
// +usage=Specify the number of workload
replicas: *1 | int
}
// +patchStrategy=retainKeys
patch: spec: replicas: parameter.replicas
`,
},
},
},
},
WorkflowSteps: []v1beta1.WorkflowStep{
{
Name: "apply",
Type: "apply-application",
},
},
}
// Create mock client
mockClient = test.MockClient{
MockGet: func(ctx context.Context, key client.ObjectKey, obj client.Object) error {
if strings.Contains(key.Name, "unknown") {
return &errors2.StatusError{ErrStatus: metav1.Status{Reason: "NotFound", Message: "not found"}}
}
switch o := obj.(type) {
case *v1beta1.ComponentDefinition:
wd, err := util.UnMarshalStringToComponentDefinition(componenetDefinition)
if err != nil {
return err
}
*o = *wd
case *v1beta1.TraitDefinition:
td, err := util.UnMarshalStringToTraitDefinition(traitDefinition)
if err != nil {
return err
}
*o = *td
case *v1beta1.WorkflowStepDefinition:
*o = wsd
case *v1beta1.ApplicationRevision:
*o = apprev
default:
// skip
}
return nil
},
}
})
When("with apply-application workflowStep", func() {
BeforeEach(func() {
// prepare application
Expect(common2.ReadYamlToObject("testdata/backport-1-2/app.yaml", &app)).Should(BeNil())
// prepare application revision
Expect(common2.ReadYamlToObject("testdata/backport-1-2/apprev1.yaml", &apprev)).Should(BeNil())
})
It("Test we can parse an application revision to an appFile 1", func() {
appfile, err := NewApplicationParser(&mockClient, dm, pd).GenerateAppFile(context.TODO(), &app)
Expect(err).ShouldNot(HaveOccurred())
Expect(equal(expectedExceptAppfile, appfile)).Should(BeTrue())
Expect(len(appfile.WorkflowSteps) > 0 &&
len(appfile.RelatedWorkflowStepDefinitions) == len(appfile.AppRevision.Spec.WorkflowStepDefinitions)).Should(BeTrue())
Expect(len(appfile.WorkflowSteps) > 0 && func() bool {
this := appfile.RelatedWorkflowStepDefinitions
that := appfile.AppRevision.Spec.WorkflowStepDefinitions
for i, w := range this {
thatW := that[i]
if !reflect.DeepEqual(*w, thatW) {
fmt.Printf("appfile wsd: %s, apprev wsd: %s\n", (*w).Name, thatW.Name)
return false
}
}
return true
}()).Should(BeTrue())
})
})
When("with apply-application and apply-component built-in workflowStep", func() {
BeforeEach(func() {
// prepare application
Expect(common2.ReadYamlToObject("testdata/backport-1-2/app.yaml", &app)).Should(BeNil())
// prepare application revision
Expect(common2.ReadYamlToObject("testdata/backport-1-2/apprev2.yaml", &apprev)).Should(BeNil())
})
It("Test we can parse an application revision to an appFile 2", func() {
appfile, err := NewApplicationParser(&mockClient, dm, pd).GenerateAppFile(context.TODO(), &app)
Expect(err).ShouldNot(HaveOccurred())
Expect(equal(expectedExceptAppfile, appfile)).Should(BeTrue())
Expect(len(appfile.WorkflowSteps) > 0 &&
len(appfile.RelatedWorkflowStepDefinitions) == len(appfile.AppRevision.Spec.WorkflowStepDefinitions)).Should(BeTrue())
Expect(len(appfile.WorkflowSteps) > 0 && func() bool {
this := appfile.RelatedWorkflowStepDefinitions
that := appfile.AppRevision.Spec.WorkflowStepDefinitions
for i, w := range this {
thatW := that[i]
if !reflect.DeepEqual(*w, thatW) {
fmt.Printf("appfile wsd: %s, apprev wsd: %s\n", (*w).Name, thatW.Name)
return false
}
}
return true
}()).Should(BeTrue())
})
})
When("with unknown workflowStep", func() {
BeforeEach(func() {
// prepare application
Expect(common2.ReadYamlToObject("testdata/backport-1-2/app.yaml", &app)).Should(BeNil())
// prepare application revision
Expect(common2.ReadYamlToObject("testdata/backport-1-2/apprev3.yaml", &apprev)).Should(BeNil())
})
It("Test we can parse an application revision to an appFile 3", func() {
_, err := NewApplicationParser(&mockClient, dm, pd).GenerateAppFile(context.TODO(), &app)
Expect(err).Should(HaveOccurred())
Expect(err.Error() == "failed to get workflow step definition apply-application-unknown: not found").Should(BeTrue())
})
})
})


@@ -0,0 +1,26 @@
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
annotations:
app.oam.dev/publishVersion: workflow-default-123456
name: backport-1-2-test-demo
namespace: default
spec:
components:
- name: backport-1-2-test-demo
properties:
image: nginx
traits:
- properties:
replicas: 1
type: scaler
type: webservice
workflow:
steps:
- name: apply
type: apply-application
status:
latestRevision:
name: backport-1-2-test-demo-v1
revision: 1
revisionHash: 38ddf4e721073703


@@ -0,0 +1,271 @@
apiVersion: core.oam.dev/v1beta1
kind: ApplicationRevision
metadata:
annotations:
app.oam.dev/publishVersion: workflow-default-123456
name: backport-1-2-test-demo-v1
namespace: default
spec:
application:
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
annotations:
app.oam.dev/publishVersion: workflow-default-123456
name: backport-1-2-test-demo
namespace: default
spec:
components:
- name: backport-1-2-test-demo
properties:
image: nginx
traits:
- properties:
replicas: 1
type: scaler
type: webservice
workflow:
steps:
- name: apply
type: apply-application
status: {}
componentDefinitions:
webservice:
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
annotations:
definition.oam.dev/description: Describes long-running, scalable, containerized
services that have a stable network endpoint to receive external network
traffic from customers.
meta.helm.sh/release-name: kubevela
meta.helm.sh/release-namespace: vela-system
labels:
app.kubernetes.io/managed-by: Helm
name: webservice
namespace: vela-system
spec:
schematic:
cue:
template: "import (\n\t\"strconv\"\n)\n\nmountsArray: {\n\tpvc: *[\n\t\tfor
v in parameter.volumeMounts.pvc {\n\t\t\t{\n\t\t\t\tmountPath: v.mountPath\n\t\t\t\tname:
\ v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\tconfigMap: *[\n\t\t\tfor
v in parameter.volumeMounts.configMap {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\tsecret:
*[\n\t\tfor v in parameter.volumeMounts.secret {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\temptyDir:
*[\n\t\t\tfor v in parameter.volumeMounts.emptyDir {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\thostPath:
*[\n\t\t\tfor v in parameter.volumeMounts.hostPath {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n}\nvolumesArray:
{\n\tpvc: *[\n\t\tfor v in parameter.volumeMounts.pvc {\n\t\t\t{\n\t\t\t\tname:
v.name\n\t\t\t\tpersistentVolumeClaim: claimName: v.claimName\n\t\t\t}\n\t\t},\n\t]
| []\n\n\tconfigMap: *[\n\t\t\tfor v in parameter.volumeMounts.configMap
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\tconfigMap: {\n\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\tname: v.cmName\n\t\t\t\t\tif v.items
!= _|_ {\n\t\t\t\t\t\titems: v.items\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]
| []\n\n\tsecret: *[\n\t\tfor v in parameter.volumeMounts.secret {\n\t\t\t{\n\t\t\t\tname:
v.name\n\t\t\t\tsecret: {\n\t\t\t\t\tdefaultMode: v.defaultMode\n\t\t\t\t\tsecretName:
\ v.secretName\n\t\t\t\t\tif v.items != _|_ {\n\t\t\t\t\t\titems: v.items\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]
| []\n\n\temptyDir: *[\n\t\t\tfor v in parameter.volumeMounts.emptyDir
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\temptyDir: medium: v.medium\n\t\t\t}\n\t\t},\n\t]
| []\n\n\thostPath: *[\n\t\t\tfor v in parameter.volumeMounts.hostPath
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\thostPath: path: v.path\n\t\t\t}\n\t\t},\n\t]
| []\n}\noutput: {\n\tapiVersion: \"apps/v1\"\n\tkind: \"Deployment\"\n\tspec:
{\n\t\tselector: matchLabels: \"app.oam.dev/component\": context.name\n\n\t\ttemplate:
{\n\t\t\tmetadata: {\n\t\t\t\tlabels: {\n\t\t\t\t\tif parameter.labels
!= _|_ {\n\t\t\t\t\t\tparameter.labels\n\t\t\t\t\t}\n\t\t\t\t\tif parameter.addRevisionLabel
{\n\t\t\t\t\t\t\"app.oam.dev/revision\": context.revision\n\t\t\t\t\t}\n\t\t\t\t\t\"app.oam.dev/component\":
context.name\n\t\t\t\t}\n\t\t\t\tif parameter.annotations != _|_ {\n\t\t\t\t\tannotations:
parameter.annotations\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tspec: {\n\t\t\t\tcontainers:
[{\n\t\t\t\t\tname: context.name\n\t\t\t\t\timage: parameter.image\n\t\t\t\t\tif
parameter[\"port\"] != _|_ && parameter[\"ports\"] == _|_ {\n\t\t\t\t\t\tports:
[{\n\t\t\t\t\t\t\tcontainerPort: parameter.port\n\t\t\t\t\t\t}]\n\t\t\t\t\t}\n\t\t\t\t\tif
parameter[\"ports\"] != _|_ {\n\t\t\t\t\t\tports: [ for v in parameter.ports
{\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tcontainerPort: v.port\n\t\t\t\t\t\t\t\tprotocol:
\ v.protocol\n\t\t\t\t\t\t\t\tif v.name != _|_ {\n\t\t\t\t\t\t\t\t\tname:
v.name\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif v.name == _|_ {\n\t\t\t\t\t\t\t\t\tname:
\"port-\" + strconv.FormatInt(v.port, 10)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}}]\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"imagePullPolicy\"] != _|_ {\n\t\t\t\t\t\timagePullPolicy:
parameter.imagePullPolicy\n\t\t\t\t\t}\n\n\t\t\t\t\tif parameter[\"cmd\"]
!= _|_ {\n\t\t\t\t\t\tcommand: parameter.cmd\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"env\"] != _|_ {\n\t\t\t\t\t\tenv: parameter.env\n\t\t\t\t\t}\n\n\t\t\t\t\tif
context[\"config\"] != _|_ {\n\t\t\t\t\t\tenv: context.config\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"cpu\"] != _|_ {\n\t\t\t\t\t\tresources: {\n\t\t\t\t\t\t\tlimits:
cpu: parameter.cpu\n\t\t\t\t\t\t\trequests: cpu: parameter.cpu\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"memory\"] != _|_ {\n\t\t\t\t\t\tresources: {\n\t\t\t\t\t\t\tlimits:
memory: parameter.memory\n\t\t\t\t\t\t\trequests: memory: parameter.memory\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"volumes\"] != _|_ && parameter[\"volumeMounts\"] == _|_
{\n\t\t\t\t\t\tvolumeMounts: [ for v in parameter.volumes {\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tmountPath:
v.mountPath\n\t\t\t\t\t\t\t\tname: v.name\n\t\t\t\t\t\t\t}}]\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"volumeMounts\"] != _|_ {\n\t\t\t\t\t\tvolumeMounts: mountsArray.pvc
+ mountsArray.configMap + mountsArray.secret + mountsArray.emptyDir
+ mountsArray.hostPath\n\t\t\t\t\t}\n\n\t\t\t\t\tif parameter[\"livenessProbe\"]
!= _|_ {\n\t\t\t\t\t\tlivenessProbe: parameter.livenessProbe\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"readinessProbe\"] != _|_ {\n\t\t\t\t\t\treadinessProbe:
parameter.readinessProbe\n\t\t\t\t\t}\n\n\t\t\t\t}]\n\n\t\t\t\tif parameter[\"hostAliases\"]
!= _|_ {\n\t\t\t\t\t// +patchKey=ip\n\t\t\t\t\thostAliases: parameter.hostAliases\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"imagePullSecrets\"] != _|_ {\n\t\t\t\t\timagePullSecrets:
[ for v in parameter.imagePullSecrets {\n\t\t\t\t\t\tname: v\n\t\t\t\t\t},\n\t\t\t\t\t]\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"volumes\"] != _|_ && parameter[\"volumeMounts\"] == _|_
{\n\t\t\t\t\tvolumes: [ for v in parameter.volumes {\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tname:
v.name\n\t\t\t\t\t\t\tif v.type == \"pvc\" {\n\t\t\t\t\t\t\t\tpersistentVolumeClaim:
claimName: v.claimName\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif v.type ==
\"configMap\" {\n\t\t\t\t\t\t\t\tconfigMap: {\n\t\t\t\t\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\t\t\t\t\tname: v.cmName\n\t\t\t\t\t\t\t\t\tif
v.items != _|_ {\n\t\t\t\t\t\t\t\t\t\titems: v.items\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif
v.type == \"secret\" {\n\t\t\t\t\t\t\t\tsecret: {\n\t\t\t\t\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\t\t\t\t\tsecretName: v.secretName\n\t\t\t\t\t\t\t\t\tif
v.items != _|_ {\n\t\t\t\t\t\t\t\t\t\titems: v.items\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif
v.type == \"emptyDir\" {\n\t\t\t\t\t\t\t\temptyDir: medium: v.medium\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}]\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"volumeMounts\"] != _|_ {\n\t\t\t\t\tvolumes: volumesArray.pvc
+ volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir
+ volumesArray.hostPath\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\nexposePorts:
[\n\tfor v in parameter.ports if v.expose == true {\n\t\tport: v.port\n\t\ttargetPort:
v.port\n\t\tif v.name != _|_ {\n\t\t\tname: v.name\n\t\t}\n\t\tif v.name
== _|_ {\n\t\t\tname: \"port-\" + strconv.FormatInt(v.port, 10)\n\t\t}\n\t},\n]\noutputs:
{\n\tif len(exposePorts) != 0 {\n\t\twebserviceExpose: {\n\t\t\tapiVersion:
\"v1\"\n\t\t\tkind: \"Service\"\n\t\t\tmetadata: name: context.name\n\t\t\tspec:
{\n\t\t\t\tselector: \"app.oam.dev/component\": context.name\n\t\t\t\tports:
exposePorts\n\t\t\t\ttype: parameter.exposeType\n\t\t\t}\n\t\t}\n\t}\n}\nparameter:
{\n\t// +usage=Specify the labels in the workload\n\tlabels?: [string]:
string\n\n\t// +usage=Specify the annotations in the workload\n\tannotations?:
[string]: string\n\n\t// +usage=Which image would you like to use for
your service\n\t// +short=i\n\timage: string\n\n\t// +usage=Specify
image pull policy for your service\n\timagePullPolicy?: \"Always\" |
\"Never\" | \"IfNotPresent\"\n\n\t// +usage=Specify image pull secrets
for your service\n\timagePullSecrets?: [...string]\n\n\t// +ignore\n\t//
+usage=Deprecated field, please use ports instead\n\t// +short=p\n\tport?:
int\n\n\t// +usage=Which ports do you want customer traffic sent to,
defaults to 80\n\tports?: [...{\n\t\t// +usage=Number of port to expose
on the pod's IP address\n\t\tport: int\n\t\t// +usage=Name of the port\n\t\tname?:
string\n\t\t// +usage=Protocol for port. Must be UDP, TCP, or SCTP\n\t\tprotocol:
*\"TCP\" | \"UDP\" | \"SCTP\"\n\t\t// +usage=Specify if the port should
be exposed\n\t\texpose: *false | bool\n\t}]\n\n\t// +ignore\n\t// +usage=Specify
what kind of Service you want. options: \"ClusterIP\", \"NodePort\",
\"LoadBalancer\", \"ExternalName\"\n\texposeType: *\"ClusterIP\" | \"NodePort\"
| \"LoadBalancer\" | \"ExternalName\"\n\n\t// +ignore\n\t// +usage=If
addRevisionLabel is true, the revision label will be added to the underlying
pods\n\taddRevisionLabel: *false | bool\n\n\t// +usage=Commands to run
in the container\n\tcmd?: [...string]\n\n\t// +usage=Define arguments
by using environment variables\n\tenv?: [...{\n\t\t// +usage=Environment
variable name\n\t\tname: string\n\t\t// +usage=The value of the environment
variable\n\t\tvalue?: string\n\t\t// +usage=Specifies a source the value
of this var should come from\n\t\tvalueFrom?: {\n\t\t\t// +usage=Selects
a key of a secret in the pod's namespace\n\t\t\tsecretKeyRef?: {\n\t\t\t\t//
+usage=The name of the secret in the pod's namespace to select from\n\t\t\t\tname:
string\n\t\t\t\t// +usage=The key of the secret to select from. Must
be a valid secret key\n\t\t\t\tkey: string\n\t\t\t}\n\t\t\t// +usage=Selects
a key of a config map in the pod's namespace\n\t\t\tconfigMapKeyRef?:
{\n\t\t\t\t// +usage=The name of the config map in the pod's namespace
to select from\n\t\t\t\tname: string\n\t\t\t\t// +usage=The key of the
config map to select from. Must be a valid secret key\n\t\t\t\tkey:
string\n\t\t\t}\n\t\t}\n\t}]\n\n\t// +usage=Number of CPU units for
the service, like \n\tcpu?: string\n\n\t//
+usage=Specifies the attributes of the memory resource required for
the container.\n\tmemory?: string\n\n\tvolumeMounts?: {\n\t\t// +usage=Mount
PVC type volume\n\t\tpvc?: [...{\n\t\t\tname: string\n\t\t\tmountPath:
string\n\t\t\t// +usage=The name of the PVC\n\t\t\tclaimName: string\n\t\t}]\n\t\t//
+usage=Mount ConfigMap type volume\n\t\tconfigMap?: [...{\n\t\t\tname:
\ string\n\t\t\tmountPath: string\n\t\t\tdefaultMode: *420 |
int\n\t\t\tcmName: string\n\t\t\titems?: [...{\n\t\t\t\tkey: string\n\t\t\t\tpath:
string\n\t\t\t\tmode: *511 | int\n\t\t\t}]\n\t\t}]\n\t\t// +usage=Mount
Secret type volume\n\t\tsecret?: [...{\n\t\t\tname: string\n\t\t\tmountPath:
\ string\n\t\t\tdefaultMode: *420 | int\n\t\t\tsecretName: string\n\t\t\titems?:
[...{\n\t\t\t\tkey: string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511
| int\n\t\t\t}]\n\t\t}]\n\t\t// +usage=Mount EmptyDir type volume\n\t\temptyDir?:
[...{\n\t\t\tname: string\n\t\t\tmountPath: string\n\t\t\tmedium:
\ *\"\" | \"Memory\"\n\t\t}]\n\t\t// +usage=Mount HostPath type volume\n\t\thostPath?:
[...{\n\t\t\tname: string\n\t\t\tmountPath: string\n\t\t\tpath:
\ string\n\t\t}]\n\t}\n\n\t// +usage=Deprecated field, use volumeMounts
instead.\n\tvolumes?: [...{\n\t\tname: string\n\t\tmountPath: string\n\t\t//
+usage=Specify volume type, options: \"pvc\",\"configMap\",\"secret\",\"emptyDir\"\n\t\ttype:
\"pvc\" | \"configMap\" | \"secret\" | \"emptyDir\"\n\t\tif type ==
\"pvc\" {\n\t\t\tclaimName: string\n\t\t}\n\t\tif type == \"configMap\"
{\n\t\t\tdefaultMode: *420 | int\n\t\t\tcmName: string\n\t\t\titems?:
[...{\n\t\t\t\tkey: string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511
| int\n\t\t\t}]\n\t\t}\n\t\tif type == \"secret\" {\n\t\t\tdefaultMode:
*420 | int\n\t\t\tsecretName: string\n\t\t\titems?: [...{\n\t\t\t\tkey:
\ string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511 | int\n\t\t\t}]\n\t\t}\n\t\tif
type == \"emptyDir\" {\n\t\t\tmedium: *\"\" | \"Memory\"\n\t\t}\n\t}]\n\n\t//
+usage=Instructions for assessing whether the container is alive.\n\tlivenessProbe?:
#HealthProbe\n\n\t// +usage=Instructions for assessing whether the container
is in a suitable state to serve traffic.\n\treadinessProbe?: #HealthProbe\n\n\t//
+usage=Specify the hostAliases to add\n\thostAliases?: [...{\n\t\tip:
string\n\t\thostnames: [...string]\n\t}]\n}\n#HealthProbe: {\n\n\t//
+usage=Instructions for assessing container health by executing a command.
Either this attribute or the httpGet attribute or the tcpSocket attribute
MUST be specified. This attribute is mutually exclusive with both the
httpGet attribute and the tcpSocket attribute.\n\texec?: {\n\t\t// +usage=A
command to be executed inside the container to assess its health. Each
space delimited token of the command is a separate array element. Commands
exiting 0 are considered to be successful probes, whilst all other exit
codes are considered failures.\n\t\tcommand: [...string]\n\t}\n\n\t//
+usage=Instructions for assessing container health by executing an HTTP
GET request. Either this attribute or the exec attribute or the tcpSocket
attribute MUST be specified. This attribute is mutually exclusive with
both the exec attribute and the tcpSocket attribute.\n\thttpGet?: {\n\t\t//
+usage=The endpoint, relative to the port, to which the HTTP GET request
should be directed.\n\t\tpath: string\n\t\t// +usage=The TCP socket
within the container to which the HTTP GET request should be directed.\n\t\tport:
int\n\t\thttpHeaders?: [...{\n\t\t\tname: string\n\t\t\tvalue: string\n\t\t}]\n\t}\n\n\t//
+usage=Instructions for assessing container health by probing a TCP
socket. Either this attribute or the exec attribute or the httpGet attribute
MUST be specified. This attribute is mutually exclusive with both the
exec attribute and the httpGet attribute.\n\ttcpSocket?: {\n\t\t// +usage=The
TCP socket within the container that should be probed to assess container
health.\n\t\tport: int\n\t}\n\n\t// +usage=Number of seconds after the
container is started before the first probe is initiated.\n\tinitialDelaySeconds:
*0 | int\n\n\t// +usage=How often, in seconds, to execute the probe.\n\tperiodSeconds:
*10 | int\n\n\t// +usage=Number of seconds after which the probe times
out.\n\ttimeoutSeconds: *1 | int\n\n\t// +usage=Minimum consecutive
successes for the probe to be considered successful after having failed.\n\tsuccessThreshold:
*1 | int\n\n\t// +usage=Number of consecutive failures required to determine
the container is not alive (liveness probe) or not ready (readiness
probe).\n\tfailureThreshold: *3 | int\n}\n"
status:
customStatus: "ready: {\n\treadyReplicas: *0 | int\n} & {\n\tif context.output.status.readyReplicas
!= _|_ {\n\t\treadyReplicas: context.output.status.readyReplicas\n\t}\n}\nmessage:
\"Ready:\\(ready.readyReplicas)/\\(context.output.spec.replicas)\""
healthPolicy: "ready: {\n\tupdatedReplicas: *0 | int\n\treadyReplicas:
\ *0 | int\n\treplicas: *0 | int\n\tobservedGeneration:
*0 | int\n} & {\n\tif context.output.status.updatedReplicas != _|_ {\n\t\tupdatedReplicas:
context.output.status.updatedReplicas\n\t}\n\tif context.output.status.readyReplicas
!= _|_ {\n\t\treadyReplicas: context.output.status.readyReplicas\n\t}\n\tif
context.output.status.replicas != _|_ {\n\t\treplicas: context.output.status.replicas\n\t}\n\tif
context.output.status.observedGeneration != _|_ {\n\t\tobservedGeneration:
context.output.status.observedGeneration\n\t}\n}\nisHealth: (context.output.spec.replicas
== ready.readyReplicas) && (context.output.spec.replicas == ready.updatedReplicas)
&& (context.output.spec.replicas == ready.replicas) && (ready.observedGeneration
== context.output.metadata.generation || ready.observedGeneration > context.output.metadata.generation)"
workload:
definition:
apiVersion: apps/v1
kind: Deployment
type: deployments.apps
status: {}
traitDefinitions:
scaler:
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Manually scale K8s pod for your workload
which follows the pod spec in path 'spec.template'.
meta.helm.sh/release-name: kubevela
meta.helm.sh/release-namespace: vela-system
labels:
app.kubernetes.io/managed-by: Helm
name: scaler
namespace: vela-system
spec:
appliesToWorkloads:
- '*'
definitionRef:
name: ""
schematic:
cue:
template: "parameter: {\n\t// +usage=Specify the number of workload\n\treplicas:
*1 | int\n}\n// +patchStrategy=retainKeys\npatch: spec: replicas: parameter.replicas\n"
status: {}
status: {}


@@ -0,0 +1,277 @@
apiVersion: core.oam.dev/v1beta1
kind: ApplicationRevision
metadata:
annotations:
app.oam.dev/publishVersion: workflow-default-123456
name: backport-1-2-test-demo-v1
namespace: default
spec:
application:
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
annotations:
app.oam.dev/publishVersion: workflow-default-123456
name: backport-1-2-test-demo
namespace: default
spec:
components:
- name: backport-1-2-test-demo
properties:
image: nginx
traits:
- properties:
replicas: 1
type: scaler
type: webservice
workflow:
steps:
- name: apply-component
type: apply-component
properties:
name: backport-1-2-test-demo
- name: apply1
type: apply-application
- name: apply2
type: apply-application
status: {}
componentDefinitions:
webservice:
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
annotations:
definition.oam.dev/description: Describes long-running, scalable, containerized
services that have a stable network endpoint to receive external network
traffic from customers.
meta.helm.sh/release-name: kubevela
meta.helm.sh/release-namespace: vela-system
labels:
app.kubernetes.io/managed-by: Helm
name: webservice
namespace: vela-system
spec:
schematic:
cue:
template: "import (\n\t\"strconv\"\n)\n\nmountsArray: {\n\tpvc: *[\n\t\tfor
v in parameter.volumeMounts.pvc {\n\t\t\t{\n\t\t\t\tmountPath: v.mountPath\n\t\t\t\tname:
\ v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\tconfigMap: *[\n\t\t\tfor
v in parameter.volumeMounts.configMap {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\tsecret:
*[\n\t\tfor v in parameter.volumeMounts.secret {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\temptyDir:
*[\n\t\t\tfor v in parameter.volumeMounts.emptyDir {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\thostPath:
*[\n\t\t\tfor v in parameter.volumeMounts.hostPath {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n}\nvolumesArray:
{\n\tpvc: *[\n\t\tfor v in parameter.volumeMounts.pvc {\n\t\t\t{\n\t\t\t\tname:
v.name\n\t\t\t\tpersistentVolumeClaim: claimName: v.claimName\n\t\t\t}\n\t\t},\n\t]
| []\n\n\tconfigMap: *[\n\t\t\tfor v in parameter.volumeMounts.configMap
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\tconfigMap: {\n\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\tname: v.cmName\n\t\t\t\t\tif v.items
!= _|_ {\n\t\t\t\t\t\titems: v.items\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]
| []\n\n\tsecret: *[\n\t\tfor v in parameter.volumeMounts.secret {\n\t\t\t{\n\t\t\t\tname:
v.name\n\t\t\t\tsecret: {\n\t\t\t\t\tdefaultMode: v.defaultMode\n\t\t\t\t\tsecretName:
\ v.secretName\n\t\t\t\t\tif v.items != _|_ {\n\t\t\t\t\t\titems: v.items\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]
| []\n\n\temptyDir: *[\n\t\t\tfor v in parameter.volumeMounts.emptyDir
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\temptyDir: medium: v.medium\n\t\t\t}\n\t\t},\n\t]
| []\n\n\thostPath: *[\n\t\t\tfor v in parameter.volumeMounts.hostPath
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\thostPath: path: v.path\n\t\t\t}\n\t\t},\n\t]
| []\n}\noutput: {\n\tapiVersion: \"apps/v1\"\n\tkind: \"Deployment\"\n\tspec:
{\n\t\tselector: matchLabels: \"app.oam.dev/component\": context.name\n\n\t\ttemplate:
{\n\t\t\tmetadata: {\n\t\t\t\tlabels: {\n\t\t\t\t\tif parameter.labels
!= _|_ {\n\t\t\t\t\t\tparameter.labels\n\t\t\t\t\t}\n\t\t\t\t\tif parameter.addRevisionLabel
{\n\t\t\t\t\t\t\"app.oam.dev/revision\": context.revision\n\t\t\t\t\t}\n\t\t\t\t\t\"app.oam.dev/component\":
context.name\n\t\t\t\t}\n\t\t\t\tif parameter.annotations != _|_ {\n\t\t\t\t\tannotations:
parameter.annotations\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tspec: {\n\t\t\t\tcontainers:
[{\n\t\t\t\t\tname: context.name\n\t\t\t\t\timage: parameter.image\n\t\t\t\t\tif
parameter[\"port\"] != _|_ && parameter[\"ports\"] == _|_ {\n\t\t\t\t\t\tports:
[{\n\t\t\t\t\t\t\tcontainerPort: parameter.port\n\t\t\t\t\t\t}]\n\t\t\t\t\t}\n\t\t\t\t\tif
parameter[\"ports\"] != _|_ {\n\t\t\t\t\t\tports: [ for v in parameter.ports
{\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tcontainerPort: v.port\n\t\t\t\t\t\t\t\tprotocol:
\ v.protocol\n\t\t\t\t\t\t\t\tif v.name != _|_ {\n\t\t\t\t\t\t\t\t\tname:
v.name\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif v.name == _|_ {\n\t\t\t\t\t\t\t\t\tname:
\"port-\" + strconv.FormatInt(v.port, 10)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}}]\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"imagePullPolicy\"] != _|_ {\n\t\t\t\t\t\timagePullPolicy:
parameter.imagePullPolicy\n\t\t\t\t\t}\n\n\t\t\t\t\tif parameter[\"cmd\"]
!= _|_ {\n\t\t\t\t\t\tcommand: parameter.cmd\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"env\"] != _|_ {\n\t\t\t\t\t\tenv: parameter.env\n\t\t\t\t\t}\n\n\t\t\t\t\tif
context[\"config\"] != _|_ {\n\t\t\t\t\t\tenv: context.config\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"cpu\"] != _|_ {\n\t\t\t\t\t\tresources: {\n\t\t\t\t\t\t\tlimits:
cpu: parameter.cpu\n\t\t\t\t\t\t\trequests: cpu: parameter.cpu\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"memory\"] != _|_ {\n\t\t\t\t\t\tresources: {\n\t\t\t\t\t\t\tlimits:
memory: parameter.memory\n\t\t\t\t\t\t\trequests: memory: parameter.memory\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"volumes\"] != _|_ && parameter[\"volumeMounts\"] == _|_
{\n\t\t\t\t\t\tvolumeMounts: [ for v in parameter.volumes {\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tmountPath:
v.mountPath\n\t\t\t\t\t\t\t\tname: v.name\n\t\t\t\t\t\t\t}}]\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"volumeMounts\"] != _|_ {\n\t\t\t\t\t\tvolumeMounts: mountsArray.pvc
+ mountsArray.configMap + mountsArray.secret + mountsArray.emptyDir
+ mountsArray.hostPath\n\t\t\t\t\t}\n\n\t\t\t\t\tif parameter[\"livenessProbe\"]
!= _|_ {\n\t\t\t\t\t\tlivenessProbe: parameter.livenessProbe\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"readinessProbe\"] != _|_ {\n\t\t\t\t\t\treadinessProbe:
parameter.readinessProbe\n\t\t\t\t\t}\n\n\t\t\t\t}]\n\n\t\t\t\tif parameter[\"hostAliases\"]
!= _|_ {\n\t\t\t\t\t// +patchKey=ip\n\t\t\t\t\thostAliases: parameter.hostAliases\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"imagePullSecrets\"] != _|_ {\n\t\t\t\t\timagePullSecrets:
[ for v in parameter.imagePullSecrets {\n\t\t\t\t\t\tname: v\n\t\t\t\t\t},\n\t\t\t\t\t]\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"volumes\"] != _|_ && parameter[\"volumeMounts\"] == _|_
{\n\t\t\t\t\tvolumes: [ for v in parameter.volumes {\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tname:
v.name\n\t\t\t\t\t\t\tif v.type == \"pvc\" {\n\t\t\t\t\t\t\t\tpersistentVolumeClaim:
claimName: v.claimName\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif v.type ==
\"configMap\" {\n\t\t\t\t\t\t\t\tconfigMap: {\n\t\t\t\t\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\t\t\t\t\tname: v.cmName\n\t\t\t\t\t\t\t\t\tif
v.items != _|_ {\n\t\t\t\t\t\t\t\t\t\titems: v.items\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif
v.type == \"secret\" {\n\t\t\t\t\t\t\t\tsecret: {\n\t\t\t\t\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\t\t\t\t\tsecretName: v.secretName\n\t\t\t\t\t\t\t\t\tif
v.items != _|_ {\n\t\t\t\t\t\t\t\t\t\titems: v.items\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif
v.type == \"emptyDir\" {\n\t\t\t\t\t\t\t\temptyDir: medium: v.medium\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}]\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"volumeMounts\"] != _|_ {\n\t\t\t\t\tvolumes: volumesArray.pvc
+ volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir
+ volumesArray.hostPath\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\nexposePorts:
[\n\tfor v in parameter.ports if v.expose == true {\n\t\tport: v.port\n\t\ttargetPort:
v.port\n\t\tif v.name != _|_ {\n\t\t\tname: v.name\n\t\t}\n\t\tif v.name
== _|_ {\n\t\t\tname: \"port-\" + strconv.FormatInt(v.port, 10)\n\t\t}\n\t},\n]\noutputs:
{\n\tif len(exposePorts) != 0 {\n\t\twebserviceExpose: {\n\t\t\tapiVersion:
\"v1\"\n\t\t\tkind: \"Service\"\n\t\t\tmetadata: name: context.name\n\t\t\tspec:
{\n\t\t\t\tselector: \"app.oam.dev/component\": context.name\n\t\t\t\tports:
exposePorts\n\t\t\t\ttype: parameter.exposeType\n\t\t\t}\n\t\t}\n\t}\n}\nparameter:
{\n\t// +usage=Specify the labels in the workload\n\tlabels?: [string]:
string\n\n\t// +usage=Specify the annotations in the workload\n\tannotations?:
[string]: string\n\n\t// +usage=Which image would you like to use for
your service\n\t// +short=i\n\timage: string\n\n\t// +usage=Specify
image pull policy for your service\n\timagePullPolicy?: \"Always\" |
\"Never\" | \"IfNotPresent\"\n\n\t// +usage=Specify image pull secrets
for your service\n\timagePullSecrets?: [...string]\n\n\t// +ignore\n\t//
+usage=Deprecated field, please use ports instead\n\t// +short=p\n\tport?:
int\n\n\t// +usage=Which ports do you want customer traffic sent to,
defaults to 80\n\tports?: [...{\n\t\t// +usage=Number of port to expose
on the pod's IP address\n\t\tport: int\n\t\t// +usage=Name of the port\n\t\tname?:
string\n\t\t// +usage=Protocol for port. Must be UDP, TCP, or SCTP\n\t\tprotocol:
*\"TCP\" | \"UDP\" | \"SCTP\"\n\t\t// +usage=Specify if the port should
be exposed\n\t\texpose: *false | bool\n\t}]\n\n\t// +ignore\n\t// +usage=Specify
what kind of Service you want. options: \"ClusterIP\", \"NodePort\",
\"LoadBalancer\", \"ExternalName\"\n\texposeType: *\"ClusterIP\" | \"NodePort\"
| \"LoadBalancer\" | \"ExternalName\"\n\n\t// +ignore\n\t// +usage=If
addRevisionLabel is true, the revision label will be added to the underlying
pods\n\taddRevisionLabel: *false | bool\n\n\t// +usage=Commands to run
in the container\n\tcmd?: [...string]\n\n\t// +usage=Define arguments
by using environment variables\n\tenv?: [...{\n\t\t// +usage=Environment
variable name\n\t\tname: string\n\t\t// +usage=The value of the environment
variable\n\t\tvalue?: string\n\t\t// +usage=Specifies a source the value
of this var should come from\n\t\tvalueFrom?: {\n\t\t\t// +usage=Selects
a key of a secret in the pod's namespace\n\t\t\tsecretKeyRef?: {\n\t\t\t\t//
+usage=The name of the secret in the pod's namespace to select from\n\t\t\t\tname:
string\n\t\t\t\t// +usage=The key of the secret to select from. Must
be a valid secret key\n\t\t\t\tkey: string\n\t\t\t}\n\t\t\t// +usage=Selects
a key of a config map in the pod's namespace\n\t\t\tconfigMapKeyRef?:
{\n\t\t\t\t// +usage=The name of the config map in the pod's namespace
to select from\n\t\t\t\tname: string\n\t\t\t\t// +usage=The key of the
config map to select from. Must be a valid secret key\n\t\t\t\tkey:
string\n\t\t\t}\n\t\t}\n\t}]\n\n\t// +usage=Number of CPU units for
the service, like \n\tcpu?: string\n\n\t//
+usage=Specifies the attributes of the memory resource required for
the container.\n\tmemory?: string\n\n\tvolumeMounts?: {\n\t\t// +usage=Mount
PVC type volume\n\t\tpvc?: [...{\n\t\t\tname: string\n\t\t\tmountPath:
string\n\t\t\t// +usage=The name of the PVC\n\t\t\tclaimName: string\n\t\t}]\n\t\t//
+usage=Mount ConfigMap type volume\n\t\tconfigMap?: [...{\n\t\t\tname:
\ string\n\t\t\tmountPath: string\n\t\t\tdefaultMode: *420 |
int\n\t\t\tcmName: string\n\t\t\titems?: [...{\n\t\t\t\tkey: string\n\t\t\t\tpath:
string\n\t\t\t\tmode: *511 | int\n\t\t\t}]\n\t\t}]\n\t\t// +usage=Mount
Secret type volume\n\t\tsecret?: [...{\n\t\t\tname: string\n\t\t\tmountPath:
\ string\n\t\t\tdefaultMode: *420 | int\n\t\t\tsecretName: string\n\t\t\titems?:
[...{\n\t\t\t\tkey: string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511
| int\n\t\t\t}]\n\t\t}]\n\t\t// +usage=Mount EmptyDir type volume\n\t\temptyDir?:
[...{\n\t\t\tname: string\n\t\t\tmountPath: string\n\t\t\tmedium:
\ *\"\" | \"Memory\"\n\t\t}]\n\t\t// +usage=Mount HostPath type volume\n\t\thostPath?:
[...{\n\t\t\tname: string\n\t\t\tmountPath: string\n\t\t\tpath:
\ string\n\t\t}]\n\t}\n\n\t// +usage=Deprecated field, use volumeMounts
instead.\n\tvolumes?: [...{\n\t\tname: string\n\t\tmountPath: string\n\t\t//
+usage=Specify volume type, options: \"pvc\",\"configMap\",\"secret\",\"emptyDir\"\n\t\ttype:
\"pvc\" | \"configMap\" | \"secret\" | \"emptyDir\"\n\t\tif type ==
\"pvc\" {\n\t\t\tclaimName: string\n\t\t}\n\t\tif type == \"configMap\"
{\n\t\t\tdefaultMode: *420 | int\n\t\t\tcmName: string\n\t\t\titems?:
[...{\n\t\t\t\tkey: string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511
| int\n\t\t\t}]\n\t\t}\n\t\tif type == \"secret\" {\n\t\t\tdefaultMode:
*420 | int\n\t\t\tsecretName: string\n\t\t\titems?: [...{\n\t\t\t\tkey:
\ string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511 | int\n\t\t\t}]\n\t\t}\n\t\tif
type == \"emptyDir\" {\n\t\t\tmedium: *\"\" | \"Memory\"\n\t\t}\n\t}]\n\n\t//
+usage=Instructions for assessing whether the container is alive.\n\tlivenessProbe?:
#HealthProbe\n\n\t// +usage=Instructions for assessing whether the container
is in a suitable state to serve traffic.\n\treadinessProbe?: #HealthProbe\n\n\t//
+usage=Specify the hostAliases to add\n\thostAliases?: [...{\n\t\tip:
string\n\t\thostnames: [...string]\n\t}]\n}\n#HealthProbe: {\n\n\t//
+usage=Instructions for assessing container health by executing a command.
Either this attribute or the httpGet attribute or the tcpSocket attribute
MUST be specified. This attribute is mutually exclusive with both the
httpGet attribute and the tcpSocket attribute.\n\texec?: {\n\t\t// +usage=A
command to be executed inside the container to assess its health. Each
space delimited token of the command is a separate array element. Commands
exiting 0 are considered to be successful probes, whilst all other exit
codes are considered failures.\n\t\tcommand: [...string]\n\t}\n\n\t//
+usage=Instructions for assessing container health by executing an HTTP
GET request. Either this attribute or the exec attribute or the tcpSocket
attribute MUST be specified. This attribute is mutually exclusive with
both the exec attribute and the tcpSocket attribute.\n\thttpGet?: {\n\t\t//
+usage=The endpoint, relative to the port, to which the HTTP GET request
should be directed.\n\t\tpath: string\n\t\t// +usage=The TCP socket
within the container to which the HTTP GET request should be directed.\n\t\tport:
int\n\t\thttpHeaders?: [...{\n\t\t\tname: string\n\t\t\tvalue: string\n\t\t}]\n\t}\n\n\t//
+usage=Instructions for assessing container health by probing a TCP
socket. Either this attribute or the exec attribute or the httpGet attribute
MUST be specified. This attribute is mutually exclusive with both the
exec attribute and the httpGet attribute.\n\ttcpSocket?: {\n\t\t// +usage=The
TCP socket within the container that should be probed to assess container
health.\n\t\tport: int\n\t}\n\n\t// +usage=Number of seconds after the
container is started before the first probe is initiated.\n\tinitialDelaySeconds:
*0 | int\n\n\t// +usage=How often, in seconds, to execute the probe.\n\tperiodSeconds:
*10 | int\n\n\t// +usage=Number of seconds after which the probe times
out.\n\ttimeoutSeconds: *1 | int\n\n\t// +usage=Minimum consecutive
successes for the probe to be considered successful after having failed.\n\tsuccessThreshold:
*1 | int\n\n\t// +usage=Number of consecutive failures required to determine
the container is not alive (liveness probe) or not ready (readiness
probe).\n\tfailureThreshold: *3 | int\n}\n"
status:
customStatus: "ready: {\n\treadyReplicas: *0 | int\n} & {\n\tif context.output.status.readyReplicas
!= _|_ {\n\t\treadyReplicas: context.output.status.readyReplicas\n\t}\n}\nmessage:
\"Ready:\\(ready.readyReplicas)/\\(context.output.spec.replicas)\""
healthPolicy: "ready: {\n\tupdatedReplicas: *0 | int\n\treadyReplicas:
\ *0 | int\n\treplicas: *0 | int\n\tobservedGeneration:
*0 | int\n} & {\n\tif context.output.status.updatedReplicas != _|_ {\n\t\tupdatedReplicas:
context.output.status.updatedReplicas\n\t}\n\tif context.output.status.readyReplicas
!= _|_ {\n\t\treadyReplicas: context.output.status.readyReplicas\n\t}\n\tif
context.output.status.replicas != _|_ {\n\t\treplicas: context.output.status.replicas\n\t}\n\tif
context.output.status.observedGeneration != _|_ {\n\t\tobservedGeneration:
context.output.status.observedGeneration\n\t}\n}\nisHealth: (context.output.spec.replicas
== ready.readyReplicas) && (context.output.spec.replicas == ready.updatedReplicas)
&& (context.output.spec.replicas == ready.replicas) && (ready.observedGeneration
== context.output.metadata.generation || ready.observedGeneration > context.output.metadata.generation)"
workload:
definition:
apiVersion: apps/v1
kind: Deployment
type: deployments.apps
status: {}
traitDefinitions:
scaler:
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Manually scale K8s pod for your workload
which follows the pod spec in path 'spec.template'.
meta.helm.sh/release-name: kubevela
meta.helm.sh/release-namespace: vela-system
labels:
app.kubernetes.io/managed-by: Helm
name: scaler
namespace: vela-system
spec:
appliesToWorkloads:
- '*'
definitionRef:
name: ""
schematic:
cue:
template: "parameter: {\n\t// +usage=Specify the number of workload\n\treplicas:
*1 | int\n}\n// +patchStrategy=retainKeys\npatch: spec: replicas: parameter.replicas\n"
status: {}
status: {}


@@ -0,0 +1,271 @@
apiVersion: core.oam.dev/v1beta1
kind: ApplicationRevision
metadata:
annotations:
app.oam.dev/publishVersion: workflow-default-123456
name: backport-1-2-test-demo-v1
namespace: default
spec:
application:
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
annotations:
app.oam.dev/publishVersion: workflow-default-123456
name: backport-1-2-test-demo
namespace: default
spec:
components:
- name: backport-1-2-test-demo
properties:
image: nginx
traits:
- properties:
replicas: 1
type: scaler
type: webservice
workflow:
steps:
- name: apply
type: apply-application-unknown
status: {}
componentDefinitions:
webservice:
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
annotations:
definition.oam.dev/description: Describes long-running, scalable, containerized
services that have a stable network endpoint to receive external network
traffic from customers.
meta.helm.sh/release-name: kubevela
meta.helm.sh/release-namespace: vela-system
labels:
app.kubernetes.io/managed-by: Helm
name: webservice
namespace: vela-system
spec:
schematic:
cue:
template: "import (\n\t\"strconv\"\n)\n\nmountsArray: {\n\tpvc: *[\n\t\tfor
v in parameter.volumeMounts.pvc {\n\t\t\t{\n\t\t\t\tmountPath: v.mountPath\n\t\t\t\tname:
\ v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\tconfigMap: *[\n\t\t\tfor
v in parameter.volumeMounts.configMap {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\tsecret:
*[\n\t\tfor v in parameter.volumeMounts.secret {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\temptyDir:
*[\n\t\t\tfor v in parameter.volumeMounts.emptyDir {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\thostPath:
*[\n\t\t\tfor v in parameter.volumeMounts.hostPath {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n}\nvolumesArray:
{\n\tpvc: *[\n\t\tfor v in parameter.volumeMounts.pvc {\n\t\t\t{\n\t\t\t\tname:
v.name\n\t\t\t\tpersistentVolumeClaim: claimName: v.claimName\n\t\t\t}\n\t\t},\n\t]
| []\n\n\tconfigMap: *[\n\t\t\tfor v in parameter.volumeMounts.configMap
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\tconfigMap: {\n\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\tname: v.cmName\n\t\t\t\t\tif v.items
!= _|_ {\n\t\t\t\t\t\titems: v.items\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]
| []\n\n\tsecret: *[\n\t\tfor v in parameter.volumeMounts.secret {\n\t\t\t{\n\t\t\t\tname:
v.name\n\t\t\t\tsecret: {\n\t\t\t\t\tdefaultMode: v.defaultMode\n\t\t\t\t\tsecretName:
\ v.secretName\n\t\t\t\t\tif v.items != _|_ {\n\t\t\t\t\t\titems: v.items\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]
| []\n\n\temptyDir: *[\n\t\t\tfor v in parameter.volumeMounts.emptyDir
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\temptyDir: medium: v.medium\n\t\t\t}\n\t\t},\n\t]
| []\n\n\thostPath: *[\n\t\t\tfor v in parameter.volumeMounts.hostPath
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\thostPath: path: v.path\n\t\t\t}\n\t\t},\n\t]
| []\n}\noutput: {\n\tapiVersion: \"apps/v1\"\n\tkind: \"Deployment\"\n\tspec:
{\n\t\tselector: matchLabels: \"app.oam.dev/component\": context.name\n\n\t\ttemplate:
{\n\t\t\tmetadata: {\n\t\t\t\tlabels: {\n\t\t\t\t\tif parameter.labels
!= _|_ {\n\t\t\t\t\t\tparameter.labels\n\t\t\t\t\t}\n\t\t\t\t\tif parameter.addRevisionLabel
{\n\t\t\t\t\t\t\"app.oam.dev/revision\": context.revision\n\t\t\t\t\t}\n\t\t\t\t\t\"app.oam.dev/component\":
context.name\n\t\t\t\t}\n\t\t\t\tif parameter.annotations != _|_ {\n\t\t\t\t\tannotations:
parameter.annotations\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tspec: {\n\t\t\t\tcontainers:
[{\n\t\t\t\t\tname: context.name\n\t\t\t\t\timage: parameter.image\n\t\t\t\t\tif
parameter[\"port\"] != _|_ && parameter[\"ports\"] == _|_ {\n\t\t\t\t\t\tports:
[{\n\t\t\t\t\t\t\tcontainerPort: parameter.port\n\t\t\t\t\t\t}]\n\t\t\t\t\t}\n\t\t\t\t\tif
parameter[\"ports\"] != _|_ {\n\t\t\t\t\t\tports: [ for v in parameter.ports
{\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tcontainerPort: v.port\n\t\t\t\t\t\t\t\tprotocol:
\ v.protocol\n\t\t\t\t\t\t\t\tif v.name != _|_ {\n\t\t\t\t\t\t\t\t\tname:
v.name\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif v.name == _|_ {\n\t\t\t\t\t\t\t\t\tname:
\"port-\" + strconv.FormatInt(v.port, 10)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}}]\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"imagePullPolicy\"] != _|_ {\n\t\t\t\t\t\timagePullPolicy:
parameter.imagePullPolicy\n\t\t\t\t\t}\n\n\t\t\t\t\tif parameter[\"cmd\"]
!= _|_ {\n\t\t\t\t\t\tcommand: parameter.cmd\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"env\"] != _|_ {\n\t\t\t\t\t\tenv: parameter.env\n\t\t\t\t\t}\n\n\t\t\t\t\tif
context[\"config\"] != _|_ {\n\t\t\t\t\t\tenv: context.config\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"cpu\"] != _|_ {\n\t\t\t\t\t\tresources: {\n\t\t\t\t\t\t\tlimits:
cpu: parameter.cpu\n\t\t\t\t\t\t\trequests: cpu: parameter.cpu\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"memory\"] != _|_ {\n\t\t\t\t\t\tresources: {\n\t\t\t\t\t\t\tlimits:
memory: parameter.memory\n\t\t\t\t\t\t\trequests: memory: parameter.memory\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"volumes\"] != _|_ && parameter[\"volumeMounts\"] == _|_
{\n\t\t\t\t\t\tvolumeMounts: [ for v in parameter.volumes {\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tmountPath:
v.mountPath\n\t\t\t\t\t\t\t\tname: v.name\n\t\t\t\t\t\t\t}}]\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"volumeMounts\"] != _|_ {\n\t\t\t\t\t\tvolumeMounts: mountsArray.pvc
+ mountsArray.configMap + mountsArray.secret + mountsArray.emptyDir
+ mountsArray.hostPath\n\t\t\t\t\t}\n\n\t\t\t\t\tif parameter[\"livenessProbe\"]
!= _|_ {\n\t\t\t\t\t\tlivenessProbe: parameter.livenessProbe\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"readinessProbe\"] != _|_ {\n\t\t\t\t\t\treadinessProbe:
parameter.readinessProbe\n\t\t\t\t\t}\n\n\t\t\t\t}]\n\n\t\t\t\tif parameter[\"hostAliases\"]
!= _|_ {\n\t\t\t\t\t// +patchKey=ip\n\t\t\t\t\thostAliases: parameter.hostAliases\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"imagePullSecrets\"] != _|_ {\n\t\t\t\t\timagePullSecrets:
[ for v in parameter.imagePullSecrets {\n\t\t\t\t\t\tname: v\n\t\t\t\t\t},\n\t\t\t\t\t]\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"volumes\"] != _|_ && parameter[\"volumeMounts\"] == _|_
{\n\t\t\t\t\tvolumes: [ for v in parameter.volumes {\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tname:
v.name\n\t\t\t\t\t\t\tif v.type == \"pvc\" {\n\t\t\t\t\t\t\t\tpersistentVolumeClaim:
claimName: v.claimName\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif v.type ==
\"configMap\" {\n\t\t\t\t\t\t\t\tconfigMap: {\n\t\t\t\t\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\t\t\t\t\tname: v.cmName\n\t\t\t\t\t\t\t\t\tif
v.items != _|_ {\n\t\t\t\t\t\t\t\t\t\titems: v.items\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif
v.type == \"secret\" {\n\t\t\t\t\t\t\t\tsecret: {\n\t\t\t\t\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\t\t\t\t\tsecretName: v.secretName\n\t\t\t\t\t\t\t\t\tif
v.items != _|_ {\n\t\t\t\t\t\t\t\t\t\titems: v.items\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif
v.type == \"emptyDir\" {\n\t\t\t\t\t\t\t\temptyDir: medium: v.medium\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}]\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"volumeMounts\"] != _|_ {\n\t\t\t\t\tvolumes: volumesArray.pvc
+ volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir
+ volumesArray.hostPath\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\nexposePorts:
[\n\tfor v in parameter.ports if v.expose == true {\n\t\tport: v.port\n\t\ttargetPort:
v.port\n\t\tif v.name != _|_ {\n\t\t\tname: v.name\n\t\t}\n\t\tif v.name
== _|_ {\n\t\t\tname: \"port-\" + strconv.FormatInt(v.port, 10)\n\t\t}\n\t},\n]\noutputs:
{\n\tif len(exposePorts) != 0 {\n\t\twebserviceExpose: {\n\t\t\tapiVersion:
\"v1\"\n\t\t\tkind: \"Service\"\n\t\t\tmetadata: name: context.name\n\t\t\tspec:
{\n\t\t\t\tselector: \"app.oam.dev/component\": context.name\n\t\t\t\tports:
exposePorts\n\t\t\t\ttype: parameter.exposeType\n\t\t\t}\n\t\t}\n\t}\n}\nparameter:
{\n\t// +usage=Specify the labels in the workload\n\tlabels?: [string]:
string\n\n\t// +usage=Specify the annotations in the workload\n\tannotations?:
[string]: string\n\n\t// +usage=Which image would you like to use for
your service\n\t// +short=i\n\timage: string\n\n\t// +usage=Specify
image pull policy for your service\n\timagePullPolicy?: \"Always\" |
\"Never\" | \"IfNotPresent\"\n\n\t// +usage=Specify image pull secrets
for your service\n\timagePullSecrets?: [...string]\n\n\t// +ignore\n\t//
+usage=Deprecated field, please use ports instead\n\t// +short=p\n\tport?:
int\n\n\t// +usage=Which ports do you want customer traffic sent to,
defaults to 80\n\tports?: [...{\n\t\t// +usage=Number of port to expose
on the pod's IP address\n\t\tport: int\n\t\t// +usage=Name of the port\n\t\tname?:
string\n\t\t// +usage=Protocol for port. Must be UDP, TCP, or SCTP\n\t\tprotocol:
*\"TCP\" | \"UDP\" | \"SCTP\"\n\t\t// +usage=Specify if the port should
be exposed\n\t\texpose: *false | bool\n\t}]\n\n\t// +ignore\n\t// +usage=Specify
what kind of Service you want. options: \"ClusterIP\", \"NodePort\",
\"LoadBalancer\", \"ExternalName\"\n\texposeType: *\"ClusterIP\" | \"NodePort\"
| \"LoadBalancer\" | \"ExternalName\"\n\n\t// +ignore\n\t// +usage=If
addRevisionLabel is true, the revision label will be added to the underlying
pods\n\taddRevisionLabel: *false | bool\n\n\t// +usage=Commands to run
in the container\n\tcmd?: [...string]\n\n\t// +usage=Define arguments
by using environment variables\n\tenv?: [...{\n\t\t// +usage=Environment
variable name\n\t\tname: string\n\t\t// +usage=The value of the environment
variable\n\t\tvalue?: string\n\t\t// +usage=Specifies a source the value
of this var should come from\n\t\tvalueFrom?: {\n\t\t\t// +usage=Selects
a key of a secret in the pod's namespace\n\t\t\tsecretKeyRef?: {\n\t\t\t\t//
+usage=The name of the secret in the pod's namespace to select from\n\t\t\t\tname:
string\n\t\t\t\t// +usage=The key of the secret to select from. Must
be a valid secret key\n\t\t\t\tkey: string\n\t\t\t}\n\t\t\t// +usage=Selects
a key of a config map in the pod's namespace\n\t\t\tconfigMapKeyRef?:
{\n\t\t\t\t// +usage=The name of the config map in the pod's namespace
to select from\n\t\t\t\tname: string\n\t\t\t\t// +usage=The key of the
config map to select from. Must be a valid secret key\n\t\t\t\tkey:
string\n\t\t\t}\n\t\t}\n\t}]\n\n\t// +usage=Number of CPU units for
the service, like \n\tcpu?: string\n\n\t//
+usage=Specifies the attributes of the memory resource required for
the container.\n\tmemory?: string\n\n\tvolumeMounts?: {\n\t\t// +usage=Mount
PVC type volume\n\t\tpvc?: [...{\n\t\t\tname: string\n\t\t\tmountPath:
string\n\t\t\t// +usage=The name of the PVC\n\t\t\tclaimName: string\n\t\t}]\n\t\t//
+usage=Mount ConfigMap type volume\n\t\tconfigMap?: [...{\n\t\t\tname:
\ string\n\t\t\tmountPath: string\n\t\t\tdefaultMode: *420 |
int\n\t\t\tcmName: string\n\t\t\titems?: [...{\n\t\t\t\tkey: string\n\t\t\t\tpath:
string\n\t\t\t\tmode: *511 | int\n\t\t\t}]\n\t\t}]\n\t\t// +usage=Mount
Secret type volume\n\t\tsecret?: [...{\n\t\t\tname: string\n\t\t\tmountPath:
\ string\n\t\t\tdefaultMode: *420 | int\n\t\t\tsecretName: string\n\t\t\titems?:
[...{\n\t\t\t\tkey: string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511
| int\n\t\t\t}]\n\t\t}]\n\t\t// +usage=Mount EmptyDir type volume\n\t\temptyDir?:
[...{\n\t\t\tname: string\n\t\t\tmountPath: string\n\t\t\tmedium:
\ *\"\" | \"Memory\"\n\t\t}]\n\t\t// +usage=Mount HostPath type volume\n\t\thostPath?:
[...{\n\t\t\tname: string\n\t\t\tmountPath: string\n\t\t\tpath:
\ string\n\t\t}]\n\t}\n\n\t// +usage=Deprecated field, use volumeMounts
instead.\n\tvolumes?: [...{\n\t\tname: string\n\t\tmountPath: string\n\t\t//
+usage=Specify volume type, options: \"pvc\",\"configMap\",\"secret\",\"emptyDir\"\n\t\ttype:
\"pvc\" | \"configMap\" | \"secret\" | \"emptyDir\"\n\t\tif type ==
\"pvc\" {\n\t\t\tclaimName: string\n\t\t}\n\t\tif type == \"configMap\"
{\n\t\t\tdefaultMode: *420 | int\n\t\t\tcmName: string\n\t\t\titems?:
[...{\n\t\t\t\tkey: string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511
| int\n\t\t\t}]\n\t\t}\n\t\tif type == \"secret\" {\n\t\t\tdefaultMode:
*420 | int\n\t\t\tsecretName: string\n\t\t\titems?: [...{\n\t\t\t\tkey:
\ string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511 | int\n\t\t\t}]\n\t\t}\n\t\tif
type == \"emptyDir\" {\n\t\t\tmedium: *\"\" | \"Memory\"\n\t\t}\n\t}]\n\n\t//
+usage=Instructions for assessing whether the container is alive.\n\tlivenessProbe?:
#HealthProbe\n\n\t// +usage=Instructions for assessing whether the container
is in a suitable state to serve traffic.\n\treadinessProbe?: #HealthProbe\n\n\t//
+usage=Specify the hostAliases to add\n\thostAliases?: [...{\n\t\tip:
string\n\t\thostnames: [...string]\n\t}]\n}\n#HealthProbe: {\n\n\t//
+usage=Instructions for assessing container health by executing a command.
Either this attribute or the httpGet attribute or the tcpSocket attribute
MUST be specified. This attribute is mutually exclusive with both the
httpGet attribute and the tcpSocket attribute.\n\texec?: {\n\t\t// +usage=A
command to be executed inside the container to assess its health. Each
space delimited token of the command is a separate array element. Commands
exiting 0 are considered to be successful probes, whilst all other exit
codes are considered failures.\n\t\tcommand: [...string]\n\t}\n\n\t//
+usage=Instructions for assessing container health by executing an HTTP
GET request. Either this attribute or the exec attribute or the tcpSocket
attribute MUST be specified. This attribute is mutually exclusive with
both the exec attribute and the tcpSocket attribute.\n\thttpGet?: {\n\t\t//
+usage=The endpoint, relative to the port, to which the HTTP GET request
should be directed.\n\t\tpath: string\n\t\t// +usage=The TCP socket
within the container to which the HTTP GET request should be directed.\n\t\tport:
int\n\t\thttpHeaders?: [...{\n\t\t\tname: string\n\t\t\tvalue: string\n\t\t}]\n\t}\n\n\t//
+usage=Instructions for assessing container health by probing a TCP
socket. Either this attribute or the exec attribute or the httpGet attribute
MUST be specified. This attribute is mutually exclusive with both the
exec attribute and the httpGet attribute.\n\ttcpSocket?: {\n\t\t// +usage=The
TCP socket within the container that should be probed to assess container
health.\n\t\tport: int\n\t}\n\n\t// +usage=Number of seconds after the
container is started before the first probe is initiated.\n\tinitialDelaySeconds:
*0 | int\n\n\t// +usage=How often, in seconds, to execute the probe.\n\tperiodSeconds:
*10 | int\n\n\t// +usage=Number of seconds after which the probe times
out.\n\ttimeoutSeconds: *1 | int\n\n\t// +usage=Minimum consecutive
successes for the probe to be considered successful after having failed.\n\tsuccessThreshold:
*1 | int\n\n\t// +usage=Number of consecutive failures required to determine
the container is not alive (liveness probe) or not ready (readiness
probe).\n\tfailureThreshold: *3 | int\n}\n"
status:
customStatus: "ready: {\n\treadyReplicas: *0 | int\n} & {\n\tif context.output.status.readyReplicas
!= _|_ {\n\t\treadyReplicas: context.output.status.readyReplicas\n\t}\n}\nmessage:
\"Ready:\\(ready.readyReplicas)/\\(context.output.spec.replicas)\""
healthPolicy: "ready: {\n\tupdatedReplicas: *0 | int\n\treadyReplicas:
\ *0 | int\n\treplicas: *0 | int\n\tobservedGeneration:
*0 | int\n} & {\n\tif context.output.status.updatedReplicas != _|_ {\n\t\tupdatedReplicas:
context.output.status.updatedReplicas\n\t}\n\tif context.output.status.readyReplicas
!= _|_ {\n\t\treadyReplicas: context.output.status.readyReplicas\n\t}\n\tif
context.output.status.replicas != _|_ {\n\t\treplicas: context.output.status.replicas\n\t}\n\tif
context.output.status.observedGeneration != _|_ {\n\t\tobservedGeneration:
context.output.status.observedGeneration\n\t}\n}\nisHealth: (context.output.spec.replicas
== ready.readyReplicas) && (context.output.spec.replicas == ready.updatedReplicas)
&& (context.output.spec.replicas == ready.replicas) && (ready.observedGeneration
== context.output.metadata.generation || ready.observedGeneration > context.output.metadata.generation)"
workload:
definition:
apiVersion: apps/v1
kind: Deployment
type: deployments.apps
status: {}
traitDefinitions:
scaler:
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Manually scale K8s pod for your workload
which follows the pod spec in path 'spec.template'.
meta.helm.sh/release-name: kubevela
meta.helm.sh/release-namespace: vela-system
labels:
app.kubernetes.io/managed-by: Helm
name: scaler
namespace: vela-system
spec:
appliesToWorkloads:
- '*'
definitionRef:
name: ""
schematic:
cue:
template: "parameter: {\n\t// +usage=Specify the number of workload\n\treplicas:
*1 | int\n}\n// +patchStrategy=retainKeys\npatch: spec: replicas: parameter.replicas\n"
status: {}
status: {}


@@ -0,0 +1,19 @@
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Apply application for your workflow steps
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: apply-application
namespace: vela-system
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
// apply application
output: op.#ApplyApplication & {}


@@ -51,7 +51,7 @@ func (c *HTTPCmd) Run(meta *registry.Meta) (res interface{}, err error) {
 	var (
 		r      io.Reader
 		client = &http.Client{
-			Transport: &http.Transport{},
+			Transport: http.DefaultTransport,
 			Timeout:   time.Second * 3,
 		}
 	)
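
The hunk above is the goroutine-leak fix (#4303): it swaps a fresh `&http.Transport{}` per call for the shared `http.DefaultTransport`, so idle connections (and the keep-alive goroutines servicing them) live in one pool instead of accumulating in a new pool per request. A standalone sketch of the difference (helper names are illustrative, not KubeVela code):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// newLeakyClient builds the client the way the old code did: a fresh,
// zero-value Transport per call. Every Transport owns its own idle
// connection pool, so connections are never reused across calls and
// their keep-alive goroutines accumulate.
func newLeakyClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{},
		Timeout:   3 * time.Second,
	}
}

// newSharedClient builds the client the way the fixed code does: all
// callers share http.DefaultTransport and therefore one pool.
func newSharedClient() *http.Client {
	return &http.Client{
		Transport: http.DefaultTransport,
		Timeout:   3 * time.Second,
	}
}

func main() {
	a, b := newSharedClient(), newSharedClient()
	fmt.Println(a.Transport == b.Transport) // true: one shared Transport
	x, y := newLeakyClient(), newLeakyClient()
	fmt.Println(x.Transport == y.Transport) // false: a new pool each time
}
```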


@@ -26,7 +26,7 @@ import (
 	openapi "github.com/alibabacloud-go/darabonba-openapi/client"
 	"github.com/alibabacloud-go/tea/tea"
 	types "github.com/oam-dev/terraform-controller/api/types/crossplane-runtime"
-	v1beta12 "github.com/oam-dev/terraform-controller/api/v1beta1"
+	v1beta12 "github.com/oam-dev/terraform-controller/api/v1beta2"
 	"github.com/pkg/errors"
 	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"

pkg/cmd/builder.go Normal file

@@ -0,0 +1,117 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cmd
import (
"github.com/spf13/cobra"
"k8s.io/kubectl/pkg/util/term"
cmdutil "github.com/oam-dev/kubevela/pkg/cmd/util"
)
// Builder build command with factory
type Builder struct {
cmd *cobra.Command
f Factory
}
// NamespaceFlagConfig config for namespace flag in cmd
type NamespaceFlagConfig struct {
completion bool
usage string
loadEnv bool
}
// NamespaceFlagOption the option for configuring namespace flag in cmd
type NamespaceFlagOption interface {
ApplyToNamespaceFlagOptions(*NamespaceFlagConfig)
}
func newNamespaceFlagOptions(options ...NamespaceFlagOption) NamespaceFlagConfig {
cfg := NamespaceFlagConfig{
completion: true,
usage: usageNamespace,
loadEnv: true,
}
for _, option := range options {
option.ApplyToNamespaceFlagOptions(&cfg)
}
return cfg
}
// NamespaceFlagNoCompletionOption disable auto-completion for namespace flag
type NamespaceFlagNoCompletionOption struct{}
// ApplyToNamespaceFlagOptions .
func (option NamespaceFlagNoCompletionOption) ApplyToNamespaceFlagOptions(cfg *NamespaceFlagConfig) {
cfg.completion = false
}
// NamespaceFlagUsageOption the usage description for namespace flag
type NamespaceFlagUsageOption string
// ApplyToNamespaceFlagOptions .
func (option NamespaceFlagUsageOption) ApplyToNamespaceFlagOptions(cfg *NamespaceFlagConfig) {
cfg.usage = string(option)
}
// NamespaceFlagDisableEnvOption disable loading namespace from env
type NamespaceFlagDisableEnvOption struct{}
// ApplyToNamespaceFlagOptions .
func (option NamespaceFlagDisableEnvOption) ApplyToNamespaceFlagOptions(cfg *NamespaceFlagConfig) {
cfg.loadEnv = false
}
// WithNamespaceFlag add namespace flag to the command, by default, it will also add env flag to the command
func (builder *Builder) WithNamespaceFlag(options ...NamespaceFlagOption) *Builder {
cfg := newNamespaceFlagOptions(options...)
builder.cmd.Flags().StringP(flagNamespace, "n", "", cfg.usage)
if cfg.completion {
cmdutil.CheckErr(builder.cmd.RegisterFlagCompletionFunc(
flagNamespace,
func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return GetNamespacesForCompletion(cmd.Context(), builder.f, toComplete)
}))
}
if cfg.loadEnv {
return builder.WithEnvFlag()
}
return builder
}
// WithEnvFlag add env flag to the command
func (builder *Builder) WithEnvFlag() *Builder {
builder.cmd.PersistentFlags().StringP(flagEnv, "e", "", usageEnv)
return builder
}
// WithResponsiveWriter formats the command outputs
func (builder *Builder) WithResponsiveWriter() *Builder {
builder.cmd.SetOut(term.NewResponsiveWriter(builder.cmd.OutOrStdout()))
return builder
}
// Build constructs the command
func (builder *Builder) Build() *cobra.Command {
return builder.cmd
}
// NewCommandBuilder creates a builder for the command
func NewCommandBuilder(f Factory, cmd *cobra.Command) *Builder {
return &Builder{cmd: cmd, f: f}
}
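The flag options here use Go's functional-options pattern expressed through an interface rather than plain `func` types, which lets simple types such as a string (`NamespaceFlagUsageOption`) act as options directly. A minimal, self-contained sketch of the same pattern — all names below are illustrative stand-ins, not KubeVela APIs:

```go
package main

import "fmt"

// FlagConfig mirrors the shape of NamespaceFlagConfig: a small config struct
// that options mutate before the flag is registered.
type FlagConfig struct {
	completion bool
	usage      string
}

// FlagOption mirrors NamespaceFlagOption: any type implementing it can tweak the config.
type FlagOption interface {
	ApplyToFlagConfig(*FlagConfig)
}

// NoCompletionOption mirrors NamespaceFlagNoCompletionOption.
type NoCompletionOption struct{}

// ApplyToFlagConfig disables completion.
func (NoCompletionOption) ApplyToFlagConfig(cfg *FlagConfig) { cfg.completion = false }

// UsageOption mirrors NamespaceFlagUsageOption: a string that is itself an option.
type UsageOption string

// ApplyToFlagConfig overrides the usage text.
func (o UsageOption) ApplyToFlagConfig(cfg *FlagConfig) { cfg.usage = string(o) }

// newFlagConfig applies options over defaults, like newNamespaceFlagOptions.
func newFlagConfig(options ...FlagOption) FlagConfig {
	cfg := FlagConfig{completion: true, usage: "default usage"}
	for _, o := range options {
		o.ApplyToFlagConfig(&cfg)
	}
	return cfg
}

func main() {
	cfg := newFlagConfig(NoCompletionOption{}, UsageOption("custom usage"))
	fmt.Println(cfg.completion, cfg.usage) // false custom usage
}
```

Compared with closure-based options, the interface form keeps each option a named, documentable type, which is why each one here carries its own godoc comment.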

pkg/cmd/completion.go Normal file

@@ -0,0 +1,72 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cmd
import (
"context"
"strings"
"github.com/spf13/cobra"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/pkg/oam"
)
func listObjectNamesForCompletion(ctx context.Context, f Factory, gvk schema.GroupVersionKind, listOptions []client.ListOption, toComplete string) ([]string, cobra.ShellCompDirective) {
uns := &unstructured.UnstructuredList{}
uns.SetGroupVersionKind(gvk)
if err := f.Client().List(ctx, uns, listOptions...); err != nil {
return nil, cobra.ShellCompDirectiveError
}
var candidates []string
for _, obj := range uns.Items {
if name := obj.GetName(); strings.HasPrefix(name, toComplete) {
candidates = append(candidates, name)
}
}
return candidates, cobra.ShellCompDirectiveNoFileComp
}
// GetNamespacesForCompletion auto-completes namespace names
func GetNamespacesForCompletion(ctx context.Context, f Factory, toComplete string) ([]string, cobra.ShellCompDirective) {
return listObjectNamesForCompletion(ctx, f, corev1.SchemeGroupVersion.WithKind("Namespace"), nil, toComplete)
}
// GetRevisionForCompletion auto-completes revisions of the given application
func GetRevisionForCompletion(ctx context.Context, f Factory, appName string, namespace string, toComplete string) ([]string, cobra.ShellCompDirective) {
var options []client.ListOption
if namespace != "" {
options = append(options, client.InNamespace(namespace))
}
if appName != "" {
options = append(options, client.MatchingLabels{oam.LabelAppName: appName})
}
return listObjectNamesForCompletion(ctx, f, v1beta1.SchemeGroupVersion.WithKind(v1beta1.ApplicationRevisionKind), options, toComplete)
}
// GetApplicationsForCompletion auto-completes application names
func GetApplicationsForCompletion(ctx context.Context, f Factory, namespace string, toComplete string) ([]string, cobra.ShellCompDirective) {
var options []client.ListOption
if namespace != "" {
options = append(options, client.InNamespace(namespace))
}
return listObjectNamesForCompletion(ctx, f, v1beta1.SchemeGroupVersion.WithKind(v1beta1.ApplicationKind), options, toComplete)
}
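All three completion helpers funnel through listObjectNamesForCompletion, which lists objects of a GroupVersionKind and keeps only the names matching the prefix the user has typed so far. The prefix-filtering core can be sketched standalone:

```go
package main

import (
	"fmt"
	"strings"
)

// filterByPrefix mirrors the candidate loop in listObjectNamesForCompletion:
// keep only the names that start with what the user has typed so far.
func filterByPrefix(names []string, toComplete string) []string {
	var candidates []string
	for _, name := range names {
		if strings.HasPrefix(name, toComplete) {
			candidates = append(candidates, name)
		}
	}
	return candidates
}

func main() {
	namespaces := []string{"default", "kube-system", "vela-system"}
	fmt.Println(filterByPrefix(namespaces, "vela")) // [vela-system]
}
```

In the real helpers the names come from a client.List call on an UnstructuredList, and the result is returned with cobra.ShellCompDirectiveNoFileComp so the shell does not fall back to file-name completion.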

pkg/cmd/factory.go Normal file

@@ -0,0 +1,47 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cmd
import (
"sigs.k8s.io/controller-runtime/pkg/client"
cmdutil "github.com/oam-dev/kubevela/pkg/cmd/util"
)
// Factory is the client factory for running commands
type Factory interface {
Client() client.Client
}
// ClientGetter is a function for getting the client
type ClientGetter func() (client.Client, error)
type defaultFactory struct {
ClientGetter
}
// Client returns the client for command-line use, interrupting if an error is encountered
func (f *defaultFactory) Client() client.Client {
cli, err := f.ClientGetter()
cmdutil.CheckErr(err)
return cli
}
// NewDefaultFactory creates a factory based on a client getter function
func NewDefaultFactory(clientGetter ClientGetter) Factory {
return &defaultFactory{ClientGetter: clientGetter}
}

pkg/cmd/types.go Normal file

@@ -0,0 +1,27 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cmd
const (
flagNamespace = "namespace"
flagEnv = "env"
)
const (
usageNamespace = "If present, the namespace scope for this CLI request"
usageEnv = "The environment name for the CLI request"
)

pkg/cmd/util/helpers.go Normal file

@@ -0,0 +1,32 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package util
import (
"github.com/pkg/errors"
cmdutil "k8s.io/kubectl/pkg/cmd/util"
)
// CheckErr wraps the kubectl CheckErr func, recovering from panics caused by inappropriate type conversions
func CheckErr(err error) {
defer func() {
if r := recover(); r != nil {
cmdutil.CheckErr(errors.New(err.Error()))
}
}()
cmdutil.CheckErr(err)
}

pkg/cmd/utils.go Normal file

@@ -0,0 +1,52 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cmd
import (
"github.com/spf13/cobra"
"github.com/oam-dev/kubevela/apis/types"
cmdutil "github.com/oam-dev/kubevela/pkg/cmd/util"
"github.com/oam-dev/kubevela/pkg/utils/common"
"github.com/oam-dev/kubevela/pkg/utils/env"
)
// GetNamespace gets the namespace from the command flags and env
func GetNamespace(f Factory, cmd *cobra.Command) string {
namespace, err := cmd.Flags().GetString(flagNamespace)
cmdutil.CheckErr(err)
if namespace != "" {
return namespace
}
// find namespace from env
envName, err := cmd.Flags().GetString(flagEnv)
if err != nil {
// ignore env if the command does not use the flag
return ""
}
cmdutil.CheckErr(common.SetGlobalClient(f.Client()))
var envMeta *types.EnvMeta
if envName != "" {
envMeta, err = env.GetEnvByName(envName)
} else {
envMeta, err = env.GetCurrentEnv()
}
if err != nil {
return ""
}
return envMeta.Namespace
}
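GetNamespace resolves the namespace with a fixed precedence: an explicit --namespace flag wins; otherwise the namespace of the named env (or the current env when no name is given) is used; any lookup failure falls back to the empty string. A self-contained sketch of that precedence — the map-based env store is an illustrative stand-in for env.GetEnvByName/env.GetCurrentEnv:

```go
package main

import "fmt"

// resolveNamespace mirrors GetNamespace's precedence: an explicit flag value
// wins; otherwise fall back to the namespace recorded for the named env, or
// the current env when no name was given; empty string means "not resolved".
func resolveNamespace(flagNS, envName string, envs map[string]string, currentEnv string) string {
	if flagNS != "" {
		return flagNS
	}
	name := envName
	if name == "" {
		name = currentEnv
	}
	return envs[name] // "" when the env lookup fails
}

func main() {
	envs := map[string]string{"dev": "dev-ns", "prod": "prod-ns"}
	fmt.Println(resolveNamespace("explicit-ns", "dev", envs, "prod")) // explicit-ns
	fmt.Println(resolveNamespace("", "", envs, "prod"))               // prod-ns
}
```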


@@ -134,7 +134,7 @@ func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Resu
if annotations := app.GetAnnotations(); annotations == nil || annotations[oam.AnnotationKubeVelaVersion] == "" {
metav1.SetMetaDataAnnotation(&app.ObjectMeta, oam.AnnotationKubeVelaVersion, version.VelaVersion)
}
-	logCtx.AddTag("publish_version", app.GetAnnotations()[oam.AnnotationKubeVelaVersion])
+	logCtx.AddTag("publish_version", app.GetAnnotations()[oam.AnnotationPublishVersion])
appParser := appfile.NewApplicationParser(r.Client, r.dm, r.pd)
handler, err := NewAppHandler(logCtx, r, app, appParser)


@@ -3897,7 +3897,7 @@ spec:
}]
}
}
-	parameter: #PatchParams | close({
+	parameter: *#PatchParams | close({
// +usage=Specify the environment variables for multiple containers
containers: [...#PatchParams]
})


@@ -21,9 +21,11 @@ import (
"sync"
terraformtypes "github.com/oam-dev/terraform-controller/api/types"
terraformapi "github.com/oam-dev/terraform-controller/api/v1beta1"
terraforv1beta1 "github.com/oam-dev/terraform-controller/api/v1beta1"
terraforv1beta2 "github.com/oam-dev/terraform-controller/api/v1beta2"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -32,11 +34,10 @@ import (
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/appfile"
"github.com/oam-dev/kubevela/pkg/oam"
monitorContext "github.com/oam-dev/kubevela/pkg/monitor/context"
"github.com/oam-dev/kubevela/pkg/monitor/metrics"
"github.com/oam-dev/kubevela/pkg/multicluster"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/resourcekeeper"
)
@@ -229,34 +230,22 @@ func (h *AppHandler) collectHealthStatus(ctx context.Context, wl *appfile.Worklo
)
if wl.CapabilityCategory == types.TerraformCategory {
-	var configuration terraformapi.Configuration
+	var configuration terraforv1beta2.Configuration
if err := h.r.Client.Get(ctx, client.ObjectKey{Name: wl.Name, Namespace: namespace}, &configuration); err != nil {
return nil, false, errors.WithMessagef(err, "app=%s, comp=%s, check health error", appName, wl.Name)
}
isLatest := func() bool {
if configuration.Status.ObservedGeneration != 0 {
if configuration.Status.ObservedGeneration != configuration.Generation {
return false
if kerrors.IsNotFound(err) {
var legacyConfiguration terraforv1beta1.Configuration
if err := h.r.Client.Get(ctx, client.ObjectKey{Name: wl.Name, Namespace: namespace}, &legacyConfiguration); err != nil {
return nil, false, errors.WithMessagef(err, "app=%s, comp=%s, check health error", appName, wl.Name)
}
isHealth = setStatus(&status, legacyConfiguration.Status.ObservedGeneration, legacyConfiguration.Generation,
legacyConfiguration.GetLabels(), appRev.Name, legacyConfiguration.Status.Apply.State, legacyConfiguration.Status.Apply.Message)
} else {
return nil, false, errors.WithMessagef(err, "app=%s, comp=%s, check health error", appName, wl.Name)
}
// Use AppRevision to avoid getting the configuration before the patch.
if v, ok := configuration.GetLabels()[oam.LabelAppRevision]; ok {
if v != appRev.Name {
return false
}
}
return true
}
if !isLatest() || configuration.Status.Apply.State != terraformtypes.Available {
status.Healthy = false
isHealth = false
} else {
status.Healthy = true
isHealth = true
isHealth = setStatus(&status, configuration.Status.ObservedGeneration, configuration.Generation, configuration.GetLabels(),
appRev.Name, configuration.Status.Apply.State, configuration.Status.Apply.Message)
}
status.Message = configuration.Status.Apply.Message
} else {
if ok, err := wl.EvalHealth(wl.Ctx, h.r.Client, namespace); !ok || err != nil {
isHealth = false
@@ -292,6 +281,29 @@ func (h *AppHandler) collectHealthStatus(ctx context.Context, wl *appfile.Worklo
return &status, isHealth, nil
}
func setStatus(status *common.ApplicationComponentStatus, observedGeneration, generation int64, labels map[string]string,
appRevName string, state terraformtypes.ConfigurationState, message string) bool {
isLatest := func() bool {
if observedGeneration != 0 && observedGeneration != generation {
return false
}
// Use AppRevision to avoid getting the configuration before the patch.
if v, ok := labels[oam.LabelAppRevision]; ok {
if v != appRevName {
return false
}
}
return true
}
if !isLatest() || state != terraformtypes.Available {
status.Healthy = false
return false
}
status.Healthy = true
status.Message = message
return true
}
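The extracted setStatus helper treats a Configuration as healthy only when it is both up to date and Available; "up to date" means a non-zero observedGeneration equals the current generation, and the app-revision label, when present, matches the current revision name. The staleness check alone, as a self-contained sketch — the label key string is an assumption for illustration; the real code reads the oam.LabelAppRevision constant:

```go
package main

import "fmt"

// labelAppRevision is an illustrative label key; the real code uses the
// oam.LabelAppRevision constant.
const labelAppRevision = "app.oam.dev/appRevision"

// isLatest mirrors the staleness check inside setStatus: a non-zero
// observedGeneration must equal the current generation, and the app-revision
// label, when present, must match the current revision name.
func isLatest(observedGeneration, generation int64, labels map[string]string, appRevName string) bool {
	if observedGeneration != 0 && observedGeneration != generation {
		return false
	}
	// Use the app revision label to avoid reading a configuration from before the patch.
	if v, ok := labels[labelAppRevision]; ok && v != appRevName {
		return false
	}
	return true
}

func main() {
	labels := map[string]string{labelAppRevision: "demo-v1"}
	fmt.Println(isLatest(2, 2, labels, "demo-v1")) // true
	fmt.Println(isLatest(1, 2, nil, "demo-v1"))    // false: the controller has not observed the latest generation
}
```

Pulling this logic into one helper is what lets the diff handle both the v1beta1 and v1beta2 Configuration types with a single code path.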
func generateScopeReference(scopes []appfile.Scope) []corev1.ObjectReference {
var references []corev1.ObjectReference
for _, scope := range scopes {


@@ -26,7 +26,7 @@ import (
"github.com/oam-dev/kubevela/pkg/oam/testutil"
terraformtypes "github.com/oam-dev/terraform-controller/api/types"
-	terraformapi "github.com/oam-dev/terraform-controller/api/v1beta1"
+	terraformapi "github.com/oam-dev/terraform-controller/api/v1beta2"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
appsv1 "k8s.io/api/apps/v1"
@@ -343,7 +343,7 @@ var _ = Describe("Test Application health check", func() {
Spec: v1beta1.ComponentDefinitionSpec{
Workload: common.WorkloadTypeDescriptor{
Definition: common.WorkloadGVK{
-					APIVersion: "terraform.core.oam.dev/v1beta1",
+					APIVersion: "terraform.core.oam.dev/v1beta2",
Kind: "Configuration",
},
},


@@ -117,8 +117,16 @@ func convertStepProperties(step *v1beta1.WorkflowStep, app *v1beta1.Application)
return err
}
var componentNames []string
for _, c := range app.Spec.Components {
componentNames = append(componentNames, c.Name)
}
for _, c := range app.Spec.Components {
if c.Name == o.Component {
if dcName, ok := checkDependsOnValidComponent(c.DependsOn, componentNames); !ok {
return errors.Errorf("component %s not found, which is depended on by %s", dcName, c.Name)
}
step.Inputs = append(step.Inputs, c.Inputs...)
for index := range step.Inputs {
parameterKey := strings.TrimSpace(step.Inputs[index].ParameterKey)
@@ -135,11 +143,23 @@ func convertStepProperties(step *v1beta1.WorkflowStep, app *v1beta1.Application)
step.Properties = util.Object2RawExtension(c)
return nil
}
}
return errors.Errorf("component %s not found", o.Component)
}
func checkDependsOnValidComponent(dependsOnComponentNames, allComponentNames []string) (string, bool) {
// does not depend on other components
if dependsOnComponentNames == nil {
return "", true
}
for _, dc := range dependsOnComponentNames {
if !utils.StringsContain(allComponentNames, dc) {
return dc, false
}
}
return "", true
}
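checkDependsOnValidComponent validates each component's dependsOn list against the set of declared component names, returning the first unknown dependency so the error message can name it. A self-contained sketch of the same check — stringsContain stands in for the utils.StringsContain helper:

```go
package main

import "fmt"

// stringsContain reports whether target appears in list; a stand-in for
// the utils.StringsContain helper used above.
func stringsContain(list []string, target string) bool {
	for _, s := range list {
		if s == target {
			return true
		}
	}
	return false
}

// checkDependsOn mirrors checkDependsOnValidComponent: it returns the first
// dependency that is not a declared component name, or ok=true when all resolve.
func checkDependsOn(dependsOn, allComponents []string) (string, bool) {
	for _, dc := range dependsOn {
		if !stringsContain(allComponents, dc) {
			return dc, false
		}
	}
	return "", true
}

func main() {
	all := []string{"myweb1", "myweb2"}
	missing, ok := checkDependsOn([]string{"myweb1", "myweb0"}, all)
	fmt.Println(missing, ok) // myweb0 false
}
```

This is exactly the case the "invalid dependsOn" tests below exercise: myweb2 depending on the undeclared myweb0 makes workflow generation fail fast instead of hanging on an unsatisfiable dependency.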
func (h *AppHandler) renderComponentFunc(appParser *appfile.Parser, appRev *v1beta1.ApplicationRevision, af *appfile.Appfile) oamProvider.ComponentRender {
return func(comp common.ApplicationComponent, patcher *value.Value, clusterName string, overrideNamespace string, env string) (*unstructured.Unstructured, []*unstructured.Unstructured, error) {
ctx := multicluster.ContextWithClusterName(context.Background(), clusterName)


@@ -240,4 +240,118 @@ var _ = Describe("Test Application workflow generator", func() {
_, _, err = renderFunc(comp, nil, "", "", "")
Expect(err).Should(BeNil())
})
It("Test generate application workflow with dependsOn", func() {
app := &oamcore.Application{
TypeMeta: metav1.TypeMeta{
Kind: "Application",
APIVersion: "core.oam.dev/v1beta1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app-with-input-output",
Namespace: namespaceName,
},
Spec: oamcore.ApplicationSpec{
Components: []common.ApplicationComponent{
{
Name: "myweb1",
Type: "worker-with-health",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
{
Name: "myweb2",
Type: "worker-with-health",
DependsOn: []string{"myweb1"},
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox","lives": "i am lives","enemies": "empty"}`)},
},
},
},
}
af, err := appParser.GenerateAppFile(ctx, app)
Expect(err).Should(BeNil())
appRev := &oamcore.ApplicationRevision{}
handler, err := NewAppHandler(ctx, reconciler, app, appParser)
Expect(err).Should(Succeed())
taskRunner, err := handler.GenerateApplicationSteps(ctx, app, appParser, af, appRev)
Expect(err).To(BeNil())
Expect(len(taskRunner)).Should(BeEquivalentTo(2))
Expect(taskRunner[0].Name()).Should(BeEquivalentTo("myweb1"))
Expect(taskRunner[1].Name()).Should(BeEquivalentTo("myweb2"))
})
It("Test generate application workflow with invalid dependsOn", func() {
app := &oamcore.Application{
TypeMeta: metav1.TypeMeta{
Kind: "Application",
APIVersion: "core.oam.dev/v1beta1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app-with-input-output",
Namespace: namespaceName,
},
Spec: oamcore.ApplicationSpec{
Components: []common.ApplicationComponent{
{
Name: "myweb1",
Type: "worker-with-health",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
{
Name: "myweb2",
Type: "worker-with-health",
DependsOn: []string{"myweb0"},
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox","lives": "i am lives","enemies": "empty"}`)},
},
},
},
}
af, err := appParser.GenerateAppFile(ctx, app)
Expect(err).Should(BeNil())
appRev := &oamcore.ApplicationRevision{}
handler, err := NewAppHandler(ctx, reconciler, app, appParser)
Expect(err).Should(Succeed())
_, err = handler.GenerateApplicationSteps(ctx, app, appParser, af, appRev)
Expect(err).NotTo(BeNil())
})
It("Test generate application workflow with multiple invalid dependsOn", func() {
app := &oamcore.Application{
TypeMeta: metav1.TypeMeta{
Kind: "Application",
APIVersion: "core.oam.dev/v1beta1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app-with-input-output",
Namespace: namespaceName,
},
Spec: oamcore.ApplicationSpec{
Components: []common.ApplicationComponent{
{
Name: "myweb1",
Type: "worker-with-health",
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox"}`)},
},
{
Name: "myweb2",
Type: "worker-with-health",
DependsOn: []string{"myweb1", "myweb0", "myweb3"},
Properties: &runtime.RawExtension{Raw: []byte(`{"cmd":["sleep","1000"],"image":"busybox","lives": "i am lives","enemies": "empty"}`)},
},
},
},
}
af, err := appParser.GenerateAppFile(ctx, app)
Expect(err).Should(BeNil())
appRev := &oamcore.ApplicationRevision{}
handler, err := NewAppHandler(ctx, reconciler, app, appParser)
Expect(err).Should(Succeed())
_, err = handler.GenerateApplicationSteps(ctx, app, appParser, af, appRev)
Expect(err).NotTo(BeNil())
})
})


@@ -410,10 +410,12 @@ func (h *AppHandler) currentAppRevIsNew(ctx context.Context) (bool, bool, error)
if isLatestRev {
appSpec := h.currentAppRev.Spec.Application.Spec
traitDef := h.currentAppRev.Spec.TraitDefinitions
workflowStepDef := h.currentAppRev.Spec.WorkflowStepDefinitions
h.currentAppRev = h.latestAppRev.DeepCopy()
h.currentRevHash = h.app.Status.LatestRevision.RevisionHash
h.currentAppRev.Spec.Application.Spec = appSpec
h.currentAppRev.Spec.TraitDefinitions = traitDef
h.currentAppRev.Spec.WorkflowStepDefinitions = workflowStepDef
return false, false, nil
}
@@ -755,6 +757,7 @@ func (h *AppHandler) FinalizeAndApplyAppRevision(ctx context.Context) error {
appRev.SetGroupVersionKind(v1beta1.ApplicationRevisionGroupVersionKind)
// pass application's annotations & labels to app revision
appRev.SetAnnotations(h.app.GetAnnotations())
delete(appRev.Annotations, oam.AnnotationLastAppliedConfiguration)
appRev.SetLabels(h.app.GetLabels())
util.AddLabels(appRev, map[string]string{
oam.LabelAppName: h.app.GetName(),


@@ -19,6 +19,8 @@ package application
import (
"context"
"encoding/json"
"fmt"
"reflect"
"strconv"
"time"
@@ -825,3 +827,347 @@ var _ = Describe("Test remove SkipAppRev func", func() {
Expect(res.Components[0].Traits[1].Type).Should(BeEquivalentTo("service"))
})
})
var _ = Describe("Test PrepareCurrentAppRevision", func() {
var app v1beta1.Application
var apprev v1beta1.ApplicationRevision
ctx := context.Background()
var handler *AppHandler
BeforeEach(func() {
// prepare ComponentDefinition
var compd v1beta1.ComponentDefinition
Expect(yaml.Unmarshal([]byte(componentDefYaml), &compd)).To(Succeed())
Expect(k8sClient.Create(ctx, &compd)).Should(SatisfyAny(BeNil(), &util.AlreadyExistMatcher{}))
// prepare WorkflowStepDefinition
wsdYaml := `
apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Apply application for your workflow steps
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: apply-application
namespace: vela-system
spec:
schematic:
cue:
template: |
import (
"vela/op"
)
// apply application
output: op.#ApplyApplication & {}
`
var wsd v1beta1.WorkflowStepDefinition
Expect(yaml.Unmarshal([]byte(wsdYaml), &wsd)).To(Succeed())
Expect(k8sClient.Create(ctx, &wsd)).Should(SatisfyAny(BeNil(), &util.AlreadyExistMatcher{}))
// prepare application and application revision
appYaml := `
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: backport-1-2-test-demo
namespace: default
spec:
components:
- name: backport-1-2-test-demo
properties:
image: nginx
type: worker
workflow:
steps:
- name: apply
type: apply-application
status:
latestRevision:
name: backport-1-2-test-demo-v1
revision: 1
revisionHash: 38ddf4e721073703
`
Expect(yaml.Unmarshal([]byte(appYaml), &app)).To(Succeed())
Expect(k8sClient.Create(ctx, &app)).Should(SatisfyAny(BeNil(), &util.AlreadyExistMatcher{}))
// prepare application revision
apprevYaml := `
apiVersion: core.oam.dev/v1beta1
kind: ApplicationRevision
metadata:
name: backport-1-2-test-demo-v1
namespace: default
ownerReferences:
- apiVersion: core.oam.dev/v1beta1
controller: true
kind: Application
name: backport-1-2-test-demo
uid: b69fab34-7058-412b-994d-1465a9421f06
spec:
application:
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: backport-1-2-test-demo
namespace: default
spec:
components:
- name: backport-1-2-test-demo
properties:
image: nginx
type: worker
status: {}
componentDefinitions:
webservice:
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
annotations:
definition.oam.dev/description: Describes long-running, scalable, containerized
services that have a stable network endpoint to receive external network
traffic from customers.
meta.helm.sh/release-name: kubevela
meta.helm.sh/release-namespace: vela-system
labels:
app.kubernetes.io/managed-by: Helm
name: webservice
namespace: vela-system
spec:
schematic:
cue:
template: "import (\n\t\"strconv\"\n)\n\nmountsArray: {\n\tpvc: *[\n\t\tfor
v in parameter.volumeMounts.pvc {\n\t\t\t{\n\t\t\t\tmountPath: v.mountPath\n\t\t\t\tname:
\ v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\tconfigMap: *[\n\t\t\tfor
v in parameter.volumeMounts.configMap {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\tsecret:
*[\n\t\tfor v in parameter.volumeMounts.secret {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\temptyDir:
*[\n\t\t\tfor v in parameter.volumeMounts.emptyDir {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n\n\thostPath:
*[\n\t\t\tfor v in parameter.volumeMounts.hostPath {\n\t\t\t{\n\t\t\t\tmountPath:
v.mountPath\n\t\t\t\tname: v.name\n\t\t\t}\n\t\t},\n\t] | []\n}\nvolumesArray:
{\n\tpvc: *[\n\t\tfor v in parameter.volumeMounts.pvc {\n\t\t\t{\n\t\t\t\tname:
v.name\n\t\t\t\tpersistentVolumeClaim: claimName: v.claimName\n\t\t\t}\n\t\t},\n\t]
| []\n\n\tconfigMap: *[\n\t\t\tfor v in parameter.volumeMounts.configMap
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\tconfigMap: {\n\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\tname: v.cmName\n\t\t\t\t\tif v.items
!= _|_ {\n\t\t\t\t\t\titems: v.items\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]
| []\n\n\tsecret: *[\n\t\tfor v in parameter.volumeMounts.secret {\n\t\t\t{\n\t\t\t\tname:
v.name\n\t\t\t\tsecret: {\n\t\t\t\t\tdefaultMode: v.defaultMode\n\t\t\t\t\tsecretName:
\ v.secretName\n\t\t\t\t\tif v.items != _|_ {\n\t\t\t\t\t\titems: v.items\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]
| []\n\n\temptyDir: *[\n\t\t\tfor v in parameter.volumeMounts.emptyDir
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\temptyDir: medium: v.medium\n\t\t\t}\n\t\t},\n\t]
| []\n\n\thostPath: *[\n\t\t\tfor v in parameter.volumeMounts.hostPath
{\n\t\t\t{\n\t\t\t\tname: v.name\n\t\t\t\thostPath: path: v.path\n\t\t\t}\n\t\t},\n\t]
| []\n}\noutput: {\n\tapiVersion: \"apps/v1\"\n\tkind: \"Deployment\"\n\tspec:
{\n\t\tselector: matchLabels: \"app.oam.dev/component\": context.name\n\n\t\ttemplate:
{\n\t\t\tmetadata: {\n\t\t\t\tlabels: {\n\t\t\t\t\tif parameter.labels
!= _|_ {\n\t\t\t\t\t\tparameter.labels\n\t\t\t\t\t}\n\t\t\t\t\tif parameter.addRevisionLabel
{\n\t\t\t\t\t\t\"app.oam.dev/revision\": context.revision\n\t\t\t\t\t}\n\t\t\t\t\t\"app.oam.dev/component\":
context.name\n\t\t\t\t}\n\t\t\t\tif parameter.annotations != _|_ {\n\t\t\t\t\tannotations:
parameter.annotations\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tspec: {\n\t\t\t\tcontainers:
[{\n\t\t\t\t\tname: context.name\n\t\t\t\t\timage: parameter.image\n\t\t\t\t\tif
parameter[\"port\"] != _|_ && parameter[\"ports\"] == _|_ {\n\t\t\t\t\t\tports:
[{\n\t\t\t\t\t\t\tcontainerPort: parameter.port\n\t\t\t\t\t\t}]\n\t\t\t\t\t}\n\t\t\t\t\tif
parameter[\"ports\"] != _|_ {\n\t\t\t\t\t\tports: [ for v in parameter.ports
{\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tcontainerPort: v.port\n\t\t\t\t\t\t\t\tprotocol:
\ v.protocol\n\t\t\t\t\t\t\t\tif v.name != _|_ {\n\t\t\t\t\t\t\t\t\tname:
v.name\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif v.name == _|_ {\n\t\t\t\t\t\t\t\t\tname:
\"port-\" + strconv.FormatInt(v.port, 10)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}}]\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"imagePullPolicy\"] != _|_ {\n\t\t\t\t\t\timagePullPolicy:
parameter.imagePullPolicy\n\t\t\t\t\t}\n\n\t\t\t\t\tif parameter[\"cmd\"]
!= _|_ {\n\t\t\t\t\t\tcommand: parameter.cmd\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"env\"] != _|_ {\n\t\t\t\t\t\tenv: parameter.env\n\t\t\t\t\t}\n\n\t\t\t\t\tif
context[\"config\"] != _|_ {\n\t\t\t\t\t\tenv: context.config\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"cpu\"] != _|_ {\n\t\t\t\t\t\tresources: {\n\t\t\t\t\t\t\tlimits:
cpu: parameter.cpu\n\t\t\t\t\t\t\trequests: cpu: parameter.cpu\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"memory\"] != _|_ {\n\t\t\t\t\t\tresources: {\n\t\t\t\t\t\t\tlimits:
memory: parameter.memory\n\t\t\t\t\t\t\trequests: memory: parameter.memory\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"volumes\"] != _|_ && parameter[\"volumeMounts\"] == _|_
{\n\t\t\t\t\t\tvolumeMounts: [ for v in parameter.volumes {\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tmountPath:
v.mountPath\n\t\t\t\t\t\t\t\tname: v.name\n\t\t\t\t\t\t\t}}]\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"volumeMounts\"] != _|_ {\n\t\t\t\t\t\tvolumeMounts: mountsArray.pvc
+ mountsArray.configMap + mountsArray.secret + mountsArray.emptyDir
+ mountsArray.hostPath\n\t\t\t\t\t}\n\n\t\t\t\t\tif parameter[\"livenessProbe\"]
!= _|_ {\n\t\t\t\t\t\tlivenessProbe: parameter.livenessProbe\n\t\t\t\t\t}\n\n\t\t\t\t\tif
parameter[\"readinessProbe\"] != _|_ {\n\t\t\t\t\t\treadinessProbe:
parameter.readinessProbe\n\t\t\t\t\t}\n\n\t\t\t\t}]\n\n\t\t\t\tif parameter[\"hostAliases\"]
!= _|_ {\n\t\t\t\t\t// +patchKey=ip\n\t\t\t\t\thostAliases: parameter.hostAliases\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"imagePullSecrets\"] != _|_ {\n\t\t\t\t\timagePullSecrets:
[ for v in parameter.imagePullSecrets {\n\t\t\t\t\t\tname: v\n\t\t\t\t\t},\n\t\t\t\t\t]\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"volumes\"] != _|_ && parameter[\"volumeMounts\"] == _|_
{\n\t\t\t\t\tvolumes: [ for v in parameter.volumes {\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tname:
v.name\n\t\t\t\t\t\t\tif v.type == \"pvc\" {\n\t\t\t\t\t\t\t\tpersistentVolumeClaim:
claimName: v.claimName\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif v.type ==
\"configMap\" {\n\t\t\t\t\t\t\t\tconfigMap: {\n\t\t\t\t\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\t\t\t\t\tname: v.cmName\n\t\t\t\t\t\t\t\t\tif
v.items != _|_ {\n\t\t\t\t\t\t\t\t\t\titems: v.items\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif
v.type == \"secret\" {\n\t\t\t\t\t\t\t\tsecret: {\n\t\t\t\t\t\t\t\t\tdefaultMode:
v.defaultMode\n\t\t\t\t\t\t\t\t\tsecretName: v.secretName\n\t\t\t\t\t\t\t\t\tif
v.items != _|_ {\n\t\t\t\t\t\t\t\t\t\titems: v.items\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif
v.type == \"emptyDir\" {\n\t\t\t\t\t\t\t\temptyDir: medium: v.medium\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}]\n\t\t\t\t}\n\n\t\t\t\tif
parameter[\"volumeMounts\"] != _|_ {\n\t\t\t\t\tvolumes: volumesArray.pvc
+ volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir
+ volumesArray.hostPath\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\nexposePorts:
[\n\tfor v in parameter.ports if v.expose == true {\n\t\tport: v.port\n\t\ttargetPort:
v.port\n\t\tif v.name != _|_ {\n\t\t\tname: v.name\n\t\t}\n\t\tif v.name
== _|_ {\n\t\t\tname: \"port-\" + strconv.FormatInt(v.port, 10)\n\t\t}\n\t},\n]\noutputs:
{\n\tif len(exposePorts) != 0 {\n\t\twebserviceExpose: {\n\t\t\tapiVersion:
\"v1\"\n\t\t\tkind: \"Service\"\n\t\t\tmetadata: name: context.name\n\t\t\tspec:
{\n\t\t\t\tselector: \"app.oam.dev/component\": context.name\n\t\t\t\tports:
exposePorts\n\t\t\t\ttype: parameter.exposeType\n\t\t\t}\n\t\t}\n\t}\n}\nparameter:
{\n\t// +usage=Specify the labels in the workload\n\tlabels?: [string]:
string\n\n\t// +usage=Specify the annotations in the workload\n\tannotations?:
[string]: string\n\n\t// +usage=Which image would you like to use for
your service\n\t// +short=i\n\timage: string\n\n\t// +usage=Specify
image pull policy for your service\n\timagePullPolicy?: \"Always\" |
\"Never\" | \"IfNotPresent\"\n\n\t// +usage=Specify image pull secrets
for your service\n\timagePullSecrets?: [...string]\n\n\t// +ignore\n\t//
+usage=Deprecated field, please use ports instead\n\t// +short=p\n\tport?:
int\n\n\t// +usage=Which ports do you want customer traffic sent to,
defaults to 80\n\tports?: [...{\n\t\t// +usage=Number of port to expose
on the pod's IP address\n\t\tport: int\n\t\t// +usage=Name of the port\n\t\tname?:
string\n\t\t// +usage=Protocol for port. Must be UDP, TCP, or SCTP\n\t\tprotocol:
*\"TCP\" | \"UDP\" | \"SCTP\"\n\t\t// +usage=Specify if the port should
be exposed\n\t\texpose: *false | bool\n\t}]\n\n\t// +ignore\n\t// +usage=Specify
what kind of Service you want. options: \"ClusterIP\", \"NodePort\",
\"LoadBalancer\", \"ExternalName\"\n\texposeType: *\"ClusterIP\" | \"NodePort\"
| \"LoadBalancer\" | \"ExternalName\"\n\n\t// +ignore\n\t// +usage=If
addRevisionLabel is true, the revision label will be added to the underlying
pods\n\taddRevisionLabel: *false | bool\n\n\t// +usage=Commands to run
in the container\n\tcmd?: [...string]\n\n\t// +usage=Define arguments
by using environment variables\n\tenv?: [...{\n\t\t// +usage=Environment
variable name\n\t\tname: string\n\t\t// +usage=The value of the environment
variable\n\t\tvalue?: string\n\t\t// +usage=Specifies a source the value
of this var should come from\n\t\tvalueFrom?: {\n\t\t\t// +usage=Selects
a key of a secret in the pod's namespace\n\t\t\tsecretKeyRef?: {\n\t\t\t\t//
+usage=The name of the secret in the pod's namespace to select from\n\t\t\t\tname:
string\n\t\t\t\t// +usage=The key of the secret to select from. Must
be a valid secret key\n\t\t\t\tkey: string\n\t\t\t}\n\t\t\t// +usage=Selects
a key of a config map in the pod's namespace\n\t\t\tconfigMapKeyRef?:
{\n\t\t\t\t// +usage=The name of the config map in the pod's namespace
to select from\n\t\t\t\tname: string\n\t\t\t\t// +usage=The key of the
config map to select from. Must be a valid secret key\n\t\t\t\tkey:
string\n\t\t\t}\n\t\t}\n\t}]\n\n\t// +usage=Number of CPU units for
the service, like \n\tcpu?: string\n\n\t//
+usage=Specifies the attributes of the memory resource required for
the container.\n\tmemory?: string\n\n\tvolumeMounts?: {\n\t\t// +usage=Mount
PVC type volume\n\t\tpvc?: [...{\n\t\t\tname: string\n\t\t\tmountPath:
string\n\t\t\t// +usage=The name of the PVC\n\t\t\tclaimName: string\n\t\t}]\n\t\t//
+usage=Mount ConfigMap type volume\n\t\tconfigMap?: [...{\n\t\t\tname:
\ string\n\t\t\tmountPath: string\n\t\t\tdefaultMode: *420 |
int\n\t\t\tcmName: string\n\t\t\titems?: [...{\n\t\t\t\tkey: string\n\t\t\t\tpath:
string\n\t\t\t\tmode: *511 | int\n\t\t\t}]\n\t\t}]\n\t\t// +usage=Mount
Secret type volume\n\t\tsecret?: [...{\n\t\t\tname: string\n\t\t\tmountPath:
\ string\n\t\t\tdefaultMode: *420 | int\n\t\t\tsecretName: string\n\t\t\titems?:
[...{\n\t\t\t\tkey: string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511
| int\n\t\t\t}]\n\t\t}]\n\t\t// +usage=Mount EmptyDir type volume\n\t\temptyDir?:
[...{\n\t\t\tname: string\n\t\t\tmountPath: string\n\t\t\tmedium:
\ *\"\" | \"Memory\"\n\t\t}]\n\t\t// +usage=Mount HostPath type volume\n\t\thostPath?:
[...{\n\t\t\tname: string\n\t\t\tmountPath: string\n\t\t\tpath:
\ string\n\t\t}]\n\t}\n\n\t// +usage=Deprecated field, use volumeMounts
instead.\n\tvolumes?: [...{\n\t\tname: string\n\t\tmountPath: string\n\t\t//
+usage=Specify volume type, options: \"pvc\",\"configMap\",\"secret\",\"emptyDir\"\n\t\ttype:
\"pvc\" | \"configMap\" | \"secret\" | \"emptyDir\"\n\t\tif type ==
\"pvc\" {\n\t\t\tclaimName: string\n\t\t}\n\t\tif type == \"configMap\"
{\n\t\t\tdefaultMode: *420 | int\n\t\t\tcmName: string\n\t\t\titems?:
[...{\n\t\t\t\tkey: string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511
| int\n\t\t\t}]\n\t\t}\n\t\tif type == \"secret\" {\n\t\t\tdefaultMode:
*420 | int\n\t\t\tsecretName: string\n\t\t\titems?: [...{\n\t\t\t\tkey:
\ string\n\t\t\t\tpath: string\n\t\t\t\tmode: *511 | int\n\t\t\t}]\n\t\t}\n\t\tif
type == \"emptyDir\" {\n\t\t\tmedium: *\"\" | \"Memory\"\n\t\t}\n\t}]\n\n\t//
+usage=Instructions for assessing whether the container is alive.\n\tlivenessProbe?:
#HealthProbe\n\n\t// +usage=Instructions for assessing whether the container
is in a suitable state to serve traffic.\n\treadinessProbe?: #HealthProbe\n\n\t//
+usage=Specify the hostAliases to add\n\thostAliases?: [...{\n\t\tip:
string\n\t\thostnames: [...string]\n\t}]\n}\n#HealthProbe: {\n\n\t//
+usage=Instructions for assessing container health by executing a command.
Either this attribute or the httpGet attribute or the tcpSocket attribute
MUST be specified. This attribute is mutually exclusive with both the
httpGet attribute and the tcpSocket attribute.\n\texec?: {\n\t\t// +usage=A
command to be executed inside the container to assess its health. Each
space delimited token of the command is a separate array element. Commands
exiting 0 are considered to be successful probes, whilst all other exit
codes are considered failures.\n\t\tcommand: [...string]\n\t}\n\n\t//
+usage=Instructions for assessing container health by executing an HTTP
GET request. Either this attribute or the exec attribute or the tcpSocket
attribute MUST be specified. This attribute is mutually exclusive with
both the exec attribute and the tcpSocket attribute.\n\thttpGet?: {\n\t\t//
+usage=The endpoint, relative to the port, to which the HTTP GET request
should be directed.\n\t\tpath: string\n\t\t// +usage=The TCP socket
within the container to which the HTTP GET request should be directed.\n\t\tport:
int\n\t\thttpHeaders?: [...{\n\t\t\tname: string\n\t\t\tvalue: string\n\t\t}]\n\t}\n\n\t//
+usage=Instructions for assessing container health by probing a TCP
socket. Either this attribute or the exec attribute or the httpGet attribute
MUST be specified. This attribute is mutually exclusive with both the
exec attribute and the httpGet attribute.\n\ttcpSocket?: {\n\t\t// +usage=The
TCP socket within the container that should be probed to assess container
health.\n\t\tport: int\n\t}\n\n\t// +usage=Number of seconds after the
container is started before the first probe is initiated.\n\tinitialDelaySeconds:
*0 | int\n\n\t// +usage=How often, in seconds, to execute the probe.\n\tperiodSeconds:
*10 | int\n\n\t// +usage=Number of seconds after which the probe times
out.\n\ttimeoutSeconds: *1 | int\n\n\t// +usage=Minimum consecutive
successes for the probe to be considered successful after having failed.\n\tsuccessThreshold:
*1 | int\n\n\t// +usage=Number of consecutive failures required to determine
the container is not alive (liveness probe) or not ready (readiness
probe).\n\tfailureThreshold: *3 | int\n}\n"
status:
customStatus: "ready: {\n\treadyReplicas: *0 | int\n} & {\n\tif context.output.status.readyReplicas
!= _|_ {\n\t\treadyReplicas: context.output.status.readyReplicas\n\t}\n}\nmessage:
\"Ready:\\(ready.readyReplicas)/\\(context.output.spec.replicas)\""
healthPolicy: "ready: {\n\tupdatedReplicas: *0 | int\n\treadyReplicas:
\ *0 | int\n\treplicas: *0 | int\n\tobservedGeneration:
*0 | int\n} & {\n\tif context.output.status.updatedReplicas != _|_ {\n\t\tupdatedReplicas:
context.output.status.updatedReplicas\n\t}\n\tif context.output.status.readyReplicas
!= _|_ {\n\t\treadyReplicas: context.output.status.readyReplicas\n\t}\n\tif
context.output.status.replicas != _|_ {\n\t\treplicas: context.output.status.replicas\n\t}\n\tif
context.output.status.observedGeneration != _|_ {\n\t\tobservedGeneration:
context.output.status.observedGeneration\n\t}\n}\nisHealth: (context.output.spec.replicas
== ready.readyReplicas) && (context.output.spec.replicas == ready.updatedReplicas)
&& (context.output.spec.replicas == ready.replicas) && (ready.observedGeneration
== context.output.metadata.generation || ready.observedGeneration > context.output.metadata.generation)"
workload:
definition:
apiVersion: apps/v1
kind: Deployment
type: deployments.apps
status: {}
status: {}
`
Expect(yaml.Unmarshal([]byte(apprevYaml), &apprev)).To(Succeed())
	// simulate the 1.2 behavior, where WorkflowStepDefinitions are not patched into the application revision
apprev.ObjectMeta.OwnerReferences[0].UID = app.ObjectMeta.UID
Expect(k8sClient.Create(ctx, &apprev)).Should(SatisfyAny(BeNil(), &util.AlreadyExistMatcher{}))
// prepare handler
_handler, err := NewAppHandler(ctx, reconciler, &app, nil)
Expect(err).Should(Succeed())
handler = _handler
})
It("Test currentAppRevIsNew func", func() {
	By("Backport the 1.2 behavior in which WorkflowStepDefinitions are not patched to the application revision")
// generate appfile
appfile, err := appfile.NewApplicationParser(reconciler.Client, reconciler.dm, reconciler.pd).GenerateAppFile(ctx, &app)
ctx = util.SetNamespaceInCtx(ctx, app.Namespace)
Expect(err).To(Succeed())
Expect(handler.PrepareCurrentAppRevision(ctx, appfile)).Should(Succeed())
// prepare apprev
thisWSD := handler.currentAppRev.Spec.WorkflowStepDefinitions
Expect(len(thisWSD) > 0 && func() bool {
expected := appfile.RelatedWorkflowStepDefinitions
for i, w := range thisWSD {
expW := *(expected[i])
if !reflect.DeepEqual(w, expW) {
				fmt.Printf("appfile wsd: %s, apprev wsd: %s\n", w.Name, expW.Name)
return false
}
}
return true
}()).Should(BeTrue())
})
})
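The CUE template above defines the `webservice` component's parameters (ports, env, cpu/memory, volume mounts, and health probes). A minimal sketch of an Application exercising those parameters might look like the following; the application name, component name, and image are illustrative placeholders, not taken from this diff:

```yaml
# Illustrative only: exercises the webservice parameters from the CUE
# template above. Names and image are hypothetical.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app            # hypothetical name
spec:
  components:
    - name: web             # hypothetical component name
      type: webservice
      properties:
        image: nginx:1.21   # hypothetical image
        ports:
          - port: 80
            expose: true
        env:
          - name: MODE
            value: production
        cpu: "0.5"
        memory: 256Mi
        # exec, httpGet, and tcpSocket are mutually exclusive per the
        # #HealthProbe definition above; exactly one must be set.
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
```

Per the `healthPolicy` above, such a component is reported healthy only once `readyReplicas`, `updatedReplicas`, and `replicas` all equal `spec.replicas` and the observed generation has caught up with the Deployment's metadata generation.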


@@ -28,7 +28,7 @@ import (
"github.com/crossplane/crossplane-runtime/pkg/event"
"github.com/go-logr/logr"
terraformv1beta1 "github.com/oam-dev/terraform-controller/api/v1beta1"
terraformv1beta2 "github.com/oam-dev/terraform-controller/api/v1beta2"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/pkg/errors"
@@ -121,7 +121,7 @@ var _ = BeforeSuite(func(done Done) {
err = scheme.AddToScheme(testScheme)
Expect(err).NotTo(HaveOccurred())
terraformv1beta1.AddToScheme(testScheme)
terraformv1beta2.AddToScheme(testScheme)
crdv1.AddToScheme(testScheme)


@@ -1,20 +1,11 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.5
controller-gen.kubebuilder.io/version: v0.6.0
creationTimestamp: null
name: configurations.terraform.core.oam.dev
spec:
additionalPrinterColumns:
- JSONPath: .status.state
name: STATE
type: string
- JSONPath: .metadata.creationTimestamp
name: AGE
type: date
group: terraform.core.oam.dev
names:
kind: Configuration
@@ -22,96 +13,294 @@ spec:
plural: configurations
singular: configuration
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: Configuration is the Schema for the configurations API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ConfigurationSpec defines the desired state of Configuration
properties:
JSON:
description: JSON is the Terraform JSON syntax configuration
type: string
backend:
description: Backend stores the state in a Kubernetes secret with locking
done using a Lease resource. TODO(zzxwill) If a backend exists in
HCL/JSON, this can be optional. Currently, if Backend is not set by
users, it still will set by the controller, ignoring the settings
in HCL/JSON backend
properties:
inClusterConfig:
description: InClusterConfig Used to authenticate to the cluster
from inside a pod. Only `true` is allowed
type: boolean
secretSuffix:
description: 'SecretSuffix used when creating secrets. Secrets will
be named in the format: tfstate-{workspace}-{secretSuffix}'
type: string
type: object
hcl:
description: HCL is the Terraform HCL type configuration
type: string
variable:
type: object
x-kubernetes-preserve-unknown-fields: true
writeConnectionSecretToRef:
description: WriteConnectionSecretToReference specifies the namespace
and name of a Secret to which any connection details for this managed
resource should be written. Connection details frequently include
the endpoint, username, and password required to connect to the managed
resource.
properties:
name:
description: Name of the secret.
type: string
namespace:
description: Namespace of the secret.
type: string
required:
- name
type: object
type: object
status:
description: ConfigurationStatus defines the observed state of Configuration
properties:
message:
type: string
outputs:
additionalProperties:
properties:
type:
type: string
value:
type: string
type: object
type: object
state:
description: A ResourceState represents the status of a resource
type: string
type: object
type: object
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
- additionalPrinterColumns:
- jsonPath: .status.apply.state
name: STATE
type: string
- jsonPath: .metadata.creationTimestamp
name: AGE
type: date
name: v1beta1
schema:
openAPIV3Schema:
description: Configuration is the Schema for the configurations API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ConfigurationSpec defines the desired state of Configuration
properties:
JSON:
description: 'JSON is the Terraform JSON syntax configuration. Deprecated:
after v0.3.1, use HCL instead.'
type: string
backend:
description: Backend stores the state in a Kubernetes secret with
locking done using a Lease resource. TODO(zzxwill) If a backend
exists in HCL/JSON, this can be optional. Currently, if Backend
is not set by users, it still will set by the controller, ignoring
the settings in HCL/JSON backend
properties:
inClusterConfig:
description: InClusterConfig Used to authenticate to the cluster
from inside a pod. Only `true` is allowed
type: boolean
secretSuffix:
description: 'SecretSuffix used when creating secrets. Secrets
will be named in the format: tfstate-{workspace}-{secretSuffix}'
type: string
type: object
deleteResource:
default: true
description: DeleteResource will determine whether provisioned cloud
resources will be deleted when CR is deleted
type: boolean
hcl:
description: HCL is the Terraform HCL type configuration
type: string
path:
description: Path is the sub-directory of remote git repository.
type: string
providerRef:
description: ProviderReference specifies the reference to Provider
properties:
name:
description: Name of the referenced object.
type: string
namespace:
default: default
description: Namespace of the referenced object.
type: string
required:
- name
type: object
region:
description: Region is cloud provider's region. It will override the
region in the region field of ProviderReference
type: string
remote:
description: Remote is a git repo which contains hcl files. Currently,
only public git repos are supported.
type: string
variable:
type: object
x-kubernetes-preserve-unknown-fields: true
writeConnectionSecretToRef:
description: WriteConnectionSecretToReference specifies the namespace
and name of a Secret to which any connection details for this managed
resource should be written. Connection details frequently include
the endpoint, username, and password required to connect to the
managed resource.
properties:
name:
description: Name of the secret.
type: string
namespace:
description: Namespace of the secret.
type: string
required:
- name
type: object
type: object
status:
description: ConfigurationStatus defines the observed state of Configuration
properties:
apply:
description: ConfigurationApplyStatus is the status for Configuration
apply
properties:
message:
type: string
outputs:
additionalProperties:
description: Property is the property for an output
properties:
type:
type: string
value:
type: string
type: object
type: object
state:
description: A ConfigurationState represents the status of a resource
type: string
type: object
destroy:
description: ConfigurationDestroyStatus is the status for Configuration
destroy
properties:
message:
type: string
state:
description: A ConfigurationState represents the status of a resource
type: string
type: object
observedGeneration:
description: observedGeneration is the most recent generation observed
for this Configuration. It corresponds to the Configuration's generation,
which is updated on mutation by the API Server.
format: int64
type: integer
type: object
type: object
served: true
storage: false
subresources:
status: {}
- additionalPrinterColumns:
- jsonPath: .status.apply.state
name: STATE
type: string
- jsonPath: .metadata.creationTimestamp
name: AGE
type: date
name: v1beta2
schema:
openAPIV3Schema:
description: Configuration is the Schema for the configurations API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: ConfigurationSpec defines the desired state of Configuration
properties:
backend:
description: Backend stores the state in a Kubernetes secret with
locking done using a Lease resource. TODO(zzxwill) If a backend
exists in HCL/JSON, this can be optional. Currently, if Backend
is not set by users, it still will set by the controller, ignoring
the settings in HCL/JSON backend
properties:
inClusterConfig:
description: InClusterConfig Used to authenticate to the cluster
from inside a pod. Only `true` is allowed
type: boolean
secretSuffix:
description: 'SecretSuffix used when creating secrets. Secrets
will be named in the format: tfstate-{workspace}-{secretSuffix}'
type: string
type: object
customRegion:
description: Region is cloud provider's region. It will override the
region in the region field of ProviderReference
type: string
deleteResource:
default: true
description: DeleteResource will determine whether provisioned cloud
resources will be deleted when CR is deleted
type: boolean
hcl:
description: HCL is the Terraform HCL type configuration
type: string
path:
description: Path is the sub-directory of remote git repository.
type: string
providerRef:
description: ProviderReference specifies the reference to Provider
properties:
name:
description: Name of the referenced object.
type: string
namespace:
default: default
description: Namespace of the referenced object.
type: string
required:
- name
type: object
remote:
description: Remote is a git repo which contains hcl files. Currently,
only public git repos are supported.
type: string
variable:
type: object
x-kubernetes-preserve-unknown-fields: true
writeConnectionSecretToRef:
description: WriteConnectionSecretToReference specifies the namespace
and name of a Secret to which any connection details for this managed
resource should be written. Connection details frequently include
the endpoint, username, and password required to connect to the
managed resource.
properties:
name:
description: Name of the secret.
type: string
namespace:
description: Namespace of the secret.
type: string
required:
- name
type: object
type: object
status:
description: ConfigurationStatus defines the observed state of Configuration
properties:
apply:
description: ConfigurationApplyStatus is the status for Configuration
apply
properties:
message:
type: string
outputs:
additionalProperties:
description: Property is the property for an output
properties:
value:
type: string
type: object
type: object
state:
description: A ConfigurationState represents the status of a resource
type: string
type: object
destroy:
description: ConfigurationDestroyStatus is the status for Configuration
destroy
properties:
message:
type: string
state:
description: A ConfigurationState represents the status of a resource
type: string
type: object
observedGeneration:
description: observedGeneration is the most recent generation observed
for this Configuration. It corresponds to the Configuration's generation,
which is updated on mutation by the API Server. If ObservedGeneration
equals Generation, and State is Available, the value of Outputs
is latest
format: int64
type: integer
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

Some files were not shown because too many files have changed in this diff.
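The CRD diff above adds a `v1beta2` version of `configurations.terraform.core.oam.dev` (with `v1beta1` kept served but no longer the storage version). A minimal sketch of a Configuration written against the new `v1beta2` schema could look like this; the resource name, HCL body, and Provider name are hypothetical placeholders:

```yaml
# Illustrative only: a minimal v1beta2 Configuration matching the schema
# added in this diff. HCL body and names are placeholders.
apiVersion: terraform.core.oam.dev/v1beta2
kind: Configuration
metadata:
  name: demo-random-id        # hypothetical name
spec:
  hcl: |
    resource "random_id" "server" {
      byte_length = 8
    }
  deleteResource: true        # matches the CRD default; shown for clarity
  providerRef:
    name: default             # hypothetical Provider object
    namespace: default        # CRD default namespace
  writeConnectionSecretToRef:
    name: demo-random-id-conn # hypothetical Secret name
    namespace: default
```

Note that `v1beta2` renames the top-level `region` override to `customRegion` and drops the deprecated `JSON` field that `v1beta1` still carries.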