Compare commits

28 Commits

Author SHA1 Message Date
github-actions[bot]
eb9ddaabd3 [Backport release-1.4] Feat: optimize the APIs that list and detail definitions (#4160)
* Fix: ignore the error when the definition API schema does not exist

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 855716d989)

* Fix: disable the cache when listing the definitions

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit f25882974d)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-06-13 13:25:41 +08:00
github-actions[bot]
f11a94612f Feat: support insecure cluster (#4159)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 8abedf3005)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-06-13 13:15:34 +08:00
github-actions[bot]
56f9d7cb9c [Backport release-1.4] Fix: mongoDB datastore can't list special email user(#4104) (#4148)
* Add description column to vela trait and component command (#4107)

Signed-off-by: Holger Protzek <holger.protzek@springernature.com>
Signed-off-by: fengkang <fengkangb@digitalchina.com>
(cherry picked from commit 9c57ae2a15)

* Fix: mongoDB datastore can't list special email user(#4104)

Signed-off-by: fengkang <fengkangb@digitalchina.com>
(cherry picked from commit ec07790935)

* Fix: mongoDB datastore can't list special email user(#4104)
     change the function name from verifyUserValue to verifyValue
     add test case to test kubeapi.go:87

Signed-off-by: fengkang <fengkangb@digitalchina.com>
(cherry picked from commit 4c55e0688f)

* Fix: mongoDB datastore can't list special email user(#4104)
     change the function name from verifyUserValue to verifyValue
     add test case to test kubeapi.go:87
     add delete test case

Signed-off-by: fengkang <fengkangb@digitalchina.com>
(cherry picked from commit b2e76001c9)

* Fix: mongoDB datastore can't list special email user(#4104)
     optimize the test case

Signed-off-by: fengkang <fengkangb@digitalchina.com>
(cherry picked from commit a63c96fae1)

* Fix: mongoDB datastore can't list special email user(#4104)
     optimize the test case use user
     change all verify timing in kubeapi

Signed-off-by: fengkang <fengkangb@digitalchina.com>
(cherry picked from commit 2ede4f7b0c)

* Fix: mongoDB datastore can't list special email user(#4104)

Signed-off-by: fengkang <fengkangb@digitalchina.com>
(cherry picked from commit 356150642f)

* Fix: mongoDB datastore can't list special email user(#4104)

Signed-off-by: fengkang <fengkangb@digitalchina.com>
(cherry picked from commit 48a9e55937)

Co-authored-by: Holger Protzek <3481523+hprotzek@users.noreply.github.com>
Co-authored-by: fengkang01 <fengkangb@digitalchina.com>
2022-06-10 15:33:53 +08:00
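The fix above addresses listing users whose names are email addresses. A minimal sketch of that failure class (the helper names here are illustrative, not KubeVela's code; the actual change renamed verifyUserValue to verifyValue): an email used verbatim as a regex pattern over-matches, because `.` matches any character, while quoting the value restores exact matching.

```go
package main

import (
	"fmt"
	"regexp"
)

// matchNaive builds a pattern from the raw value, so regex
// metacharacters in an email ('.') are interpreted.
func matchNaive(value, candidate string) bool {
	ok, _ := regexp.MatchString("^"+value+"$", candidate)
	return ok
}

// matchQuoted escapes the value first, so it is compared literally.
func matchQuoted(value, candidate string) bool {
	ok, _ := regexp.MatchString("^"+regexp.QuoteMeta(value)+"$", candidate)
	return ok
}

func main() {
	fmt.Println(matchNaive("a.b@example.com", "axb@example.com"))  // true: accidental match
	fmt.Println(matchQuoted("a.b@example.com", "axb@example.com")) // false
	fmt.Println(matchQuoted("a.b@example.com", "a.b@example.com")) // true
}
```

`regexp.QuoteMeta` escapes every metacharacter, so values such as emails match only themselves.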
github-actions[bot]
fbbc666019 [Backport release-1.4] Fix: a non-existent API doesn't break the whole query process (#4137)
* make the resource tree more robust

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 327ec35cf4)

* log the error

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 1136afe78c)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-06-09 11:36:57 +08:00
github-actions[bot]
d0788254cb Fix: vela addon registry get panic (#4136)
Signed-off-by: ZhongsJie <zhongsjie@gmail.com>
(cherry picked from commit 0d329e394e)

Co-authored-by: ZhongsJie <zhongsjie@gmail.com>
2022-06-09 10:19:28 +08:00
github-actions[bot]
c72a6aef87 Chore(deps): Bump github.com/containerd/containerd from 1.5.10 to 1.5.13 (#4125)
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.5.10 to 1.5.13.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.5.10...v1.5.13)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
(cherry picked from commit 9982c7ceaa)

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-08 12:56:05 +08:00
github-actions[bot]
195b7fe0c7 [Backport release-1.4] Fix: bump oamdev/kube-webhook-certgen to v2.4.1 to support arm64 (#4117)
* Fix: split the image build process to make it faster

Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit 339977e874)

* Fix: bump oamdev/kube-webhook-certgen to v2.4.1 to support arm64

Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit dc054e2ce2)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-06-05 15:10:48 +08:00
github-actions[bot]
33c9e3b170 Fix: change the image name in ghcr to align with docker image registry (#4111)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit 71776c374b)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-06-04 14:19:21 +08:00
github-actions[bot]
ea0508a634 [Backport release-1.4] Fix: hold the force uninstalling process until the last addon has been deleted (#4103)
* hold the force uninstalling process until the last addon has been deleted

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix lint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 1d4266432c)

* fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix lint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit cecd8c28d0)

* add period

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 62e4dac538)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-06-02 16:29:02 +08:00
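The uninstall fix above makes the force path block until the last addon is gone. A hedged sketch of that wait pattern using only the standard library (the probe, interval, and timeout here are illustrative, not KubeVela's actual API):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitUntilGone polls probe until it reports the resource no longer
// exists, or until the timeout elapses.
func waitUntilGone(probe func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if probe() { // true once the last addon is deleted
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for addon deletion")
}

func main() {
	remaining := 3
	err := waitUntilGone(func() bool {
		remaining-- // simulate addons being cleaned up one by one
		return remaining <= 0
	}, 10*time.Millisecond, time.Second)
	fmt.Println(err == nil) // true
}
```

Real controllers typically use a library helper (e.g. apimachinery's wait package) instead of a hand-rolled loop, but the blocking semantics are the same.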
github-actions[bot]
23e29aa62a Fix: vela provider delete command's example is wrong (#4099)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
(cherry picked from commit 9b66f90100)

Co-authored-by: StevenLeiZhang <zhangleiic@163.com>
2022-06-02 11:04:34 +08:00
github-actions[bot]
ed2cb80219 Fix: the new default values do not take effect when upgrading the vela core (#4097)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 0ccbbb2636)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-06-02 10:23:30 +08:00
github-actions[bot]
1a3d5debd5 Fix: show the default password (#4095)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit c3cfc4729e)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-06-02 10:14:43 +08:00
github-actions[bot]
d4a82fe292 Fix: load the provider subcommands on demand (#4090)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 52bbf937bb)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-06-01 16:32:53 +08:00
github-actions[bot]
963ae400fa Feat: use deferred config in CLI (#4085)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 478434003b)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-06-01 10:24:57 +08:00
github-actions[bot]
8f7a8258fe [Backport release-1.4] Fix(addon): more note info and filter prerelease addon version (#4082)
* more note info and filter prerelease addon version

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 6eefcd2c14)

* wrap the error and optimize the shown info

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 00ad8304c8)

* fix golint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 9de32d5474)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-05-31 18:56:54 +08:00
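The addon change above filters pre-release versions out of the default listing. A rough sketch of such a filter, assuming semver-style tags (`isPrerelease` is a hypothetical name; real code would use a semver library rather than string inspection):

```go
package main

import (
	"fmt"
	"strings"
)

// isPrerelease reports whether a semver-style version string carries a
// pre-release suffix such as "-beta.1". Build metadata after '+' is not
// a pre-release marker per the semver spec.
func isPrerelease(v string) bool {
	v = strings.TrimPrefix(v, "v")
	base := strings.SplitN(v, "+", 2)[0]
	return strings.Contains(base, "-")
}

func main() {
	fmt.Println(isPrerelease("v1.4.0-beta.1")) // true
	fmt.Println(isPrerelease("v1.4.0"))        // false
	fmt.Println(isPrerelease("1.4.0+build.7")) // false
}
```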
github-actions[bot]
70bc306678 Chore: hide some definitions in VelaUX (#4080)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 68b8db1d00)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-05-31 16:41:54 +08:00
github-actions[bot]
57428bbc8d Fix: change the region to customRegion (#4079)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 9f0008f001)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-05-31 16:19:37 +08:00
github-actions[bot]
e08541ca5c Fix: Improve vela provider add response (#4078)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
(cherry picked from commit 39e96d287b)

Co-authored-by: StevenLeiZhang <zhangleiic@163.com>
2022-05-31 16:16:54 +08:00
github-actions[bot]
521a4edc10 Feat: vela ql into vela-cli (#4077)
Signed-off-by: Shukun Zhang <2236407598@qq.com>
(cherry picked from commit 38886a0ee2)

Co-authored-by: Shukun Zhang <2236407598@qq.com>
2022-05-31 16:13:00 +08:00
github-actions[bot]
82b330710c Fix: upgrade from v1.3+ to v1.4+ with new secret for cluster-gateway (#4076)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit 161588c64e)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-05-31 16:05:36 +08:00
github-actions[bot]
4a649f2cf1 Fix: Can not delete terraform provider (#4074)
Signed-off-by: StevenLeiZhang <zhangleiic@163.com>
(cherry picked from commit de79442f53)

Co-authored-by: StevenLeiZhang <zhangleiic@163.com>
2022-05-31 15:21:17 +08:00
github-actions[bot]
f6664106a2 Fix: remove the tcp protocol prefix in the endpoint string (#4068)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 8852ac691a)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-05-30 20:17:47 +08:00
github-actions[bot]
bbe2a2dec6 [Backport release-1.4] Fix: CI workflow for rollout acr image build and push (#4067)
* fix the rollout acr image

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit f6f4a36909)

* fix

test

test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

finish fix

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 1ce37d8ce1)

* merge two sections

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit adf2517e72)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-05-30 19:35:47 +08:00
github-actions[bot]
404c7f6975 Fix: set workflow to finish before record in controller revision (#4066)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 7042c63e90)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-05-30 19:30:03 +08:00
github-actions[bot]
2edfbabdab Fix: fix the dependency gc policy to reverse dependency (#4065)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 45cf38edb0)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-05-30 19:29:42 +08:00
github-actions[bot]
e7b304de3b Fix: the policies can not be deleted (#4062)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 0c11a69779)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-05-30 16:23:03 +08:00
github-actions[bot]
b8b54baf26 Fix: fail to get the endpoints via the velaql (#4058)
Signed-off-by: yangsoon <songyang.song@alibaba-inc.com>
(cherry picked from commit a40d99f8d8)

Co-authored-by: yangsoon <songyang.song@alibaba-inc.com>
2022-05-30 15:52:38 +08:00
github-actions[bot]
87b6c9416e [Backport release-1.4] Feat: minimize controller privileges & enforce authentication in multicluster e2e test (#4044)
* Feat: enable auth in multicluster test & restrict controller privileges while enabling authentication

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit fc3fc39eb0)

* Feat: fix statekeep permission leak & comprev cleanup leak

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit eaa317316d)

* Fix: use user info in ref-object select

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 67463d13fe)

* Feat: set legacy-rt-gc to disabled by default

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 77f1fc4286)

* Fix: pending healthscope with authentication test

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit c21ae8ac6a)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-05-28 01:27:35 +08:00
83 changed files with 1147 additions and 302 deletions

@@ -103,7 +103,7 @@ jobs:
run: |
make e2e-cleanup
make vela-cli
- make e2e-setup-core
+ make e2e-setup-core-auth
make
make setup-runtime-e2e-cluster

@@ -15,7 +15,7 @@ env:
ARTIFACT_HUB_REPOSITORY_ID: ${{ secrets.ARTIFACT_HUB_REPOSITORY_ID }}
jobs:
- publish-images:
+ publish-core-images:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
@@ -55,12 +55,8 @@ jobs:
with:
driver-opts: image=moby/buildkit:master
- name: Build & Pushing vela-core for ACR
run: |
docker build --build-arg GOPROXY=https://proxy.golang.org --build-arg VERSION=${{ steps.get_version.outputs.VERSION }} --build-arg GITVERSION=git-${{ steps.vars.outputs.git_revision }} -t kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }} .
docker push kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
- uses: docker/build-push-action@v2
- name: Build & Pushing vela-core for Dockerhub and GHCR
+ name: Build & Pushing vela-core for Dockerhub, GHCR and ACR
with:
context: .
file: Dockerfile
@@ -75,36 +71,11 @@ jobs:
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
- ghcr.io/${{ github.repository }}/vela-core:${{ steps.get_version.outputs.VERSION }}
+ ghcr.io/${{ github.repository_owner }}/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
+ kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-core:${{ steps.get_version.outputs.VERSION }}
- name: Build & Pushing vela-apiserver for ACR
run: |
docker build --build-arg GOPROXY=https://proxy.golang.org --build-arg VERSION=${{ steps.get_version.outputs.VERSION }} --build-arg GITVERSION=git-${{ steps.vars.outputs.git_revision }} -t kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }} -f Dockerfile.apiserver .
docker push kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
- uses: docker/build-push-action@v2
name: Build & Pushing vela-apiserver for Dockerhub and GHCR
with:
context: .
file: Dockerfile.apiserver
labels: |-
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
build-args: |
GITVERSION=git-${{ steps.vars.outputs.git_revision }}
VERSION=${{ steps.get_version.outputs.VERSION }}
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository }}/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
- name: Build & Pushing vela CLI for ACR
run: |
docker build --build-arg GOPROXY=https://proxy.golang.org --build-arg VERSION=${{ steps.get_version.outputs.VERSION }} --build-arg GITVERSION=git-${{ steps.vars.outputs.git_revision }} -t kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }} -f Dockerfile.cli .
docker push kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }}
- uses: docker/build-push-action@v2
- name: Build & Pushing CLI for Dockerhub and GHCR
+ name: Build & Pushing CLI for Dockerhub, GHCR and ACR
with:
context: .
file: Dockerfile.cli
@@ -119,14 +90,70 @@ jobs:
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }}
- ghcr.io/${{ github.repository }}/vela-cli:${{ steps.get_version.outputs.VERSION }}
+ ghcr.io/${{ github.repository_owner }}/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }}
+ kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-cli:${{ steps.get_version.outputs.VERSION }}
- name: Build & Pushing vela runtime rollout for ACR
publish-addon-images:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Get the version
id: get_version
run: |
docker build --build-arg GOPROXY=https://proxy.golang.org --build-arg VERSION=${{ steps.get_version.outputs.VERSION }} --build-arg GITVERSION=git-${{ steps.vars.outputs.git_revision }} -t kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }} -f runtime/rollout/Dockerfile .
docker push kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
VERSION=${GITHUB_REF#refs/tags/}
if [[ ${GITHUB_REF} == "refs/heads/master" ]]; then
VERSION=latest
fi
echo ::set-output name=VERSION::${VERSION}
- name: Get git revision
id: vars
shell: bash
run: |
echo "::set-output name=git_revision::$(git rev-parse --short HEAD)"
- name: Login ghcr.io
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login docker.io
uses: docker/login-action@v1
with:
registry: docker.io
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login Alibaba Cloud ACR
uses: docker/login-action@v1
with:
registry: kubevela-registry.cn-hangzhou.cr.aliyuncs.com
username: ${{ secrets.ACR_USERNAME }}@aliyun-inner.com
password: ${{ secrets.ACR_PASSWORD }}
- uses: docker/setup-qemu-action@v1
- uses: docker/setup-buildx-action@v1
with:
driver-opts: image=moby/buildkit:master
- uses: docker/build-push-action@v2
- name: Build & Pushing runtime rollout for Dockerhub and GHCR
+ name: Build & Pushing vela-apiserver for Dockerhub, GHCR and ACR
with:
context: .
file: Dockerfile.apiserver
labels: |-
org.opencontainers.image.source=https://github.com/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
build-args: |
GITVERSION=git-${{ steps.vars.outputs.git_revision }}
VERSION=${{ steps.get_version.outputs.VERSION }}
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
ghcr.io/${{ github.repository_owner }}/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-apiserver:${{ steps.get_version.outputs.VERSION }}
- uses: docker/build-push-action@v2
name: Build & Pushing runtime rollout Dockerhub, GHCR and ACR
with:
context: .
file: runtime/rollout/Dockerfile
@@ -141,7 +168,8 @@ jobs:
GOPROXY=https://proxy.golang.org
tags: |-
docker.io/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
- ghcr.io/${{ github.repository }}/vela-rollout:${{ steps.get_version.outputs.VERSION }}
+ ghcr.io/${{ github.repository_owner }}/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
+ kubevela-registry.cn-hangzhou.cr.aliyuncs.com/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
publish-charts:
env:

@@ -13,7 +13,7 @@ metadata:
name: {{ template "kubevela.fullname" . }}-cluster-gateway-tls
namespace: {{ .Release.Namespace }}
spec:
- secretName: {{ template "kubevela.fullname" . }}-cluster-gateway-tls
+ secretName: {{ template "kubevela.fullname" . }}-cluster-gateway-tls-v2
duration: 8760h # 1y
issuerRef:
name: {{ template "kubevela.fullname" . }}-cluster-gateway-issuer

@@ -53,7 +53,7 @@ spec:
- name: tls-cert-vol
secret:
defaultMode: 420
- secretName: {{ template "kubevela.fullname" . }}-cluster-gateway-tls
+ secretName: {{ template "kubevela.fullname" . }}-cluster-gateway-tls-v2
{{ end }}
{{- with .Values.nodeSelector }}
nodeSelector:
@@ -147,4 +147,7 @@ subjects:
- kind: Group
name: kubevela:client
apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
name: {{ include "kubevela.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{ end }}

@@ -86,7 +86,7 @@ spec:
- create
- --host={{ .Release.Name }}-cluster-gateway-service,{{ .Release.Name }}-cluster-gateway-service.{{ .Release.Namespace }}.svc
- --namespace={{ .Release.Namespace }}
-  - --secret-name={{ template "kubevela.fullname" . }}-cluster-gateway-tls
+  - --secret-name={{ template "kubevela.fullname" . }}-cluster-gateway-tls-v2
- --cert-name=tls.crt
- --key-name=tls.key
restartPolicy: OnFailure
@@ -131,7 +131,7 @@ spec:
- /patch
args:
- --secret-namespace={{ .Release.Namespace }}
-  - --secret-name={{ template "kubevela.fullname" . }}-cluster-gateway-tls
+  - --secret-name={{ template "kubevela.fullname" . }}-cluster-gateway-tls-v2
restartPolicy: OnFailure
serviceAccountName: {{ include "kubevela.serviceAccountName" . }}
securityContext:

@@ -5,6 +5,8 @@ kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Set the image of the container.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: container-image
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: Deploy env binding component to target env
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: deploy2env
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Patch the output following Json Merge Patch strategy, following RFC 7396.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: json-merge-patch
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Patch the output following Json Patch strategy, following RFC 6902.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: json-patch
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: ComponentDefinition
metadata:
annotations:
definition.oam.dev/description: Ref-objects allow users to specify ref objects to use. Notice that this component type have special handle logic.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: ref-objects
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: step group
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: step-group
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -20,12 +20,54 @@ metadata:
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
- name: "cluster-admin"
+ name: {{ if .Values.authentication.enabled }} {{ include "kubevela.fullname" . }}:manager {{ else }} "cluster-admin" {{ end }}
subjects:
- kind: ServiceAccount
name: {{ include "kubevela.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{ if .Values.authentication.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "kubevela.fullname" . }}:manager
rules:
- apiGroups: ["core.oam.dev", "terraform.core.oam.dev", "prism.oam.dev"]
resources: ["*"]
verbs: ["*"]
- apiGroups: ["cluster.open-cluster-management.io"]
resources: ["managedclusters"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["users", "groups", "serviceaccounts"]
verbs: ["impersonate"]
- apiGroups: [""]
resources: ["namespaces", "secrets", "services"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["configmaps", "events"]
verbs: ["*"]
- apiGroups: ["apps"]
resources: ["controllerrevisions"]
verbs: ["*"]
- apiGroups: ["apiregistration.k8s.io"]
resources: ["apiservices"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["*"]
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["mutatingwebhookconfigurations", "validatingwebhookconfigurations"]
verbs: ["get", "list", "watch"]
- apiGroups: ["flowcontrol.apiserver.k8s.io"]
resources: ["prioritylevelconfigurations", "flowschemas"]
verbs: ["get", "list", "watch"]
- apiGroups: ["authorization.k8s.io"]
resources: ["subjectaccessreviews"]
verbs: ["*"]
{{ end }}
---
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1

@@ -226,7 +226,7 @@ admissionWebhooks:
enabled: true
image:
repository: oamdev/kube-webhook-certgen
- tag: v2.4.0
+ tag: v2.4.1
pullPolicy: IfNotPresent
nodeSelector: {}
affinity: {}

@@ -212,4 +212,7 @@ subjects:
- kind: Group
name: kubevela:client
apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
name: {{ include "kubevela.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{ end }}

@@ -5,6 +5,8 @@ kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Set the image of the container.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: container-image
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Patch the output following Json Merge Patch strategy, following RFC 7396.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: json-merge-patch
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: Patch the output following Json Patch strategy, following RFC 6902.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: json-patch
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: ComponentDefinition
metadata:
annotations:
definition.oam.dev/description: Ref-objects allow users to specify ref objects to use. Notice that this component type have special handle logic.
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: ref-objects
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -5,6 +5,8 @@ kind: WorkflowStepDefinition
metadata:
annotations:
definition.oam.dev/description: step group
labels:
custom.definition.oam.dev/ui-hidden: "true"
name: step-group
namespace: {{ include "systemDefinitionNamespace" . }}
spec:

@@ -15,6 +15,7 @@ metadata:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
@@ -22,12 +23,54 @@ metadata:
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
- name: "cluster-admin"
+ name: {{ if .Values.authentication.enabled }} {{ include "kubevela.fullname" . }}:manager {{ else }} "cluster-admin" {{ end }}
subjects:
- kind: ServiceAccount
name: {{ include "kubevela.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{ if .Values.authentication.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "kubevela.fullname" . }}:manager
rules:
- apiGroups: ["core.oam.dev", "terraform.core.oam.dev", "prism.oam.dev"]
resources: ["*"]
verbs: ["*"]
- apiGroups: ["cluster.open-cluster-management.io"]
resources: ["managedclusters"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["users", "groups", "serviceaccounts"]
verbs: ["impersonate"]
- apiGroups: [""]
resources: ["namespaces", "secrets", "services"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["configmaps", "events"]
verbs: ["*"]
- apiGroups: ["apps"]
resources: ["controllerrevisions"]
verbs: ["*"]
- apiGroups: ["apiregistration.k8s.io"]
resources: ["apiservices"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["*"]
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["mutatingwebhookconfigurations", "validatingwebhookconfigurations"]
verbs: ["get", "list", "watch"]
- apiGroups: ["flowcontrol.apiserver.k8s.io"]
resources: ["prioritylevelconfigurations", "flowschemas"]
verbs: ["get", "list", "watch"]
- apiGroups: ["authorization.k8s.io"]
resources: ["subjectaccessreviews"]
verbs: ["*"]
{{ end }}
---
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1

@@ -203,7 +203,7 @@ admissionWebhooks:
enabled: true
image:
repository: oamdev/kube-webhook-certgen
- tag: v2.4.0
+ tag: v2.4.1
pullPolicy: IfNotPresent
nodeSelector: {}
affinity: {}

@@ -1,6 +1,6 @@
# How to garbage collect resources in the order of dependency
- If you want to garbage collect resources in the order of dependency, you can add `order: dependency` in the `garbage-collect` policy.
+ If you want to garbage collect resources in the order of reverse dependency, you can add `order: dependency` in the `garbage-collect` policy.
> Notice that this order policy is only valid for the resources that are created in the components.
@@ -8,7 +8,7 @@ In the following example, component `test1` depends on `test2`, and `test2` need
So the order of deployment is: `test3 -> test2 -> test1`.
- When we add `order: dependency` in `garbage-collect` policy and delete the application, the order of garbage collect is: `test3 -> test2 -> test1`.
+ When we add `order: dependency` in `garbage-collect` policy and delete the application, the order of garbage collect is: `test1 -> test2 -> test3`.
```yaml
apiVersion: core.oam.dev/v1beta1

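The documentation fix in this hunk describes reverse-dependency garbage collection: components deployed in the order `test3 -> test2 -> test1` (dependencies first) are collected in the order `test1 -> test2 -> test3` (dependents first). A minimal sketch of that reversal, assuming a simple dependency chain:

```go
package main

import "fmt"

// gcOrder returns the garbage-collection order for a simple dependency
// chain: the deployment order reversed, so dependents are deleted
// before the components they depend on.
func gcOrder(deployOrder []string) []string {
	out := make([]string, len(deployOrder))
	for i, c := range deployOrder {
		out[len(deployOrder)-1-i] = c
	}
	return out
}

func main() {
	fmt.Println(gcOrder([]string{"test3", "test2", "test1"})) // [test1 test2 test3]
}
```

A general dependency graph would need a reverse topological sort rather than a plain reversal; the chain case above is enough to show the ordering guarantee.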
@@ -18,10 +18,13 @@ package e2e
import (
context2 "context"
"encoding/json"
"fmt"
"strings"
"time"
corev1 "k8s.io/api/core/v1"
"github.com/Netflix/go-expect"
"github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/onsi/ginkgo"
@@ -43,6 +46,7 @@ var (
testDeleteJsonAppFile = `{"name":"test-vela-delete","services":{"nginx-test":{"type":"webservice","image":"nginx:1.9.4","port":80}}}`
appbasicJsonAppFile = `{"name":"app-basic","services":{"app-basic":{"type":"webservice","image":"nginx:1.9.4","port":80}}}`
appbasicAddTraitJsonAppFile = `{"name":"app-basic","services":{"app-basic":{"type":"webservice","image":"nginx:1.9.4","port":80,"scaler":{"replicas":2}}}}`
velaQL = "test-component-pod-view{appNs=default,appName=nginx-vela,name=nginx}"
)
var _ = ginkgo.Describe("Test Vela Application", func() {
@@ -68,6 +72,9 @@ var _ = ginkgo.Describe("Test Vela Application", func() {
e2e.JsonAppFileContext("json appfile apply", testDeleteJsonAppFile)
ApplicationDeleteWithForceOptions("test delete with force option", "test-vela-delete")
e2e.JsonAppFileContext("json appfile apply", testDeleteJsonAppFile)
VelaQLPodListContext("ql", velaQL)
})
var ApplicationStatusContext = func(context string, applicationName string, workloadType string) bool {
@@ -230,3 +237,75 @@ var ApplicationDeleteWithForceOptions = func(context string, appName string) boo
})
})
}
type PodList struct {
PodList []Pod `form:"podList" json:"podList"`
}
type Pod struct {
Status Status `form:"status" json:"status"`
Cluster string `form:"cluster" json:"cluster"`
Metadata Metadata `form:"metadata" json:"metadata"`
Workload Workload `form:"workload" json:"workload"`
}
type Status struct {
Phase string `form:"phase" json:"phase"`
NodeName string `form:"nodeName" json:"nodeName"`
}
type Metadata struct {
Namespace string `form:"namespace" json:"namespace"`
}
type Workload struct {
ApiVersion string `form:"apiVersion" json:"apiVersion"`
Kind string `form:"kind" json:"kind"`
}
var VelaQLPodListContext = func(context string, velaQL string) bool {
return ginkgo.Context(context, func() {
ginkgo.It("should get successful result for executing vela ql", func() {
args := common.Args{
Schema: common.Scheme,
}
ctx := context2.Background()
k8sClient, err := args.GetClient()
gomega.Expect(err).NotTo(gomega.HaveOccurred())
componentView := new(corev1.ConfigMap)
gomega.Eventually(func(g gomega.Gomega) {
g.Expect(common.ReadYamlToObject("./component-pod-view.yaml", componentView)).Should(gomega.BeNil())
g.Expect(k8sClient.Create(ctx, componentView)).Should(gomega.Succeed())
}, time.Second*3, time.Millisecond*300).Should(gomega.Succeed())
cli := fmt.Sprintf("vela ql %s", velaQL)
output, err := e2e.Exec(cli)
gomega.Expect(err).NotTo(gomega.HaveOccurred())
var list PodList
err = json.Unmarshal([]byte(output), &list)
gomega.Expect(err).NotTo(gomega.HaveOccurred())
for _, v := range list.PodList {
if v.Cluster != "" {
gomega.Expect(v.Cluster).To(gomega.ContainSubstring("local"))
}
if v.Status.Phase != "" {
gomega.Expect(v.Status.Phase).To(gomega.ContainSubstring("Running"))
}
if v.Status.NodeName != "" {
gomega.Expect(v.Status.NodeName).To(gomega.ContainSubstring("kind-control-plane"))
}
if v.Metadata.Namespace != "" {
gomega.Expect(v.Metadata.Namespace).To(gomega.ContainSubstring("default"))
}
if v.Workload.ApiVersion != "" {
gomega.Expect(v.Workload.ApiVersion).To(gomega.ContainSubstring("apps/v1"))
}
if v.Workload.Kind != "" {
gomega.Expect(v.Workload.Kind).To(gomega.ContainSubstring("Deployment"))
}
}
})
})
}


@@ -0,0 +1,98 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: test-component-pod-view
namespace: vela-system
data:
template: |
import (
"vela/ql"
"vela/op"
)
parameter: {
appName: string
appNs: string
name?: string
cluster?: string
clusterNs?: string
}
application: ql.#ListResourcesInApp & {
app: {
name: parameter.appName
namespace: parameter.appNs
filter: {
if parameter.cluster != _|_ {
cluster: parameter.cluster
}
if parameter.clusterNs != _|_ {
clusterNamespace: parameter.clusterNs
}
if parameter.name != _|_ {
components: [parameter.name]
}
}
}
}
if application.err != _|_ {
status: error: application.err
}
if application.err == _|_ {
resources: application.list
podsMap: op.#Steps & {
for i, resource in resources {
"\(i)": ql.#CollectPods & {
value: resource.object
cluster: resource.cluster
}
}
}
podsWithCluster: [ for i, pods in podsMap for podObj in pods.list {
cluster: pods.cluster
obj: podObj
}]
podStatus: op.#Steps & {
for i, pod in podsWithCluster {
"\(i)": op.#Steps & {
name: pod.obj.metadata.name
containers: {for container in pod.obj.status.containerStatuses {
"\(container.name)": {
image: container.image
state: container.state
}
}}
events: ql.#SearchEvents & {
value: pod.obj
cluster: pod.cluster
}
metrics: ql.#Read & {
cluster: pod.cluster
value: {
apiVersion: "metrics.k8s.io/v1beta1"
kind: "PodMetrics"
metadata: {
name: pod.obj.metadata.name
namespace: pod.obj.metadata.namespace
}
}
}
}
}
}
status: {
podList: [ for podInfo in podStatus {
name: podInfo.name
containers: [ for containerName, container in podInfo.containers {
containerName
}]
events: podInfo.events.list
}]
}
}

go.mod

@@ -16,7 +16,7 @@ require (
github.com/barnettZQG/inject v0.0.1
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869
github.com/briandowns/spinner v1.11.1
github.com/containerd/containerd v1.5.10
github.com/containerd/containerd v1.5.13
github.com/coreos/go-oidc v2.1.0+incompatible
github.com/coreos/prometheus-operator v0.41.1
github.com/crossplane/crossplane-runtime v0.14.1-0.20210722005935-0b469fcc77cd
@@ -122,7 +122,7 @@ require (
github.com/Masterminds/sprig/v3 v3.2.2 // indirect
github.com/Masterminds/squirrel v1.5.2 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/hcsshim v0.8.23 // indirect
github.com/Microsoft/hcsshim v0.8.24 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect

go.sum

@@ -178,8 +178,8 @@ github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2
github.com/Microsoft/hcsshim v0.8.15/go.mod h1:x38A4YbHbdxJtc0sF6oIz+RG0npwSCAvn69iY6URG00=
github.com/Microsoft/hcsshim v0.8.16/go.mod h1:o5/SZqmR7x9JNKsW3pu+nqHm0MF8vbA+VxGOoXdC600=
github.com/Microsoft/hcsshim v0.8.21/go.mod h1:+w2gRZ5ReXQhFOrvSQeNfhrYB/dg3oDwTOcER2fw4I4=
github.com/Microsoft/hcsshim v0.8.23 h1:47MSwtKGXet80aIn+7h4YI6fwPmwIghAnsx2aOUrG2M=
github.com/Microsoft/hcsshim v0.8.23/go.mod h1:4zegtUJth7lAvFyc6cH2gGQ5B3OFQim01nnU2M8jKDg=
github.com/Microsoft/hcsshim v0.8.24 h1:jP+GMeRXIR1sH1kG4lJr9ShmSjVrua5jmFZDtfYGkn4=
github.com/Microsoft/hcsshim v0.8.24/go.mod h1:4zegtUJth7lAvFyc6cH2gGQ5B3OFQim01nnU2M8jKDg=
github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
@@ -412,8 +412,9 @@ github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1
github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/cgroups v0.0.0-20200824123100-0b889c03f102/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
github.com/containerd/cgroups v0.0.0-20210114181951-8a68de567b68/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
github.com/containerd/cgroups v1.0.1 h1:iJnMvco9XGvKUvNQkv88bE4uJXxRQH18efbKo9w5vHQ=
github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU=
github.com/containerd/cgroups v1.0.3 h1:ADZftAkglvCiD44c77s5YmMqaP2pzVCFZvBmAlBdAP4=
github.com/containerd/cgroups v1.0.3/go.mod h1:/ofk34relqNjSGyqPrmEULrO4Sc8LJhvJmWbUCUKqj8=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
@@ -435,8 +436,8 @@ github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoT
github.com/containerd/containerd v1.5.1/go.mod h1:0DOxVqwDy2iZvrZp2JUx/E+hS0UNTVn7dJnIOwtYR4g=
github.com/containerd/containerd v1.5.2/go.mod h1:0DOxVqwDy2iZvrZp2JUx/E+hS0UNTVn7dJnIOwtYR4g=
github.com/containerd/containerd v1.5.7/go.mod h1:gyvv6+ugqY25TiXxcZC3L5yOeYgEw0QMhscqVp1AR9c=
github.com/containerd/containerd v1.5.10 h1:3cQ2uRVCkJVcx5VombsE7105Gl9Wrl7ORAO3+4+ogf4=
github.com/containerd/containerd v1.5.10/go.mod h1:fvQqCfadDGga5HZyn3j4+dx56qj2I9YwBrlSdalvJYQ=
github.com/containerd/containerd v1.5.13 h1:XqvKw9i4P7/mFrC3TSM7yV5cwFZ9avXe6M3YANKnzEE=
github.com/containerd/containerd v1.5.13/go.mod h1:3AlCrzKROjIuP3JALsY14n8YtntaUDBu7vek+rPN5Vc=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=


@@ -1,14 +1,29 @@
.PHONY: e2e-setup-core
e2e-setup-core:
.PHONY: e2e-setup-core-pre-hook
e2e-setup-core-pre-hook:
sh ./hack/e2e/modify_charts.sh
helm upgrade --install --create-namespace --namespace vela-system --set image.pullPolicy=IfNotPresent --set image.repository=vela-core-test --set applicationRevisionLimit=5 --set dependCheckWait=10s --set image.tag=$(GIT_COMMIT) --wait kubevela ./charts/vela-core
.PHONY: e2e-setup-core-post-hook
e2e-setup-core-post-hook:
kubectl wait --for=condition=Available deployment/kubevela-vela-core -n vela-system --timeout=180s
helm upgrade --install --namespace vela-system --wait oam-rollout --set image.repository=vela-runtime-rollout-test --set image.tag=$(GIT_COMMIT) ./runtime/rollout/charts
go run ./e2e/addon/mock &
sleep 15
bin/vela addon enable rollout
.PHONY: e2e-setup-core-wo-auth
e2e-setup-core-wo-auth:
helm upgrade --install --create-namespace --namespace vela-system --set image.pullPolicy=IfNotPresent --set image.repository=vela-core-test --set applicationRevisionLimit=5 --set dependCheckWait=10s --set image.tag=$(GIT_COMMIT) --wait kubevela ./charts/vela-core
.PHONY: e2e-setup-core-w-auth
e2e-setup-core-w-auth:
helm upgrade --install --create-namespace --namespace vela-system --set image.pullPolicy=IfNotPresent --set image.repository=vela-core-test --set applicationRevisionLimit=5 --set dependCheckWait=10s --set image.tag=$(GIT_COMMIT) --wait kubevela ./charts/vela-core --set authentication.enabled=true --set authentication.withUser=true --set authentication.groupPattern=*
.PHONY: e2e-setup-core
e2e-setup-core: e2e-setup-core-pre-hook e2e-setup-core-wo-auth e2e-setup-core-post-hook
.PHONY: e2e-setup-core-auth
e2e-setup-core-auth: e2e-setup-core-pre-hook e2e-setup-core-w-auth e2e-setup-core-post-hook
.PHONY: setup-runtime-e2e-cluster
setup-runtime-e2e-cluster:
helm upgrade --install --create-namespace --namespace vela-system --kubeconfig=$(RUNTIME_CLUSTER_CONFIG) --set image.pullPolicy=IfNotPresent --set image.repository=vela-runtime-rollout-test --set image.tag=$(GIT_COMMIT) --wait vela-rollout ./runtime/rollout/charts


@@ -23,6 +23,9 @@ import (
"sort"
"github.com/Masterminds/semver/v3"
"github.com/oam-dev/kubevela/pkg/apiserver/utils/log"
"github.com/oam-dev/kubevela/pkg/utils"
"github.com/oam-dev/kubevela/pkg/utils/common"
"github.com/oam-dev/kubevela/pkg/utils/helm"
@@ -128,19 +131,8 @@ func (i versionedRegistry) loadAddon(ctx context.Context, name, version string)
if len(versions) == 0 {
return nil, ErrNotExist
}
var addonVersion *repo.ChartVersion
sort.Sort(sort.Reverse(versions))
if len(version) == 0 {
// if not specify version will always use the latest version
addonVersion = versions[0]
}
var availableVersions []string
for i, v := range versions {
availableVersions = append(availableVersions, v.Version)
if v.Version == version {
addonVersion = versions[i]
}
}
addonVersion, availableVersions := chooseVersion(version, versions)
if addonVersion == nil {
return nil, fmt.Errorf("specified version %s not exist", version)
}
@@ -191,3 +183,32 @@ func loadAddonPackage(addonName string, files []*loader.BufferedFile) (*WholeAdd
APISchema: addonUIData.APISchema,
}, nil
}
// chooseVersion will return the target version and all available versions
func chooseVersion(specifiedVersion string, versions []*repo.ChartVersion) (*repo.ChartVersion, []string) {
var addonVersion *repo.ChartVersion
var availableVersions []string
for i, v := range versions {
availableVersions = append(availableVersions, v.Version)
if addonVersion != nil {
// already found the latest non-prerelease version, skip the rest
continue
}
if len(specifiedVersion) != 0 {
if v.Version == specifiedVersion {
addonVersion = versions[i]
}
} else {
vv, err := semver.NewVersion(v.Version)
if err != nil {
continue
}
if len(vv.Prerelease()) != 0 {
continue
}
addonVersion = v
log.Logger.Infof("No version specified, using the latest version %s", v.Version)
}
}
return addonVersion, availableVersions
}


@@ -27,6 +27,9 @@ import (
"testing"
"time"
"helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/repo"
"github.com/oam-dev/kubevela/pkg/utils/common"
"github.com/stretchr/testify/assert"
@@ -98,6 +101,33 @@ func TestVersionRegistry(t *testing.T) {
}
func TestChooseAddonVersion(t *testing.T) {
versions := []*repo.ChartVersion{
{
Metadata: &chart.Metadata{
Version: "v1.4.0-beta1",
},
},
{
Metadata: &chart.Metadata{
Version: "v1.3.6",
},
},
{
Metadata: &chart.Metadata{
Version: "v1.2.0",
},
},
}
targetVersion, availableVersion := chooseVersion("v1.2.0", versions)
assert.Equal(t, availableVersion, []string{"v1.4.0-beta1", "v1.3.6", "v1.2.0"})
assert.Equal(t, targetVersion.Version, "v1.2.0")
targetVersion, availableVersion = chooseVersion("", versions)
assert.Equal(t, availableVersion, []string{"v1.4.0-beta1", "v1.3.6", "v1.2.0"})
assert.Equal(t, targetVersion.Version, "v1.3.6")
}
var versionedHandler http.HandlerFunc = func(writer http.ResponseWriter, request *http.Request) {
switch {
case strings.Contains(request.URL.Path, "index.yaml"):


@@ -18,7 +18,6 @@ package model
import (
"fmt"
"strings"
"time"
"github.com/form3tech-oss/jwt-go"
@@ -63,17 +62,17 @@ func (u *User) ShortTableName() string {
// PrimaryKey return custom primary key
func (u *User) PrimaryKey() string {
return verifyUserValue(u.Name)
return u.Name
}
// Index return custom index
func (u *User) Index() map[string]string {
index := make(map[string]string)
if u.Name != "" {
index["name"] = verifyUserValue(u.Name)
index["name"] = u.Name
}
if u.Email != "" {
index["email"] = verifyUserValue(u.Email)
index["email"] = u.Email
}
return index
}
@@ -99,14 +98,14 @@ func (u *ProjectUser) ShortTableName() string {
// PrimaryKey return custom primary key
func (u *ProjectUser) PrimaryKey() string {
return fmt.Sprintf("%s-%s", u.ProjectName, verifyUserValue(u.Username))
return fmt.Sprintf("%s-%s", u.ProjectName, u.Username)
}
// Index return custom index
func (u *ProjectUser) Index() map[string]string {
index := make(map[string]string)
if u.Username != "" {
index["username"] = verifyUserValue(u.Username)
index["username"] = u.Username
}
if u.ProjectName != "" {
index["projectName"] = u.ProjectName
@@ -114,12 +113,6 @@ func (u *ProjectUser) Index() map[string]string {
return index
}
func verifyUserValue(v string) string {
s := strings.ReplaceAll(v, "@", "-")
s = strings.ReplaceAll(s, " ", "-")
return strings.ToLower(s)
}
// CustomClaims is the custom claims
type CustomClaims struct {
Username string `json:"username"`


@@ -0,0 +1,97 @@
/*
Copyright 2022 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package repository
import (
"context"
"errors"
"github.com/oam-dev/kubevela/pkg/apiserver/domain/model"
"github.com/oam-dev/kubevela/pkg/apiserver/infrastructure/datastore"
"github.com/oam-dev/kubevela/pkg/apiserver/utils/log"
)
// ListApplicationPolicies query the application policies
func ListApplicationPolicies(ctx context.Context, store datastore.DataStore, app *model.Application) (list []*model.ApplicationPolicy, err error) {
var policy = model.ApplicationPolicy{
AppPrimaryKey: app.PrimaryKey(),
}
policies, err := store.List(ctx, &policy, &datastore.ListOptions{})
if err != nil {
return nil, err
}
for _, policy := range policies {
pm := policy.(*model.ApplicationPolicy)
list = append(list, pm)
}
return
}
// ListApplicationEnvPolicies list the policies that only belong to the specified env
func ListApplicationEnvPolicies(ctx context.Context, store datastore.DataStore, app *model.Application, envName string) (list []*model.ApplicationPolicy, err error) {
var policy = model.ApplicationPolicy{
AppPrimaryKey: app.PrimaryKey(),
EnvName: envName,
}
policies, err := store.List(ctx, &policy, &datastore.ListOptions{})
if err != nil {
return nil, err
}
for _, policy := range policies {
pm := policy.(*model.ApplicationPolicy)
list = append(list, pm)
}
return
}
// ListApplicationCommonPolicies list the policies that common to all environments
func ListApplicationCommonPolicies(ctx context.Context, store datastore.DataStore, app *model.Application) (list []*model.ApplicationPolicy, err error) {
var policy = model.ApplicationPolicy{
AppPrimaryKey: app.PrimaryKey(),
}
policies, err := store.List(ctx, &policy, &datastore.ListOptions{
FilterOptions: datastore.FilterOptions{
IsNotExist: []datastore.IsNotExistQueryOption{{
Key: "envName",
}},
},
})
if err != nil {
return nil, err
}
for _, policy := range policies {
pm := policy.(*model.ApplicationPolicy)
list = append(list, pm)
}
return
}
// DeleteApplicationEnvPolicies delete the policies via app name and env name
func DeleteApplicationEnvPolicies(ctx context.Context, store datastore.DataStore, app *model.Application, envName string) error {
log.Logger.Debugf("clear the policies via app name %s and env name %s", app.PrimaryKey(), envName)
policies, err := ListApplicationEnvPolicies(ctx, store, app, envName)
if err != nil {
return err
}
for _, policy := range policies {
if err := store.Delete(ctx, policy); err != nil && !errors.Is(err, datastore.ErrRecordNotExist) {
log.Logger.Errorf("failed to clear the policies that belong to the env: %v", err)
continue
}
}
return nil
}


@@ -457,13 +457,13 @@ func HaveTerraformWorkload(ctx context.Context, kubeClient client.Client, compon
return terraformComponents
}
func createOverriteConfigForTerraformComponent(env *model.Env, target *model.Target, terraformComponents []*model.ApplicationComponent) v1alpha1.EnvConfig {
func createOverrideConfigForTerraformComponent(env *model.Env, target *model.Target, terraformComponents []*model.ApplicationComponent) v1alpha1.EnvConfig {
placement := v1alpha1.EnvPlacement{}
if target.Cluster != nil {
placement.ClusterSelector = &common.ClusterSelector{Name: target.Cluster.ClusterName}
placement.NamespaceSelector = &v1alpha1.NamespaceSelector{Name: target.Cluster.Namespace}
}
var componentPatchs []v1alpha1.EnvComponentPatch
var componentPatches []v1alpha1.EnvComponentPatch
// init cloud application region and provider info
for _, component := range terraformComponents {
properties := model.JSONStruct{
@@ -476,7 +476,7 @@ func createOverriteConfigForTerraformComponent(env *model.Env, target *model.Tar
},
}
if region, ok := target.Variable["region"]; ok {
properties["region"] = region
properties["customRegion"] = region
}
if providerName, ok := target.Variable["providerName"]; ok {
properties["providerRef"].(map[string]interface{})["name"] = providerName
@@ -484,7 +484,7 @@ func createOverriteConfigForTerraformComponent(env *model.Env, target *model.Tar
if providerNamespace, ok := target.Variable["providerNamespace"]; ok {
properties["providerRef"].(map[string]interface{})["namespace"] = providerNamespace
}
componentPatchs = append(componentPatchs, v1alpha1.EnvComponentPatch{
componentPatches = append(componentPatches, v1alpha1.EnvComponentPatch{
Name: component.Name,
Properties: properties.RawExtension(),
Type: component.Type,
@@ -495,7 +495,7 @@ func createOverriteConfigForTerraformComponent(env *model.Env, target *model.Tar
Name: genPolicyEnvName(target.Name),
Placement: placement,
Patch: v1alpha1.EnvPatch{
Components: componentPatchs,
Components: componentPatches,
},
}
}
@@ -543,7 +543,7 @@ func GenEnvWorkflowStepsAndPolicies(ctx context.Context, kubeClient client.Clien
}),
}
workflowSteps = append(workflowSteps, step)
envs = append(envs, createOverriteConfigForTerraformComponent(env, target, terraformComponents))
envs = append(envs, createOverrideConfigForTerraformComponent(env, target, terraformComponents))
}
properties, err := model.NewJSONStructByStruct(v1alpha1.EnvBindingSpec{
Envs: envs,


@@ -244,7 +244,7 @@ func (c *applicationServiceImpl) DetailApplication(ctx context.Context, app *mod
}
}
base := assembler.ConvertAppModelToBase(app, []*apisv1.ProjectBase{project})
policies, err := c.queryApplicationPolicies(ctx, app)
policies, err := repository.ListApplicationPolicies(ctx, c.Store, app)
if err != nil {
return nil, err
}
@@ -589,7 +589,7 @@ func (c *applicationServiceImpl) DetailComponent(ctx context.Context, app *model
// ListPolicies list application policies
func (c *applicationServiceImpl) ListPolicies(ctx context.Context, app *model.Application) ([]*apisv1.PolicyBase, error) {
policies, err := c.queryApplicationPolicies(ctx, app)
policies, err := repository.ListApplicationPolicies(ctx, c.Store, app)
if err != nil {
return nil, err
}
@@ -600,21 +600,6 @@ func (c *applicationServiceImpl) ListPolicies(ctx context.Context, app *model.Ap
return list, nil
}
func (c *applicationServiceImpl) queryApplicationPolicies(ctx context.Context, app *model.Application) (list []*model.ApplicationPolicy, err error) {
var policy = model.ApplicationPolicy{
AppPrimaryKey: app.PrimaryKey(),
}
policies, err := c.Store.List(ctx, &policy, &datastore.ListOptions{})
if err != nil {
return nil, err
}
for _, policy := range policies {
pm := policy.(*model.ApplicationPolicy)
list = append(list, pm)
}
return
}
// DetailPolicy detail app policy
// TODO: Add status data about the policy.
func (c *applicationServiceImpl) DetailPolicy(ctx context.Context, app *model.Application, policyName string) (*apisv1.DetailPolicyResponse, error) {
@@ -853,22 +838,11 @@ func (c *applicationServiceImpl) renderOAMApplication(ctx context.Context, appMo
}
// query the policies for this environment
var policy = model.ApplicationPolicy{
AppPrimaryKey: appModel.PrimaryKey(),
}
policies, err := c.Store.List(ctx, &policy, &datastore.ListOptions{
FilterOptions: datastore.FilterOptions{
IsNotExist: []datastore.IsNotExistQueryOption{{
Key: "envName",
},
},
},
})
policies, err := repository.ListApplicationCommonPolicies(ctx, c.Store, appModel)
if err != nil {
return nil, err
}
policy.EnvName = env.Name
envPolicies, err := c.Store.List(ctx, &policy, &datastore.ListOptions{})
envPolicies, err := repository.ListApplicationEnvPolicies(ctx, c.Store, appModel, env.Name)
if err != nil {
return nil, err
}
@@ -903,8 +877,7 @@ func (c *applicationServiceImpl) renderOAMApplication(ctx context.Context, appMo
app.Spec.Components = append(app.Spec.Components, bc)
}
for _, entity := range policies {
policy := entity.(*model.ApplicationPolicy)
for _, policy := range policies {
appPolicy := v1beta1.AppPolicy{
Name: policy.Name,
Type: policy.Type,


@@ -40,7 +40,6 @@ import (
"github.com/oam-dev/kubevela/pkg/apiserver/domain/repository"
"github.com/oam-dev/kubevela/pkg/apiserver/infrastructure/datastore"
v1 "github.com/oam-dev/kubevela/pkg/apiserver/interfaces/api/dto/v1"
"github.com/oam-dev/kubevela/pkg/apiserver/utils"
"github.com/oam-dev/kubevela/pkg/apiserver/utils/bcode"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/oam/util"
@@ -78,7 +77,7 @@ var _ = Describe("Test application service function", func() {
projectService = &projectServiceImpl{Store: ds, K8sClient: k8sClient, RbacService: rbacService}
envService = &envServiceImpl{Store: ds, KubeClient: k8sClient, ProjectService: projectService}
workflowService = &workflowServiceImpl{Store: ds, EnvService: envService}
definitionService = &definitionServiceImpl{KubeClient: k8sClient, caches: utils.NewMemoryCacheStore(context.Background())}
definitionService = &definitionServiceImpl{KubeClient: k8sClient}
envBindingService = &envBindingServiceImpl{Store: ds, EnvService: envService, WorkflowService: workflowService, KubeClient: k8sClient, DefinitionService: definitionService}
targetService = &targetServiceImpl{Store: ds, K8sClient: k8sClient}
appService = &applicationServiceImpl{


@@ -21,7 +21,6 @@ import (
"encoding/json"
"fmt"
"sort"
"time"
"github.com/getkin/kin-openapi/openapi3"
"github.com/pkg/errors"
@@ -55,7 +54,6 @@ type DefinitionService interface {
type definitionServiceImpl struct {
KubeClient client.Client `inject:"kubeClient"`
caches *utils.MemoryCacheStore
}
// DefinitionQueryOption define a set of query options
@@ -80,7 +78,7 @@ const (
// NewDefinitionService new definition service
func NewDefinitionService() DefinitionService {
return &definitionServiceImpl{caches: utils.NewMemoryCacheStore(context.Background())}
return &definitionServiceImpl{}
}
func (d *definitionServiceImpl) ListDefinitions(ctx context.Context, ops DefinitionQueryOption) ([]*apisv1.DefinitionBase, error) {
@@ -95,9 +93,6 @@ func (d *definitionServiceImpl) ListDefinitions(ctx context.Context, ops Definit
}
func (d *definitionServiceImpl) listDefinitions(ctx context.Context, list *unstructured.UnstructuredList, kind string, ops DefinitionQueryOption) ([]*apisv1.DefinitionBase, error) {
if mc := d.caches.Get(ops.String()); mc != nil {
return mc.([]*apisv1.DefinitionBase), nil
}
matchLabels := metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
@@ -146,9 +141,6 @@ func (d *definitionServiceImpl) listDefinitions(ctx context.Context, list *unstr
}
defs = append(defs, definition)
}
if ops.AppliedWorkloads == "" {
d.caches.Put(ops.String(), defs, time.Minute*3)
}
return defs, nil
}
@@ -240,30 +232,27 @@ func (d *definitionServiceImpl) DetailDefinition(ctx context.Context, name, defT
if err := d.KubeClient.Get(ctx, k8stypes.NamespacedName{
Namespace: types.DefaultKubeVelaNS,
Name: fmt.Sprintf("%s-schema-%s", defType, name),
}, &cm); err != nil {
if apierrors.IsNotFound(err) {
return nil, bcode.ErrDefinitionNoSchema
}
}, &cm); err != nil && !apierrors.IsNotFound(err) {
return nil, err
}
data, ok := cm.Data[types.OpenapiV3JSONSchema]
if !ok {
return nil, bcode.ErrDefinitionNoSchema
}
schema := &openapi3.Schema{}
if err := schema.UnmarshalJSON([]byte(data)); err != nil {
return nil, err
}
// render default ui schema
defaultUISchema := renderDefaultUISchema(schema)
// patch from custom ui schema
customUISchema := d.renderCustomUISchema(ctx, name, defType, defaultUISchema)
return &apisv1.DetailDefinitionResponse{
definition := &apisv1.DetailDefinitionResponse{
DefinitionBase: *base,
APISchema: schema,
UISchema: customUISchema,
}, nil
}
data, ok := cm.Data[types.OpenapiV3JSONSchema]
if ok {
schema := &openapi3.Schema{}
if err := schema.UnmarshalJSON([]byte(data)); err != nil {
return nil, err
}
definition.APISchema = schema
// render default ui schema
defaultUISchema := renderDefaultUISchema(schema)
// patch from custom ui schema
definition.UISchema = d.renderCustomUISchema(ctx, name, defType, defaultUISchema)
}
return definition, nil
}
func (d *definitionServiceImpl) renderCustomUISchema(ctx context.Context, name, defType string, defaultSchema []*utils.UIParameter) []*utils.UIParameter {


@@ -44,7 +44,7 @@ var _ = Describe("Test namespace service functions", func() {
)
BeforeEach(func() {
definitionService = &definitionServiceImpl{KubeClient: k8sClient, caches: utils.NewMemoryCacheStore(context.TODO())}
definitionService = &definitionServiceImpl{KubeClient: k8sClient}
err := k8sClient.Create(context.Background(), &corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: "vela-system",
@@ -215,7 +215,6 @@ var _ = Describe("Test namespace service functions", func() {
It("Test update ui schema", func() {
du := &definitionServiceImpl{
KubeClient: k8sClient,
caches: utils.NewMemoryCacheStore(context.Background()),
}
cdata, err := ioutil.ReadFile("./testdata/workflowstep-apply-object.yaml")
Expect(err).Should(Succeed())
@@ -235,7 +234,6 @@ var _ = Describe("Test namespace service functions", func() {
It("Test update status of the definition", func() {
du := &definitionServiceImpl{
KubeClient: k8sClient,
caches: utils.NewMemoryCacheStore(context.Background()),
}
detail, err := du.UpdateDefinitionStatus(context.TODO(), "apply-object", v1.UpdateDefinitionStatusRequest{
DefinitionType: "workflowstep",


@@ -204,10 +204,7 @@ func (e *envBindingServiceImpl) DeleteEnvBinding(ctx context.Context, appModel *
}
// delete the topology and env-bindings policies
if err := e.Store.Delete(ctx, &model.ApplicationPolicy{AppPrimaryKey: appModel.PrimaryKey(), EnvName: envName}); err != nil && !errors.Is(err, datastore.ErrRecordNotExist) {
return fmt.Errorf("fail to clear the policies belong to the env %w", err)
}
return nil
return repository.DeleteApplicationEnvPolicies(ctx, e.Store, appModel, envName)
}
func (e *envBindingServiceImpl) BatchDeleteEnvBinding(ctx context.Context, app *model.Application) error {


@@ -27,7 +27,6 @@ import (
"github.com/oam-dev/kubevela/pkg/apiserver/domain/repository"
"github.com/oam-dev/kubevela/pkg/apiserver/infrastructure/datastore"
apisv1 "github.com/oam-dev/kubevela/pkg/apiserver/interfaces/api/dto/v1"
"github.com/oam-dev/kubevela/pkg/apiserver/utils"
)
var _ = Describe("Test envBindingService functions", func() {
@@ -54,7 +53,7 @@ var _ = Describe("Test envBindingService functions", func() {
projectService := &projectServiceImpl{Store: ds, K8sClient: k8sClient, RbacService: rbacService}
envService = &envServiceImpl{Store: ds, KubeClient: k8sClient, ProjectService: projectService}
workflowService = &workflowServiceImpl{Store: ds, KubeClient: k8sClient, EnvService: envService}
definitionService = &definitionServiceImpl{KubeClient: k8sClient, caches: utils.NewMemoryCacheStore(context.TODO())}
definitionService = &definitionServiceImpl{KubeClient: k8sClient}
envBindingService = &envBindingServiceImpl{Store: ds, WorkflowService: workflowService, DefinitionService: definitionService, KubeClient: k8sClient, EnvService: envService}
envBindingDemo1 = apisv1.EnvBinding{
Name: "envbinding-dev",
@@ -99,7 +98,7 @@ var _ = Describe("Test envBindingService functions", func() {
Expect(err).Should(BeNil())
Expect(cmp.Diff(base.Name, req.Name)).Should(BeEmpty())
By("auto create two workflow")
By("test the auto created workflow")
workflow, err := workflowService.GetWorkflow(context.TODO(), testApp, repository.ConvertWorkflowName("envbinding-dev"))
Expect(err).Should(BeNil())
Expect(cmp.Diff(workflow.Steps[0].Name, "dev-target")).Should(BeEmpty())
@@ -139,15 +138,19 @@ var _ = Describe("Test envBindingService functions", func() {
Expect(err).Should(BeNil())
_, err = workflowService.GetWorkflow(context.TODO(), testApp, repository.ConvertWorkflowName("envbinding-dev"))
Expect(err).ShouldNot(BeNil())
err = envBindingService.DeleteEnvBinding(context.TODO(), testApp, "envbinding-prod")
Expect(err).Should(BeNil())
_, err = workflowService.GetWorkflow(context.TODO(), testApp, repository.ConvertWorkflowName("envbinding-prod"))
Expect(err).ShouldNot(BeNil())
policies, err := repository.ListApplicationPolicies(context.TODO(), ds, testApp)
Expect(err).Should(BeNil())
Expect(len(policies)).Should(Equal(0))
})
It("Test Application BatchCreateEnv function", func() {
testBatchApp := &model.Application{
Name: "test-batch-createt",
Name: "test-batch-created",
}
err := envBindingService.BatchCreateEnvBinding(context.TODO(), testBatchApp, apisv1.EnvBindingList{&envBindingDemo1, &envBindingDemo2})
Expect(err).Should(BeNil())


@@ -33,7 +33,8 @@ import (
)
const (
initAdminPassword = "VelaUX12345"
// InitAdminPassword is the password of the first admin user
InitAdminPassword = "VelaUX12345"
)
// UserService User manage api
@@ -70,7 +71,7 @@ func (u *userServiceImpl) Init(ctx context.Context) error {
Name: admin,
}); err != nil {
if errors.Is(err, datastore.ErrRecordNotExist) {
encrypted, err := GeneratePasswordHash(initAdminPassword)
encrypted, err := GeneratePasswordHash(InitAdminPassword)
if err != nil {
return err
}
@@ -83,7 +84,7 @@ func (u *userServiceImpl) Init(ctx context.Context) error {
return err
}
// print default password of admin user in log
log.Logger.Infof("initialized admin username and password: admin / %s", initAdminPassword)
log.Logger.Infof("initialized admin username and password: admin / %s", InitAdminPassword)
} else {
return err
}


@@ -75,6 +75,7 @@ func generateName(entity datastore.Entity) string {
// record the old ways here, it'll be migrated
// name := fmt.Sprintf("veladatabase-%s-%s", entity.TableName(), entity.PrimaryKey())
name := fmt.Sprintf("%s-%s", entity.ShortTableName(), entity.PrimaryKey())
name = verifyValue(name)
return strings.ReplaceAll(name, "_", "-")
}
@@ -86,6 +87,9 @@ func (m *kubeapi) generateConfigMap(entity datastore.Entity) *corev1.ConfigMap {
}
labels["table"] = entity.TableName()
labels["primaryKey"] = entity.PrimaryKey()
for k, v := range labels {
labels[k] = verifyValue(v)
}
var configMap = corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: generateName(entity),
@@ -178,6 +182,9 @@ func (m *kubeapi) Put(ctx context.Context, entity datastore.Entity) error {
}
labels["table"] = entity.TableName()
labels["primaryKey"] = entity.PrimaryKey()
for k, v := range labels {
labels[k] = verifyValue(v)
}
entity.SetUpdateTime(time.Now())
var configMap corev1.ConfigMap
if err := m.kubeClient.Get(ctx, types.NamespacedName{Namespace: m.namespace, Name: generateName(entity)}, &configMap); err != nil {
@@ -345,7 +352,7 @@ func (m *kubeapi) List(ctx context.Context, entity datastore.Entity, op *datasto
selector = selector.Add(*rq)
for k, v := range entity.Index() {
rq, err := labels.NewRequirement(k, selection.Equals, []string{v})
rq, err := labels.NewRequirement(k, selection.Equals, []string{verifyValue(v)})
if err != nil {
return nil, datastore.ErrIndexInvalid
}
@@ -353,7 +360,11 @@ func (m *kubeapi) List(ctx context.Context, entity datastore.Entity, op *datasto
}
if op != nil {
for _, inFilter := range op.In {
rq, err := labels.NewRequirement(inFilter.Key, selection.In, inFilter.Values)
var values []string
for _, value := range inFilter.Values {
values = append(values, verifyValue(value))
}
rq, err := labels.NewRequirement(inFilter.Key, selection.In, values)
if err != nil {
log.Logger.Errorf("new list requirement failure %s", err.Error())
return nil, datastore.ErrIndexInvalid
@@ -431,7 +442,7 @@ func (m *kubeapi) Count(ctx context.Context, entity datastore.Entity, filterOpti
return 0, datastore.NewDBError(err)
}
for k, v := range entity.Index() {
rq, err := labels.NewRequirement(k, selection.Equals, []string{v})
rq, err := labels.NewRequirement(k, selection.Equals, []string{verifyValue(v)})
if err != nil {
return 0, datastore.ErrIndexInvalid
}
@@ -439,7 +450,11 @@ func (m *kubeapi) Count(ctx context.Context, entity datastore.Entity, filterOpti
}
if filterOptions != nil {
for _, inFilter := range filterOptions.In {
rq, err := labels.NewRequirement(inFilter.Key, selection.In, inFilter.Values)
var values []string
for _, value := range inFilter.Values {
values = append(values, verifyValue(value))
}
rq, err := labels.NewRequirement(inFilter.Key, selection.In, values)
if err != nil {
return 0, datastore.ErrIndexInvalid
}
@@ -473,3 +488,9 @@ func (m *kubeapi) Count(ctx context.Context, entity datastore.Entity, filterOpti
}
return int64(len(items)), nil
}
func verifyValue(v string) string {
s := strings.ReplaceAll(v, "@", "-")
s = strings.ReplaceAll(s, " ", "-")
return strings.ToLower(s)
}
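The `verifyValue` helper above is what makes email-style user names safe to store as Kubernetes label values: `@` and spaces are not allowed in label values, so they are mapped to `-` and the result is lowercased. A self-contained copy for quick experimentation (the function body is taken verbatim from the diff):

```go
package main

import (
	"fmt"
	"strings"
)

// verifyValue mirrors the sanitizer from the diff above: it replaces
// characters that are invalid in Kubernetes label values ("@" and
// spaces) with "-" and lowercases the result, so an email-style name
// such as "Some@User" becomes a valid label value.
func verifyValue(v string) string {
	s := strings.ReplaceAll(v, "@", "-")
	s = strings.ReplaceAll(s, " ", "-")
	return strings.ToLower(s)
}

func main() {
	fmt.Println(verifyValue("Some@User"))  // some-user
	fmt.Println(verifyValue("plain-name")) // plain-name
}
```

Note that the same sanitization must be applied on both the write path (`generateName`, label maps in `Put`) and the read path (`List`, `Count` requirements), which is exactly what the hunks above do — otherwise a record stored under a sanitized label could never be matched by a query.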


@@ -246,4 +246,42 @@ var _ = Describe("Test kubeapi datastore driver", func() {
equal := cmp.Equal(err, datastore.ErrRecordNotExist, cmpopts.EquateErrors())
Expect(equal).Should(BeTrue())
})
It("Test verify index", func() {
var usr = model.User{Name: "can@delete", Email: "xxx@xx.com"}
err := kubeStore.Add(context.TODO(), &usr)
Expect(err).ToNot(HaveOccurred())
usr.Email = "change"
err = kubeStore.Put(context.TODO(), &usr)
Expect(err).ToNot(HaveOccurred())
err = kubeStore.Get(context.TODO(), &usr)
Expect(err).Should(BeNil())
diff := cmp.Diff(usr.Email, "change")
Expect(diff).Should(BeEmpty())
list, err := kubeStore.List(context.TODO(), &usr, &datastore.ListOptions{FilterOptions: datastore.FilterOptions{In: []datastore.InQueryOption{
{
Key: "name",
Values: []string{"can@delete"},
},
}}})
Expect(err).ShouldNot(HaveOccurred())
diff = cmp.Diff(len(list), 1)
Expect(diff).Should(BeEmpty())
count, err := kubeStore.Count(context.TODO(), &usr, &datastore.FilterOptions{In: []datastore.InQueryOption{
{
Key: "name",
Values: []string{"can@delete"},
},
}})
Expect(err).ShouldNot(HaveOccurred())
Expect(count).Should(Equal(int64(1)))
usr.Name = "can@delete"
err = kubeStore.Delete(context.TODO(), &usr)
Expect(err).ShouldNot(HaveOccurred())
})
})


@@ -554,8 +554,9 @@ func (c *applicationAPIInterface) GetWebServiceRoute() *restful.WebService {
Metadata(restfulspec.KeyOpenAPITags, tags).
Filter(c.RbacService.CheckPerm("application", "compare")).
Filter(c.appCheckFilter).
Reads(apis.AppCompareReq{}).
Param(ws.PathParameter("appName", "identifier of the application ").DataType("string")).
Returns(200, "OK", apis.ApplicationBase{}).
Returns(200, "OK", apis.AppCompareResponse{}).
Returns(400, "Bad Request", bcode.Bcode{}).
Writes(apis.AppCompareResponse{}))
@@ -574,6 +575,7 @@ func (c *applicationAPIInterface) GetWebServiceRoute() *restful.WebService {
Metadata(restfulspec.KeyOpenAPITags, tags).
Filter(c.RbacService.CheckPerm("application", "detail")).
Filter(c.appCheckFilter).
Reads(apis.AppDryRunReq{}).
Param(ws.PathParameter("appName", "identifier of the application ").DataType("string")).
Returns(200, "OK", apis.AppDryRunResponse{}).
Returns(400, "Bad Request", bcode.Bcode{}).


@@ -42,6 +42,7 @@ import (
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/appfile/helm"
"github.com/oam-dev/kubevela/pkg/auth"
velaclient "github.com/oam-dev/kubevela/pkg/client"
"github.com/oam-dev/kubevela/pkg/component"
"github.com/oam-dev/kubevela/pkg/cue/definition"
@@ -922,6 +923,7 @@ func (af *Appfile) LoadDynamicComponent(ctx context.Context, cli client.Client,
return nil, errors.Wrapf(err, "invalid ref-objects component properties")
}
var uns []*unstructured.Unstructured
ctx = auth.ContextWithUserInfo(ctx, af.app)
for _, selector := range spec.Objects {
objs, err := component.SelectRefObjectsForDispatch(ctx, component.ReferredObjectsDelegatingClient(cli, af.ReferredObjects), af.Namespace, comp.Name, selector)
if err != nil {


@@ -34,6 +34,7 @@ import (
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/auth"
"github.com/oam-dev/kubevela/pkg/component"
"github.com/oam-dev/kubevela/pkg/cue/definition"
"github.com/oam-dev/kubevela/pkg/cue/packages"
@@ -332,6 +333,7 @@ func (p *Parser) parseReferredObjectsFromRevision(af *Appfile) error {
}
func (p *Parser) parseReferredObjects(ctx context.Context, af *Appfile) error {
ctx = auth.ContextWithUserInfo(ctx, af.app)
for _, comp := range af.Components {
if comp.Type != v1alpha1.RefObjectsComponentType {
continue


@@ -99,3 +99,40 @@ func NewDefaultFactory(cfg *rest.Config) Factory {
copiedCfg.Wrap(multicluster.NewSecretModeMultiClusterRoundTripper)
return &defaultFactory{cfg: &copiedCfg}
}
type deferredFactory struct {
sync.Mutex
Factory
ConfigGetter
}
// NewDeferredFactory create a factory that will only get KubeConfig until it is needed for the first time
func NewDeferredFactory(getter ConfigGetter) Factory {
return &deferredFactory{ConfigGetter: getter}
}
func (f *deferredFactory) init() {
cfg, err := f.ConfigGetter()
cmdutil.CheckErr(err)
f.Factory = NewDefaultFactory(cfg)
}
// Config return the kubeConfig
func (f *deferredFactory) Config() *rest.Config {
f.Lock()
defer f.Unlock()
if f.Factory == nil {
f.init()
}
return f.Factory.Config()
}
// Client return the kubeClient
func (f *deferredFactory) Client() client.Client {
f.Lock()
defer f.Unlock()
if f.Factory == nil {
f.init()
}
return f.Factory.Client()
}
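`deferredFactory` above defers reading the kubeconfig until `Config()` or `Client()` is first called, guarded by a mutex so concurrent callers initialize it exactly once. The pattern generalizes to any expensive constructor; a minimal sketch under that assumption (the `lazy` type and its `init` field are illustrative names, not part of the KubeVela code):

```go
package main

import (
	"fmt"
	"sync"
)

// lazy demonstrates the deferred-initialization pattern used by
// deferredFactory: the expensive constructor runs only on first use,
// and the mutex ensures it runs at most once under concurrency.
type lazy struct {
	mu    sync.Mutex
	value string
	init  func() string
}

func (l *lazy) Get() string {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.value == "" { // deferredFactory checks f.Factory == nil instead
		l.value = l.init()
	}
	return l.value
}

func main() {
	calls := 0
	l := &lazy{init: func() string { calls++; return "config" }}
	l.Get()
	l.Get()
	fmt.Println(l.Get(), calls) // config 1
}
```

The payoff for the CLI (see the `NewDeferredFactory(config.GetConfig)` hunk further down) is that commands which never touch the cluster no longer fail at startup when no kubeconfig is available, unlike the old `NewDefaultFactory(config.GetConfigOrDie())`.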


@@ -442,10 +442,10 @@ func (r *Reconciler) updateStatus(ctx context.Context, app *v1beta1.Application,
}
func (r *Reconciler) doWorkflowFinish(app *v1beta1.Application, wf workflow.Workflow) error {
app.Status.Workflow.Finished = true
if err := wf.Trace(); err != nil {
return errors.WithMessage(err, "record workflow state")
}
app.Status.Workflow.Finished = true
return nil
}


@@ -561,15 +561,21 @@ var _ = Describe("Test Application with GC options", func() {
By("delete application")
Expect(k8sClient.Delete(ctx, app)).Should(BeNil())
By("worker3 will be deleted")
By("worker1 will be deleted")
testutil.ReconcileOnce(reconciler, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(app)})
Expect(k8sClient.List(ctx, workerList, listOpts...)).Should(BeNil())
for _, worker := range workerList.Items {
Expect(worker.Name).ShouldNot(Equal("worker1"))
}
Expect(len(workerList.Items)).Should(Equal(2))
By("worker2 will be deleted")
testutil.ReconcileOnce(reconciler, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(app)})
Expect(k8sClient.List(ctx, workerList, listOpts...)).Should(BeNil())
Expect(len(workerList.Items)).Should(Equal(1))
By("worker1 will be deleted")
for _, worker := range workerList.Items {
Expect(worker.Name).ShouldNot(Equal("worker2"))
}
By("worker3 will be deleted")
testutil.ReconcileOnce(reconciler, reconcile.Request{NamespacedName: client.ObjectKeyFromObject(app)})
Expect(k8sClient.List(ctx, workerList, listOpts...)).Should(BeNil())
Expect(len(workerList.Items)).Should(Equal(0))


@@ -237,7 +237,7 @@ func (h *AppHandler) checkComponentHealth(appParser *appfile.Parser, appRev *v1b
return false, err
}
_, isHealth, err := h.collectHealthStatus(ctx, wl, appRev, overrideNamespace)
_, isHealth, err := h.collectHealthStatus(auth.ContextWithUserInfo(ctx, h.app), wl, appRev, overrideNamespace)
return isHealth, err
}
}
@@ -288,7 +288,7 @@ func (h *AppHandler) applyComponentFunc(appParser *appfile.Parser, appRev *v1bet
if DisableResourceApplyDoubleCheck {
return readyWorkload, readyTraits, true, nil
}
workload, traits, err := getComponentResources(ctx, manifest, wl.SkipApplyWorkload, h.r.Client)
workload, traits, err := getComponentResources(auth.ContextWithUserInfo(ctx, h.app), manifest, wl.SkipApplyWorkload, h.r.Client)
return workload, traits, true, err
}
}


@@ -898,6 +898,7 @@ func cleanUpWorkflowComponentRevision(ctx context.Context, h *AppHandler) error
}
// collect component revision in use
compRevisionInUse := map[string]map[string]struct{}{}
ctx = auth.ContextWithUserInfo(ctx, h.app)
for i, resource := range h.app.Status.AppliedResources {
compName := resource.Name
ns := resource.Namespace


@@ -46,7 +46,7 @@ var defaultFeatureGates = map[featuregate.Feature]featuregate.FeatureSpec{
DeprecatedPolicySpec: {Default: false, PreRelease: featuregate.Alpha},
LegacyObjectTypeIdentifier: {Default: false, PreRelease: featuregate.Alpha},
DeprecatedObjectLabelSelector: {Default: false, PreRelease: featuregate.Alpha},
LegacyResourceTrackerGC: {Default: true, PreRelease: featuregate.Alpha},
LegacyResourceTrackerGC: {Default: false, PreRelease: featuregate.Beta},
EnableSuspendOnFailure: {Default: false, PreRelease: featuregate.Alpha},
AuthenticateApplication: {Default: false, PreRelease: featuregate.Alpha},
}


@@ -92,7 +92,9 @@ func (clusterConfig *KubeClusterConfig) RegisterByVelaSecret(ctx context.Context
var credentialType clusterv1alpha1.CredentialType
data := map[string][]byte{
"endpoint": []byte(clusterConfig.Cluster.Server),
"ca.crt": clusterConfig.Cluster.CertificateAuthorityData,
}
if !clusterConfig.Cluster.InsecureSkipTLSVerify {
data["ca.crt"] = clusterConfig.Cluster.CertificateAuthorityData
}
if len(clusterConfig.AuthInfo.Token) > 0 {
credentialType = clusterv1alpha1.CredentialTypeServiceAccountToken


@@ -327,26 +327,25 @@ func (h *gcHandler) deleteManagedResource(ctx context.Context, mr v1beta1.Manage
func (h *gcHandler) checkDependentComponent(mr v1beta1.ManagedResource) []string {
dependent := make([]string, 0)
inputs := make([]string, 0)
outputs := make([]string, 0)
for _, comp := range h.app.Spec.Components {
if comp.Name == mr.Component {
dependent = comp.DependsOn
if len(comp.Inputs) > 0 {
for _, input := range comp.Inputs {
inputs = append(inputs, input.From)
}
} else {
return dependent
for _, output := range comp.Outputs {
outputs = append(outputs, output.Name)
}
} else {
for _, dependsOn := range comp.DependsOn {
if dependsOn == mr.Component {
dependent = append(dependent, comp.Name)
break
}
}
break
}
}
for _, comp := range h.app.Spec.Components {
if len(comp.Outputs) > 0 {
for _, output := range comp.Outputs {
if utils.StringsContain(inputs, output.Name) {
dependent = append(dependent, comp.Name)
}
for _, input := range comp.Inputs {
if utils.StringsContain(outputs, input.From) {
dependent = append(dependent, comp.Name)
}
}
}


@@ -19,6 +19,7 @@ package resourcekeeper
import (
"context"
"encoding/json"
"testing"
"time"
"github.com/crossplane/crossplane-runtime/pkg/meta"
@@ -30,10 +31,13 @@ import (
v12 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
utilfeature "k8s.io/apiserver/pkg/util/feature"
featuregatetesting "k8s.io/component-base/featuregate/testing"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha1"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/pkg/features"
"github.com/oam-dev/kubevela/pkg/multicluster"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/utils"
@@ -56,6 +60,7 @@ var _ = Describe("Test ResourceKeeper garbage collection", func() {
})
It("Test gcHandler garbage collect legacy RT", func() {
defer featuregatetesting.SetFeatureGateDuringTest(&testing.T{}, utilfeature.DefaultFeatureGate, features.LegacyResourceTrackerGC, true)()
version.VelaVersion = velaVersionNumberToUpgradeResourceTracker
ctx := context.Background()
cli := multicluster.NewFakeClient(testClient)


@@ -253,3 +253,86 @@ func TestResourceKeeperGarbageCollect(t *testing.T) {
r.NoError(err)
r.True(finished)
}
func TestCheckDependentComponent(t *testing.T) {
rk := &resourceKeeper{
app: &v1beta1.Application{
Spec: v1beta1.ApplicationSpec{
Components: []apicommon.ApplicationComponent{
{
Name: "comp-1",
Outputs: apicommon.StepOutputs{
{
Name: "output-1",
},
},
},
{
Name: "comp-2",
Outputs: apicommon.StepOutputs{
{
Name: "output-2",
},
},
},
{
Name: "comp-3",
Inputs: apicommon.StepInputs{
{
From: "output-1",
},
{
From: "output-2",
},
},
},
{
Name: "comp-4",
DependsOn: []string{"comp-3"},
},
{
Name: "comp-5",
DependsOn: []string{"comp-4", "comp-3"},
},
},
},
},
}
testCases := []struct {
comp string
result []string
}{
{
comp: "comp-1",
result: []string{"comp-3"},
},
{
comp: "comp-2",
result: []string{"comp-3"},
},
{
comp: "comp-3",
result: []string{"comp-4", "comp-5"},
},
{
comp: "comp-4",
result: []string{"comp-5"},
},
{
comp: "comp-5",
result: []string{},
},
}
gcHandler := &gcHandler{
resourceKeeper: rk,
}
r := require.New(t)
for _, tc := range testCases {
mr := v1beta1.ManagedResource{
OAMObjectReference: apicommon.OAMObjectReference{
Component: tc.comp,
},
}
r.Equal(gcHandler.checkDependentComponent(mr), tc.result)
}
}


@@ -35,6 +35,7 @@ func (h *resourceKeeper) StateKeep(ctx context.Context) error {
if h.applyOncePolicy != nil && h.applyOncePolicy.Enable && h.applyOncePolicy.Rules == nil {
return nil
}
ctx = auth.ContextWithUserInfo(ctx, h.app)
for _, rt := range []*v1beta1.ResourceTracker{h._currentRT, h._rootRT} {
if rt != nil && rt.GetDeletionTimestamp() == nil {
for _, mr := range rt.Spec.ManagedResources {
@@ -45,7 +46,6 @@ func (h *resourceKeeper) StateKeep(ctx context.Context) error {
if mr.Deleted {
if entry.exists && entry.obj != nil && entry.obj.GetDeletionTimestamp() == nil {
deleteCtx := multicluster.ContextWithClusterName(ctx, mr.Cluster)
deleteCtx = auth.ContextWithUserInfo(deleteCtx, h.app)
if err := h.Client.Delete(deleteCtx, entry.obj); err != nil {
return errors.Wrapf(err, "failed to delete outdated resource %s in resourcetracker %s", mr.ResourceKey(), rt.Name)
}
@@ -64,7 +64,6 @@ func (h *resourceKeeper) StateKeep(ctx context.Context) error {
if err != nil {
return errors.Wrapf(err, "failed to apply once resource %s from resourcetracker %s", mr.ResourceKey(), rt.Name)
}
applyCtx = auth.ContextWithUserInfo(applyCtx, h.app)
if err = h.applicator.Apply(applyCtx, manifest, apply.MustBeControlledByApp(h.app)); err != nil {
return errors.Wrapf(err, "failed to re-apply resource %s from resourcetracker %s", mr.ResourceKey(), rt.Name)
}


@@ -37,7 +37,7 @@ import (
const (
errAuthenticateProvider = "failed to authenticate Terraform cloud provider %s for %s"
errProviderExists = "terraform provider %s for %s already exists"
errDeleteProvider = "failed to delete Terraform Provider %s"
errDeleteProvider = "failed to delete Terraform Provider %s err: %w"
errCouldNotDeleteProvider = "the Terraform Provider %s could not be disabled because it was created by enabling a Terraform provider or was manually created"
errCheckProviderExistence = "failed to check if Terraform Provider %s exists"
)
@@ -112,16 +112,16 @@ func DeleteApplication(ctx context.Context, k8sClient client.Client, name string
name = legacyName
}
if err1 != nil {
if kerrors.IsNotFound(err1) {
err2 := k8sClient.Get(ctx, client.ObjectKey{Namespace: types.DefaultKubeVelaNS, Name: name}, &v1beta1.Application{})
if err2 != nil {
if kerrors.IsNotFound(err2) {
return fmt.Errorf(errCouldNotDeleteProvider, name)
}
return fmt.Errorf(errDeleteProvider, name)
}
if !kerrors.IsNotFound(err1) {
return fmt.Errorf(errDeleteProvider, name, err1)
}
err2 := k8sClient.Get(ctx, client.ObjectKey{Namespace: types.DefaultKubeVelaNS, Name: name}, &v1beta1.Application{})
if err2 != nil {
if kerrors.IsNotFound(err2) {
return fmt.Errorf(errCouldNotDeleteProvider, name)
}
return fmt.Errorf(errDeleteProvider, name, err2)
}
return fmt.Errorf(errDeleteProvider, name)
}
}
}


@@ -140,20 +140,16 @@ func (h *Helper) UpgradeChart(ch *chart.Chart, releaseName, namespace string, va
r.Info.Status == release.StatusPendingRollback {
return nil, fmt.Errorf("previous installation (e.g., using vela install or helm upgrade) is still in progress. Please try again in %d minutes", timeoutInMinutes)
}
}
// merge un-existing values into the values as user-input, because the helm chart upgrade didn't handle the new default values in the chart.
// the new default values <= the old custom values <= the new custom values
if config.ReuseValues {
// sort will sort the release by revision from old to new
relutil.SortByRevision(releases)
rel := releases[len(releases)-1]
// merge new chart values into old values, the values of old chart has the high priority
mergedWithNewValues := chartutil.CoalesceTables(rel.Chart.Values, ch.Values)
// merge the chart with the released chart config but follow the old config
mergeWithConfigs := chartutil.CoalesceTables(rel.Config, mergedWithNewValues)
// merge new values as the user input, follow the new user input for --set
values = chartutil.CoalesceTables(values, mergeWithConfigs)
values = chartutil.CoalesceTables(values, rel.Config)
}
// overwrite existing installation
@@ -161,7 +157,8 @@ func (h *Helper) UpgradeChart(ch *chart.Chart, releaseName, namespace string, va
install.Namespace = namespace
install.Wait = config.Wait
install.Timeout = time.Duration(timeoutInMinutes) * time.Minute
install.ReuseValues = config.ReuseValues
// use the new default value set.
install.ReuseValues = false
newRelease, err = install.Run(releaseName, ch, values)
}
// check if install/upgrade worked
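The hunk above simplifies `--reuse-values` merging: instead of coalescing new chart defaults with old release values, only the previous release's user-supplied config (`rel.Config`) is reused, with the caller's new `values` taking precedence — so new chart defaults apply unless the user overrode them. Helm's `chartutil.CoalesceTables(dst, src)` gives `dst` priority; a simplified stand-in (hypothetical `coalesce` helper, flat maps only, not Helm's actual recursive implementation) illustrates the precedence:

```go
package main

import "fmt"

// coalesce copies keys from src into dst without overwriting existing
// keys, mimicking the dst-wins semantics of chartutil.CoalesceTables
// used in `values = CoalesceTables(values, rel.Config)` above.
func coalesce(dst, src map[string]interface{}) map[string]interface{} {
	for k, v := range src {
		if _, ok := dst[k]; !ok {
			dst[k] = v
		}
	}
	return dst
}

func main() {
	newUserValues := map[string]interface{}{"image.tag": "0.2.0"}
	oldReleaseConfig := map[string]interface{}{"replicaCount": 2, "image.tag": "0.1.0"}
	merged := coalesce(newUserValues, oldReleaseConfig)
	fmt.Println(merged["image.tag"], merged["replicaCount"]) // 0.2.0 2
}
```

This matches the test added below: `replicaCount: 2` from the first release survives the upgrade, while the new `image.tag: 0.2.0` overrides the reused value.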


@@ -49,7 +49,10 @@ var _ = Describe("Test helm helper", func() {
helper := NewHelper()
chart, err := helper.LoadCharts("./testdata/autoscalertrait-0.1.0.tgz", nil)
Expect(err).Should(BeNil())
release, err := helper.UpgradeChart(chart, "autoscalertrait", "default", nil, UpgradeChartOptions{
release, err := helper.UpgradeChart(chart, "autoscalertrait", "default", map[string]interface{}{
"replicaCount": 2,
"image.tag": "0.1.0",
}, UpgradeChartOptions{
Config: cfg,
Detail: false,
Logging: util.IOStreams{Out: os.Stdout, ErrOut: os.Stderr},
@@ -58,6 +61,42 @@ var _ = Describe("Test helm helper", func() {
crds := GetCRDFromChart(release.Chart)
Expect(cmp.Diff(len(crds), 1)).Should(BeEmpty())
Expect(err).Should(BeNil())
deployments := GetDeploymentsFromManifest(release.Manifest)
Expect(cmp.Diff(len(deployments), 1)).Should(BeEmpty())
Expect(cmp.Diff(*deployments[0].Spec.Replicas, int32(2))).Should(BeEmpty())
containers := deployments[0].Spec.Template.Spec.Containers
Expect(cmp.Diff(len(containers), 1)).Should(BeEmpty())
// add new default value
Expect(cmp.Diff(containers[0].Image, "ghcr.io/oam-dev/catalog/autoscalertrait:0.1.0")).Should(BeEmpty())
chartNew, err := helper.LoadCharts("./testdata/autoscalertrait-0.2.0.tgz", nil)
Expect(err).Should(BeNil())
// the new custom values should override the last release custom values
releaseNew, err := helper.UpgradeChart(chartNew, "autoscalertrait", "default", map[string]interface{}{
"image.tag": "0.2.0",
}, UpgradeChartOptions{
Config: cfg,
Detail: false,
ReuseValues: true,
Logging: util.IOStreams{Out: os.Stdout, ErrOut: os.Stderr},
Wait: false,
})
Expect(err).Should(BeNil())
deployments = GetDeploymentsFromManifest(releaseNew.Manifest)
Expect(cmp.Diff(len(deployments), 1)).Should(BeEmpty())
// keep the custom values
Expect(cmp.Diff(*deployments[0].Spec.Replicas, int32(2))).Should(BeEmpty())
containers = deployments[0].Spec.Template.Spec.Containers
Expect(cmp.Diff(len(containers), 1)).Should(BeEmpty())
// change the default value
Expect(cmp.Diff(containers[0].Image, "ghcr.io/oam-dev/catalog/autoscalertrait:0.2.0")).Should(BeEmpty())
// add new default value
Expect(cmp.Diff(len(containers[0].Env), 1)).Should(BeEmpty())
Expect(cmp.Diff(containers[0].Env[0].Name, "env1")).Should(BeEmpty())
})
It("Test UninstallRelease", func() {




@@ -110,6 +110,9 @@ func (h *provider) ListResourcesInApp(ctx wfContext.Context, v *value.Value, act
if err != nil {
return v.FillObject(err.Error(), "err")
}
if appResList == nil {
appResList = make([]Resource, 0)
}
return fillQueryResult(v, appResList, "list")
}
@@ -133,6 +136,9 @@ func (h *provider) ListAppliedResources(ctx wfContext.Context, v *value.Value, a
if err != nil {
return v.FillObject(err.Error(), "err")
}
if appResList == nil {
appResList = make([]*querytypes.AppliedResource, 0)
}
return fillQueryResult(v, appResList, "list")
}
@@ -293,7 +299,7 @@ func (h *provider) GeneratorServiceEndpoints(wfctx wfContext.Context, v *value.V
if err != nil {
return fmt.Errorf("query app failure %w", err)
}
var serviceEndpoints []querytypes.ServiceEndpoint
serviceEndpoints := make([]querytypes.ServiceEndpoint, 0)
var clusterGatewayNodeIP = make(map[string]string)
collector := NewAppCollector(h.cli, opt)
resources, err := collector.ListApplicationResources(app)


@@ -960,12 +960,12 @@ options: {
fmt.Sprintf("http://%s:30229", gatewayIP),
"http://10.10.10.10",
"http://text.example.com",
"tcp://10.10.10.10:81",
"tcp://text.example.com:81",
"10.10.10.10:81",
"text.example.com:81",
// helmRelease
fmt.Sprintf("http://%s:30002", gatewayIP),
"http://ingress.domain.helm",
"tcp://1.1.1.1:80/seldon/test",
"1.1.1.1:80/seldon/test",
}
endValue, err := v.Field("list")
Expect(err).Should(BeNil())


@@ -23,6 +23,7 @@ import (
appsv1 "k8s.io/api/apps/v1"
v12 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/meta"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
@@ -677,6 +678,10 @@ func iteratorChildResources(ctx context.Context, cluster string, k8sClient clien
clusterCTX := multicluster.ContextWithClusterName(ctx, cluster)
items, err := listItemByRule(clusterCTX, k8sClient, resource, *parentObject, specifiedFunc, rules.DefaultGenListOptionFunc)
if err != nil {
if meta.IsNoMatchError(err) || runtime.IsNotRegisteredError(err) {
log.Logger.Errorf("error to list subresources: %s err: %v", resource.Kind, err)
continue
}
return nil, err
}
for _, item := range items {


@@ -1380,6 +1380,54 @@ var _ = Describe("unit-test to e2e test", func() {
Expect(err).Should(BeNil())
Expect(len(res.List)).Should(Equal(2))
})
It("Test not exist api don't break whole process", func() {
notExistRuleStr := `
- parentResourceType:
group: apps
kind: Deployment
childrenResourceType:
- apiVersion: v2
kind: Pod
`
notExistParentResourceStr := `
- parentResourceType:
group: badgroup
kind: Deployment
childrenResourceType:
- apiVersion: v2
kind: Pod
`
Expect(k8sClient.Create(ctx, &v1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "vela-system"}})).Should(SatisfyAny(BeNil(), util.AlreadyExistMatcher{}))
badRuleConfigMap := v1.ConfigMap{TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
ObjectMeta: metav1.ObjectMeta{Namespace: types3.DefaultKubeVelaNS, Name: "bad-rule", Labels: map[string]string{oam.LabelResourceRules: "true"}},
Data: map[string]string{relationshipKey: notExistRuleStr},
}
Expect(k8sClient.Create(ctx, &badRuleConfigMap)).Should(BeNil())
notExistParentConfigMap := v1.ConfigMap{TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
ObjectMeta: metav1.ObjectMeta{Namespace: types3.DefaultKubeVelaNS, Name: "not-exist-parent", Labels: map[string]string{oam.LabelResourceRules: "true"}},
Data: map[string]string{relationshipKey: notExistParentResourceStr},
}
Expect(k8sClient.Create(ctx, &notExistParentConfigMap)).Should(BeNil())
prd := provider{cli: k8sClient}
opt := `app: {
name: "app"
namespace: "test-namespace"
}`
v, err := value.NewValue(opt, nil, "")
Expect(err).Should(BeNil())
Expect(prd.GetApplicationResourceTree(nil, v, nil)).Should(BeNil())
type Res struct {
List []types.AppliedResource `json:"list"`
}
var res Res
err = v.UnmarshalTo(&res)
Expect(err).Should(BeNil())
Expect(len(res.List)).Should(Equal(2))
})
})
var _ = Describe("test merge globalRules", func() {


@@ -58,6 +58,9 @@ func (s *ServiceEndpoint) String() string {
if (protocol == HTTPS && s.Endpoint.Port == 443) || (protocol == HTTP && s.Endpoint.Port == 80) {
return fmt.Sprintf("%s://%s%s", protocol, s.Endpoint.Host, path)
}
if protocol == "tcp" {
return fmt.Sprintf("%s:%d%s", s.Endpoint.Host, s.Endpoint.Port, path)
}
return fmt.Sprintf("%s://%s:%d%s", protocol, s.Endpoint.Host, s.Endpoint.Port, path)
}
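The change above drops the misleading `tcp://` scheme for raw TCP endpoints and prints them as bare `host:port`, which is why the expected endpoint strings in the test hunk further up change from `tcp://10.10.10.10:81` to `10.10.10.10:81`. A sketch of the resulting formatting rules (a flattened `endpointString` signature is used here for illustration; the real method reads fields from `ServiceEndpoint`):

```go
package main

import "fmt"

// endpointString sketches the rules from the diff: default ports for
// http/https are omitted, plain tcp endpoints get no scheme prefix,
// and everything else is printed as scheme://host:port/path.
func endpointString(protocol, host string, port int, path string) string {
	if (protocol == "https" && port == 443) || (protocol == "http" && port == 80) {
		return fmt.Sprintf("%s://%s%s", protocol, host, path)
	}
	if protocol == "tcp" {
		return fmt.Sprintf("%s:%d%s", host, port, path)
	}
	return fmt.Sprintf("%s://%s:%d%s", protocol, host, port, path)
}

func main() {
	fmt.Println(endpointString("http", "example.com", 80, "/"))   // http://example.com/
	fmt.Println(endpointString("tcp", "10.10.10.10", 81, ""))     // 10.10.10.10:81
	fmt.Println(endpointString("http", "example.com", 30229, "")) // http://example.com:30229
}
```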


@@ -223,12 +223,25 @@ func getAddonRegistry(ctx context.Context, c common.Args, name string) error {
return err
}
table := uitable.New()
if registry.OSS != nil {
switch {
case registry.OSS != nil:
table.AddRow("NAME", "Type", "ENDPOINT", "BUCKET", "PATH")
table.AddRow(registry.Name, "OSS", registry.OSS.Endpoint, registry.OSS.Bucket, registry.OSS.Path)
} else {
case registry.Helm != nil:
table.AddRow("NAME", "Type", "ENDPOINT")
table.AddRow(registry.Name, "Helm", registry.Helm.URL)
case registry.Gitee != nil:
table.AddRow("NAME", "Type", "ENDPOINT", "PATH")
table.AddRow(registry.Name, "git", registry.Git.URL, registry.Git.Path)
table.AddRow(registry.Name, "Gitee", registry.Gitee.URL, registry.Gitee.Path)
case registry.Gitlab != nil:
table.AddRow("NAME", "Type", "ENDPOINT", "REPOSITORY", "PATH")
table.AddRow(registry.Name, "Gitlab", registry.Gitlab.URL, registry.Gitlab.Repo, registry.Gitlab.Path)
case registry.Git != nil:
table.AddRow("NAME", "Type", "ENDPOINT", "PATH")
table.AddRow(registry.Name, "Git", registry.Git.URL, registry.Git.Path)
default:
table.AddRow("Name")
table.AddRow(registry.Name)
}
fmt.Println(table.String())
return nil


@@ -30,6 +30,7 @@ import (
"helm.sh/helm/v3/pkg/strvals"
"github.com/oam-dev/kubevela/pkg/apiserver/domain/service"
"github.com/oam-dev/kubevela/pkg/oam"
"k8s.io/client-go/rest"
@@ -201,8 +202,7 @@ func AdditionalEndpointPrinter(ctx context.Context, c common.Args, k8sClient cli
}
if name == "velaux" {
if !isUpgrade {
fmt.Println(`To check the initialized admin user name and password by:`)
fmt.Println(` vela logs -n vela-system --name apiserver addon-velaux | grep "initialized admin username"`)
fmt.Printf("Initialized admin username and password: admin / %s \n", service.InitAdminPassword)
}
fmt.Println(`To open the dashboard directly by port-forward:`)
fmt.Println(` vela port-forward -n vela-system addon-velaux 9082:80`)
@@ -367,6 +367,9 @@ func enableAddon(ctx context.Context, k8sClient client.Client, dc *discovery.Dis
continue
}
if err != nil {
if errors.As(err, &pkgaddon.VersionUnMatchError{}) {
return fmt.Errorf("%w\nyou can try another version by command: \"vela addon enable %s --version <version> \" ", err, name)
}
return err
}
if err = waitApplicationRunning(k8sClient, name); err != nil {


@@ -71,7 +71,7 @@ func NewCommandWithIOStreams(ioStream util.IOStreams) *cobra.Command {
commandArgs := common.Args{
Schema: common.Scheme,
}
f := velacmd.NewDefaultFactory(config.GetConfigOrDie())
f := velacmd.NewDeferredFactory(config.GetConfig)
if err := system.InitDirs(); err != nil {
fmt.Println("InitDir err", err)
@@ -86,6 +86,7 @@ func NewCommandWithIOStreams(ioStream util.IOStreams) *cobra.Command {
NewCapabilityShowCommand(commandArgs, ioStream),
// Manage Apps
NewQlCommand(commandArgs, "10", ioStream),
NewListCommand(commandArgs, "9", ioStream),
NewAppStatusCommand(commandArgs, "8", ioStream),
NewDeleteCommand(commandArgs, "7", ioStream),


@@ -226,7 +226,7 @@ func PrintInstalledCompDef(c common2.Args, io cmdutil.IOStreams, filter filterFu
}
table := newUITable()
table.AddRow("NAME", "DEFINITION")
table.AddRow("NAME", "DEFINITION", "DESCRIPTION")
for _, cd := range list.Items {
data, err := json.Marshal(cd)
@@ -242,7 +242,7 @@ func PrintInstalledCompDef(c common2.Args, io cmdutil.IOStreams, filter filterFu
if filter != nil && !filter(capa) {
continue
}
table.AddRow(capa.Name, capa.CrdName)
table.AddRow(capa.Name, capa.CrdName, capa.Description)
}
io.Info(table.String())
return nil


@@ -20,7 +20,9 @@ import (
"context"
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/gosuri/uitable"
tcv1beta1 "github.com/oam-dev/terraform-controller/api/v1beta1"
@@ -42,7 +44,7 @@ import (
const (
providerNameParam = "name"
errAuthenticateProvider = "failed to authenticate Terraform cloud provider %s"
errAuthenticateProvider = "failed to authenticate Terraform cloud provider %s err: %w"
)
// NewProviderCommand create `addon` command
@@ -56,19 +58,11 @@ func NewProviderCommand(c common.Args, order string, ioStreams cmdutil.IOStreams
types.TagCommandType: types.TypeExtension,
},
}
add, err := prepareProviderAddCommand(c, ioStreams)
if err == nil {
cmd.AddCommand(add)
}
delete, err := prepareProviderDeleteCommand(c, ioStreams)
if err == nil {
cmd.AddCommand(delete)
}
cmd.AddCommand(
NewProviderListCommand(c, ioStreams),
)
cmd.AddCommand(prepareProviderAddCommand(c, ioStreams))
cmd.AddCommand(prepareProviderDeleteCommand(c, ioStreams))
return cmd
}
@@ -93,13 +87,7 @@ func NewProviderListCommand(c common.Args, ioStreams cmdutil.IOStreams) *cobra.C
}
}
func prepareProviderAddCommand(c common.Args, ioStreams cmdutil.IOStreams) (*cobra.Command, error) {
ctx := context.Background()
k8sClient, err := c.GetClient()
if err != nil {
return nil, err
}
func prepareProviderAddCommand(c common.Args, ioStreams cmdutil.IOStreams) *cobra.Command {
cmd := &cobra.Command{
Use: "add",
Short: "Authenticate Terraform Cloud Provider",
@@ -107,13 +95,13 @@ func prepareProviderAddCommand(c common.Args, ioStreams cmdutil.IOStreams) (*cob
Example: "vela provider add <provider-type>",
}
addSubCommands, err := prepareProviderAddSubCommand(c, ioStreams)
if err != nil {
return nil, err
}
cmd.AddCommand(addSubCommands...)
cmd.RunE = func(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
defer cancel()
k8sClient, err := c.GetClient()
if err != nil {
return err
}
defs, err := getTerraformProviderTypes(ctx, k8sClient)
if len(args) < 1 {
errMsg := "must specify a Terraform Cloud Provider type"
@@ -143,11 +131,21 @@ func prepareProviderAddCommand(c common.Args, ioStreams cmdutil.IOStreams) (*cob
}
return nil
}
return cmd, nil
addSubCommands, err := prepareProviderAddSubCommand(c, ioStreams)
if err != nil {
ioStreams.Errorf("Fail to prepare the sub commands for the add command:%s \n", err.Error())
}
cmd.AddCommand(addSubCommands...)
return cmd
}
func prepareProviderAddSubCommand(c common.Args, ioStreams cmdutil.IOStreams) ([]*cobra.Command, error) {
ctx := context.Background()
if len(os.Args) < 2 || os.Args[1] != "provider" {
return nil, nil
}
ctx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
defer cancel()
k8sClient, err := c.GetClient()
if err != nil {
return nil, err
@@ -188,11 +186,11 @@ func prepareProviderAddSubCommand(c common.Args, ioStreams cmdutil.IOStreams) ([
}
data, err := json.Marshal(properties)
if err != nil {
return fmt.Errorf(errAuthenticateProvider, providerType)
return fmt.Errorf(errAuthenticateProvider, providerType, err)
}
if err := config.CreateApplication(ctx, k8sClient, name, providerType, string(data), config.UIParam{}); err != nil {
return fmt.Errorf(errAuthenticateProvider, providerType)
return fmt.Errorf(errAuthenticateProvider, providerType, err)
}
ioStreams.Infof("Successfully authenticate provider %s for %s\n", name, providerType)
return nil
@@ -315,28 +313,28 @@ func getTerraformProviderType(ctx context.Context, k8sClient client.Client, name
return def, nil
}
-func prepareProviderDeleteCommand(c common.Args, ioStreams cmdutil.IOStreams) (*cobra.Command, error) {
-ctx := context.Background()
-k8sClient, err := c.GetClient()
-if err != nil {
-return nil, err
-}
+func prepareProviderDeleteCommand(c common.Args, ioStreams cmdutil.IOStreams) *cobra.Command {
cmd := &cobra.Command{
Use: "delete",
Aliases: []string{"rm", "del"},
Short: "Delete Terraform Cloud Provider",
Long: "Delete Terraform Cloud Provider",
-Example: "vela provider delete <provider-type> -name <provider-name>",
+Example: "vela provider delete <provider-type> --name <provider-name>",
}
deleteSubCommands, err := prepareProviderDeleteSubCommand(c, ioStreams)
if err != nil {
-return nil, err
+ioStreams.Errorf("Fail to prepare the sub commands for the delete command:%s \n", err.Error())
}
cmd.AddCommand(deleteSubCommands...)
cmd.RunE = func(cmd *cobra.Command, args []string) error {
+k8sClient, err := c.GetClient()
+if err != nil {
+return err
+}
+ctx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
+defer cancel()
defs, err := getTerraformProviderTypes(ctx, k8sClient)
if len(args) < 1 {
errMsg := "must specify a Terraform Cloud Provider type"
@@ -366,11 +364,15 @@ func prepareProviderDeleteCommand(c common.Args, ioStreams cmdutil.IOStreams) (*
}
return nil
}
-return cmd, nil
+return cmd
}
func prepareProviderDeleteSubCommand(c common.Args, ioStreams cmdutil.IOStreams) ([]*cobra.Command, error) {
-ctx := context.Background()
+if len(os.Args) < 2 || os.Args[1] != "provider" {
+return nil, nil
+}
+ctx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
+defer cancel()
k8sClient, err := c.GetClient()
if err != nil {
return nil, err


@@ -228,6 +228,7 @@ func PrintInstalledTraitDef(c common2.Args, io cmdutil.IOStreams, filter filterF
table := newUITable()
-table.AddRow("NAME", "APPLIES-TO")
+table.AddRow("NAME", "APPLIES-TO", "DESCRIPTION")
for _, td := range list.Items {
data, err := json.Marshal(td)
@@ -243,7 +244,7 @@ func PrintInstalledTraitDef(c common2.Args, io cmdutil.IOStreams, filter filterF
if filter != nil && !filter(capa) {
continue
}
-table.AddRow(capa.Name, capa.AppliesTo)
+table.AddRow(capa.Name, capa.AppliesTo, capa.Description)
}
io.Info(table.String())
return nil


@@ -209,6 +209,21 @@ func forceDisableAddon(ctx context.Context, kubeClient client.Client, config *re
if err := pkgaddon.DisableAddon(ctx, kubeClient, "fluxcd", config, true); err != nil {
return err
}
+timeConsumed = time.Now()
+for {
+if time.Now().After(timeConsumed.Add(5 * time.Minute)) {
+return errors.New("timeout disable fluxcd addon, please disable the addon manually")
+}
+addons, err := checkInstallAddon(kubeClient)
+if err != nil {
+return err
+}
+if len(addons) == 0 {
+break
+}
+fmt.Printf("Waiting delete the fluxcd addon, timeout left %s . \r\n", 5*time.Minute-time.Since(timeConsumed))
+time.Sleep(2 * time.Second)
+}
}
return nil
}


@@ -17,12 +17,18 @@ limitations under the License.
package cli
import (
"bytes"
"context"
"encoding/json"
"fmt"
"github.com/spf13/cobra"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/cue/model/value"
"github.com/oam-dev/kubevela/pkg/utils/common"
"github.com/oam-dev/kubevela/pkg/utils/util"
"github.com/oam-dev/kubevela/pkg/velaql"
querytypes "github.com/oam-dev/kubevela/pkg/velaql/providers/query/types"
)
@@ -34,6 +40,54 @@ type Filter struct {
ClusterNamespace string
}
+// NewQlCommand creates `ql` command for executing velaQL
+func NewQlCommand(c common.Args, order string, ioStreams util.IOStreams) *cobra.Command {
+ctx := context.Background()
+cmd := &cobra.Command{
+Use: "ql",
+Short: "Show result of executing velaQL.",
+Long: "Show result of executing velaQL.",
+Example: `vela ql "view{parameter=value1,parameter=value2}"`,
+RunE: func(cmd *cobra.Command, args []string) error {
+argsLength := len(args)
+if argsLength == 0 {
+return fmt.Errorf("please specify an VelaQL statement")
+}
+velaQL := args[0]
+newClient, err := c.GetClient()
+if err != nil {
+return err
+}
+return printVelaQLResult(ctx, newClient, c, velaQL, cmd)
+},
+Annotations: map[string]string{
+types.TagCommandOrder: order,
+types.TagCommandType: types.TypeApp,
+},
+}
+cmd.SetOut(ioStreams.Out)
+return cmd
+}
+// printVelaQLResult show velaQL result
+func printVelaQLResult(ctx context.Context, client client.Client, velaC common.Args, velaQL string, cmd *cobra.Command) error {
+queryValue, err := QueryValue(ctx, client, velaC, velaQL)
+if err != nil {
+return err
+}
+response, err := queryValue.CueValue().MarshalJSON()
+if err != nil {
+return err
+}
+var out bytes.Buffer
+err = json.Indent(&out, response, "", "  ")
+if err != nil {
+return err
+}
+cmd.Printf("%s\n", out.String())
+return nil
+}
// MakeVelaQL build velaQL
func MakeVelaQL(view string, params map[string]string, action string) string {
var paramString string
@@ -49,34 +103,20 @@ func MakeVelaQL(view string, params map[string]string, action string) string {
// GetServiceEndpoints get service endpoints by velaQL
func GetServiceEndpoints(ctx context.Context, client client.Client, appName string, namespace string, velaC common.Args, f Filter) ([]querytypes.ServiceEndpoint, error) {
-dm, err := velaC.GetDiscoveryMapper()
-if err != nil {
-return nil, err
-}
-pd, err := velaC.GetPackageDiscover()
-if err != nil {
-return nil, err
-}
-parmas := map[string]string{
+params := map[string]string{
"appName": appName,
"appNs": namespace,
}
if f.Component != "" {
-parmas["name"] = f.Component
+params["name"] = f.Component
}
if f.Cluster != "" && f.ClusterNamespace != "" {
-parmas["cluster"] = f.Cluster
-parmas["clusterNs"] = f.ClusterNamespace
+params["cluster"] = f.Cluster
+params["clusterNs"] = f.ClusterNamespace
}
-queryView, err := velaql.ParseVelaQL(MakeVelaQL("service-endpoints-view", parmas, "status"))
if err != nil {
return nil, err
}
-config, err := velaC.GetConfig()
-if err != nil {
-return nil, err
-}
-queryValue, err := velaql.NewViewHandler(client, config, dm, pd).QueryView(ctx, queryView)
+velaQL := MakeVelaQL("service-endpoints-view", params, "status")
+queryValue, err := QueryValue(ctx, client, velaC, velaQL)
if err != nil {
return nil, err
}
@@ -92,3 +132,28 @@ func GetServiceEndpoints(ctx context.Context, client client.Client, appName stri
}
return response.Endpoints, nil
}
+// QueryValue get queryValue from velaQL
+func QueryValue(ctx context.Context, client client.Client, velaC common.Args, velaQL string) (*value.Value, error) {
+dm, err := velaC.GetDiscoveryMapper()
+if err != nil {
+return nil, err
+}
+pd, err := velaC.GetPackageDiscover()
+if err != nil {
+return nil, err
+}
+queryView, err := velaql.ParseVelaQL(velaQL)
+if err != nil {
+return nil, err
+}
+config, err := velaC.GetConfig()
+if err != nil {
+return nil, err
+}
+queryValue, err := velaql.NewViewHandler(client, config, dm, pd).QueryView(ctx, queryView)
+if err != nil {
+return nil, err
+}
+return queryValue, nil
+}


@@ -393,8 +393,8 @@ var _ = Describe("Test velaQL", func() {
fmt.Sprintf("http://%s:30229", gatewayIP),
"http://10.10.10.10",
"http://text.example.com",
-"tcp://10.10.10.10:81",
-"tcp://text.example.com:81",
+"10.10.10.10:81",
+"text.example.com:81",
// helmRelease
fmt.Sprintf("http://%s:30002", gatewayIP),
"http://ingress.domain.helm",


@@ -17,8 +17,8 @@ limitations under the License.
package main
import (
"log"
"math/rand"
"os"
"time"
utilfeature "k8s.io/apiserver/pkg/util/feature"
@@ -37,6 +37,6 @@ func main() {
command := cli.NewCommand()
if err := command.Execute(); err != nil {
log.Fatal(err)
os.Exit(1)
}
}


@@ -134,7 +134,8 @@ var _ = Describe("Test MultiCluster Rollout", func() {
verifySucceed(componentName + "-v1")
})
-It("Test Rollout with health check policy, guarantee health scope controller work ", func() {
+// HealthScopeController will not work properly with authentication module now
+PIt("Test Rollout with health check policy, guarantee health scope controller work ", func() {
app := &v1beta1.Application{}
appYaml, err := ioutil.ReadFile("./testdata/app/multi-cluster-health-policy.yaml")
Expect(err).Should(Succeed())


@@ -344,9 +344,11 @@ var _ = Describe("Test multicluster scenario", func() {
envs[1].Placement.ClusterSelector.Name = multicluster.ClusterLocalName
bs, err = json.Marshal(&v1alpha1.EnvBindingSpec{Envs: envs})
Expect(err).Should(Succeed())
-Expect(k8sClient.Get(hubCtx, namespacedName, app)).Should(Succeed())
-app.Spec.Policies[0].Properties.Raw = bs
-Expect(k8sClient.Update(hubCtx, app)).Should(Succeed())
+Eventually(func(g Gomega) {
+g.Expect(k8sClient.Get(hubCtx, namespacedName, app)).Should(Succeed())
+app.Spec.Policies[0].Properties.Raw = bs
+g.Expect(k8sClient.Update(hubCtx, app)).Should(Succeed())
+}, 15*time.Second).Should(Succeed())
Eventually(func(g Gomega) {
deploys := &appsv1.DeploymentList{}
g.Expect(k8sClient.List(hubCtx, deploys, client.InNamespace(testNamespace))).Should(Succeed())

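Wrapping the Get-modify-Update sequence in `Eventually` retries the whole read-modify-write when the apiserver rejects a stale `resourceVersion`. The underlying optimistic-concurrency retry, sketched without Gomega or client-go — every name here is illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for a Kubernetes 409 Conflict error.
var errConflict = errors.New("conflict: object was modified")

// updateWithRetry re-reads the object and reapplies the mutation whenever
// the write hits a conflict, instead of failing on the first stale write —
// the behavior the Eventually block adds to the test above.
func updateWithRetry(get func() (int, error), mutate func(int) int, update func(int) error, attempts int) error {
	for i := 0; i < attempts; i++ {
		current, err := get() // refetch to pick up the latest version
		if err != nil {
			return err
		}
		err = update(mutate(current))
		if err == nil {
			return nil
		}
		if !errors.Is(err, errConflict) {
			return err // only conflicts are retryable
		}
	}
	return fmt.Errorf("still conflicting after %d attempts", attempts)
}

func main() {
	tries := 0
	err := updateWithRetry(
		func() (int, error) { return 1, nil },
		func(v int) int { return v + 1 },
		func(_ int) error {
			tries++
			if tries < 3 {
				return errConflict
			}
			return nil
		},
		5)
	fmt.Println("err:", err, "tries:", tries)
}
```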

@@ -1,7 +1,9 @@
"ref-objects": {
type: "component"
annotations: {}
-labels: {}
+labels: {
+"ui-hidden": "true"
+}
description: "Ref-objects allow users to specify ref objects to use. Notice that this component type have special handle logic."
attributes: {
workload: type: "autodetects.core.oam.dev"


@@ -1,7 +1,9 @@
"container-image": {
type: "trait"
annotations: {}
-labels: {}
+labels: {
+"ui-hidden": "true"
+}
description: "Set the image of the container."
attributes: {
podDisruptive: true


@@ -1,7 +1,9 @@
"json-merge-patch": {
type: "trait"
annotations: {}
-labels: {}
+labels: {
+"ui-hidden": "true"
+}
description: "Patch the output following Json Merge Patch strategy, following RFC 7396."
attributes: {
podDisruptive: true


@@ -1,7 +1,9 @@
"json-patch": {
type: "trait"
annotations: {}
-labels: {}
+labels: {
+"ui-hidden": "true"
+}
description: "Patch the output following Json Patch strategy, following RFC 6902."
attributes: {
podDisruptive: true


@@ -5,7 +5,9 @@ import (
"deploy2env": {
type: "workflow-step"
annotations: {}
-labels: {}
+labels: {
+"ui-hidden": "true"
+}
description: "Deploy env binding component to target env"
}
template: {


@@ -1,7 +1,9 @@
"step-group": {
type: "workflow-step"
annotations: {}
-labels: {}
+labels: {
+"ui-hidden": "true"
+}
description: "step group"
}
template: {