Compare commits


21 Commits

Author SHA1 Message Date
Jianbo Sun
b596b70ebe Fix: addon function converted (#4411)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-07-19 19:54:44 +08:00
github-actions[bot]
0cd370e867 Fix: fix volumes duplicate in list (#4390)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 08fb73aa95)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-07-15 20:11:49 +08:00
github-actions[bot]
d9adc73e5c Fix: add usage comment for ref-objects (#4386)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 0418c83117)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-07-14 18:51:05 +08:00
github-actions[bot]
4a2d9807c8 [Backport release-1.4] Fix: fail directly when app terminated (#4385)
* fail directly when app terminated

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 842c211cf6)

* support suspend

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

fix typo

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 307ef372f1)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-07-14 18:50:23 +08:00
github-actions[bot]
840cb8ce58 Fix: several minor bugs (#4381)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit c5f10a5723)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-07-14 11:22:41 +08:00
github-actions[bot]
5a64fec916 Fix: abuse timeout context in terraform provider (#4375)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit ab56b3c274)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-07-13 15:48:13 +08:00
Jianbo Sun
657a374ded Fix: add the job of independently publishing chart packages (#4360) (#4361)
* Fix: add the job of independently publishing chart packages

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>

* Fix: add the job of independently publishing chart packages

Signed-off-by: barnettZQG <barnett.zqg@gmail.com>

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-07-12 12:31:05 +08:00
Tianxin Dong
dfe12cd9ca [Backport-1.4]: optimize imports packages to reduce 75% cpu with better performance (#4355)
* Feat: optimize imports packages

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>

* fix test

Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-07-11 18:33:53 +08:00
github-actions[bot]
cd42f67848 Fix: init container bug (#4354)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 3f116c7f10)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-07-11 17:03:49 +08:00
github-actions[bot]
61d2c588e3 Fix: health check use original ns if no override and original exists (#4353)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit f7923b5ac9)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-07-11 16:49:24 +08:00
github-actions[bot]
b3dad698a5 Fix: enhance sidecar & init traits (#4343)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit dc9b18d119)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-07-08 19:09:55 +08:00
github-actions[bot]
ec5159c2ca Fix: disable apprev status update when apprev disabled (#4338)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit b4c8e3265a)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-07-07 15:51:35 +08:00
github-actions[bot]
a7b2b221e0 [Backport release-1.4] Fix: more cluster system info range. (#4333)
* more collect info

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit b27764e072)

* fix comments

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
(cherry picked from commit 57805aa844)

Co-authored-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2022-07-06 16:15:31 +08:00
github-actions[bot]
caa495a5d9 [Backport release-1.4] Fix: ref-objects parameter with invalid field definition (#4330)
* fix: ref-objects parameter with invalid field definition

which cause validating webhook failed when use ref-objects component

Signed-off-by: jiangshantao <jiangshantao-dbg@qq.com>
(cherry picked from commit 13f328f362)

* fix: run make reviewable

Signed-off-by: jiangshantao <jiangshantao-dbg@qq.com>
(cherry picked from commit e09410af90)

Co-authored-by: jst <jst@meitu.com>
2022-07-06 14:18:17 +08:00
github-actions[bot]
15bea4fb64 Fix: fail to query the application logs with the special characters (#4308)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>
(cherry picked from commit 0101396a42)

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-07-01 20:15:09 +08:00
github-actions[bot]
1a094a4eea Fix: Jfrog Webhook Handler Cannot Get Right Image (#4306)
Merge branch 'release-1.4'

Apply suggestions from code review

Co-authored-by: lqs429521992 <lqs429521992@qq.com>

Update webhook.go

Fix: format

Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit b0d0a81731)

Co-authored-by: qingsliu <lqs429521992@qq.com>
2022-07-01 20:08:40 +08:00
github-actions[bot]
65b6f47330 Fix: fix the goroutine leak in http request (#4304)
Signed-off-by: FogDong <dongtianxin.tx@alibaba-inc.com>
(cherry picked from commit 559ef83abd)

Co-authored-by: FogDong <dongtianxin.tx@alibaba-inc.com>
2022-07-01 17:57:34 +08:00
github-actions[bot]
3a4cd2dca6 Fix: kube apply ignore userinfo for rt (#4300)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit 2a9e741d4c)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-07-01 17:32:52 +08:00
github-actions[bot]
9fabd950e5 Chore: avoid update version file when publish smaller version (#4287)
Signed-off-by: qiaozp <chivalry.pp@gmail.com>
(cherry picked from commit 696234e1aa)

Co-authored-by: qiaozp <chivalry.pp@gmail.com>
2022-06-30 15:51:17 +08:00
github-actions[bot]
0a012b4d34 Feat: enhance ServiceAccount trait to support privileges (#4278)
Signed-off-by: Somefive <yd219913@alibaba-inc.com>
(cherry picked from commit f8d4aca499)

Co-authored-by: Somefive <yd219913@alibaba-inc.com>
2022-06-29 15:00:32 +08:00
github-actions[bot]
7c231e6c48 Fix: support stdin and url for vela ql (#4277)
Signed-off-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
(cherry picked from commit 235a4471c0)

Co-authored-by: Jianbo Sun <jianbo.sjb@alibaba-inc.com>
2022-06-29 14:53:26 +08:00
50 changed files with 978 additions and 255 deletions

.github/workflows/chart.yaml (new file, 89 additions)

@@ -0,0 +1,89 @@
name: Publish Chart
on:
push:
tags:
- "v*"
workflow_dispatch: { }
env:
BUCKET: ${{ secrets.OSS_BUCKET }}
ENDPOINT: ${{ secrets.OSS_ENDPOINT }}
ACCESS_KEY: ${{ secrets.OSS_ACCESS_KEY }}
ACCESS_KEY_SECRET: ${{ secrets.OSS_ACCESS_KEY_SECRET }}
ARTIFACT_HUB_REPOSITORY_ID: ${{ secrets.ARTIFACT_HUB_REPOSITORY_ID }}
jobs:
publish-charts:
env:
HELM_CHARTS_DIR: charts
HELM_CHART: charts/vela-core
MINIMAL_HELM_CHART: charts/vela-minimal
LEGACY_HELM_CHART: legacy/charts/vela-core-legacy
VELA_ROLLOUT_HELM_CHART: runtime/rollout/charts
LOCAL_OSS_DIRECTORY: .oss/
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@master
- name: Get git revision
id: vars
shell: bash
run: |
echo "::set-output name=git_revision::$(git rev-parse --short HEAD)"
- name: Install Helm
uses: azure/setup-helm@v1
with:
version: v3.4.0
- name: Setup node
uses: actions/setup-node@v2
with:
node-version: '14'
- name: Generate helm doc
run: |
make helm-doc-gen
- name: Prepare legacy chart
run: |
rsync -r $LEGACY_HELM_CHART $HELM_CHARTS_DIR
rsync -r $HELM_CHART/* $LEGACY_HELM_CHART --exclude=Chart.yaml --exclude=crds
- name: Prepare vela chart
run: |
rsync -r $VELA_ROLLOUT_HELM_CHART $HELM_CHARTS_DIR
- name: Get the version
id: get_version
run: |
VERSION=${GITHUB_REF#refs/tags/}
echo ::set-output name=VERSION::${VERSION}
- name: Tag helm chart image
run: |
image_tag=${{ steps.get_version.outputs.VERSION }}
chart_version=${{ steps.get_version.outputs.VERSION }}
sed -i "s/latest/${image_tag}/g" $HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $MINIMAL_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $LEGACY_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $VELA_ROLLOUT_HELM_CHART/values.yaml
chart_smever=${chart_version#"v"}
sed -i "s/0.1.0/$chart_smever/g" $HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_smever/g" $MINIMAL_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_smever/g" $LEGACY_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_smever/g" $VELA_ROLLOUT_HELM_CHART/Chart.yaml
- name: Install ossutil
run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
- name: Configure Alibaba Cloud OSSUTIL
run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${ENDPOINT} -c .ossutilconfig
- name: sync cloud to local
run: ./ossutil --config-file .ossutilconfig sync oss://$BUCKET/core $LOCAL_OSS_DIRECTORY
- name: add artifacthub stuff to the repo
run: |
rsync $HELM_CHART/README.md $LEGACY_HELM_CHART/README.md
rsync $HELM_CHART/README.md $VELA_ROLLOUT_HELM_CHART/README.md
sed -i "s/ARTIFACT_HUB_REPOSITORY_ID/$ARTIFACT_HUB_REPOSITORY_ID/g" hack/artifacthub/artifacthub-repo.yml
rsync hack/artifacthub/artifacthub-repo.yml $LOCAL_OSS_DIRECTORY
- name: Package helm charts
run: |
helm package $HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $MINIMAL_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $LEGACY_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $VELA_ROLLOUT_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm repo index --url https://$BUCKET.$ENDPOINT/core $LOCAL_OSS_DIRECTORY
- name: sync local to cloud
run: ./ossutil --config-file .ossutilconfig sync $LOCAL_OSS_DIRECTORY oss://$BUCKET/core -f


@@ -8,11 +8,8 @@ on:
workflow_dispatch: {}
env:
BUCKET: ${{ secrets.OSS_BUCKET }}
ENDPOINT: ${{ secrets.OSS_ENDPOINT }}
ACCESS_KEY: ${{ secrets.OSS_ACCESS_KEY }}
ACCESS_KEY_SECRET: ${{ secrets.OSS_ACCESS_KEY_SECRET }}
ARTIFACT_HUB_REPOSITORY_ID: ${{ secrets.ARTIFACT_HUB_REPOSITORY_ID }}
jobs:
publish-core-images:
@@ -171,90 +168,6 @@ jobs:
ghcr.io/${{ github.repository_owner }}/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
${{ secrets.ACR_DOMAIN }}/oamdev/vela-rollout:${{ steps.get_version.outputs.VERSION }}
publish-charts:
env:
HELM_CHARTS_DIR: charts
HELM_CHART: charts/vela-core
MINIMAL_HELM_CHART: charts/vela-minimal
LEGACY_HELM_CHART: legacy/charts/vela-core-legacy
VELA_ROLLOUT_HELM_CHART: runtime/rollout/charts
LOCAL_OSS_DIRECTORY: .oss/
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@master
- name: Get git revision
id: vars
shell: bash
run: |
echo "::set-output name=git_revision::$(git rev-parse --short HEAD)"
- name: Install Helm
uses: azure/setup-helm@v1
with:
version: v3.4.0
- name: Setup node
uses: actions/setup-node@v2
with:
node-version: '14'
- name: Generate helm doc
run: |
make helm-doc-gen
- name: Prepare legacy chart
run: |
rsync -r $LEGACY_HELM_CHART $HELM_CHARTS_DIR
rsync -r $HELM_CHART/* $LEGACY_HELM_CHART --exclude=Chart.yaml --exclude=crds
- name: Prepare vela chart
run: |
rsync -r $VELA_ROLLOUT_HELM_CHART $HELM_CHARTS_DIR
- uses: oprypin/find-latest-tag@v1
with:
repository: oam-dev/kubevela
releases-only: true
id: latest_tag
- name: Tag helm chart image
run: |
latest_repo_tag=${{ steps.latest_tag.outputs.tag }}
sub="."
major="$(cut -d"$sub" -f1 <<<"$latest_repo_tag")"
minor="$(cut -d"$sub" -f2 <<<"$latest_repo_tag")"
patch="0"
current_repo_tag="$major.$minor.$patch"
image_tag=${GITHUB_REF#refs/tags/}
chart_version=$latest_repo_tag
if [[ ${GITHUB_REF} == "refs/heads/master" ]]; then
image_tag=latest
chart_version=${current_repo_tag}-nightly-build
fi
sed -i "s/latest/${image_tag}/g" $HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $MINIMAL_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $LEGACY_HELM_CHART/values.yaml
sed -i "s/latest/${image_tag}/g" $VELA_ROLLOUT_HELM_CHART/values.yaml
chart_smever=${chart_version#"v"}
sed -i "s/0.1.0/$chart_smever/g" $HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_smever/g" $MINIMAL_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_smever/g" $LEGACY_HELM_CHART/Chart.yaml
sed -i "s/0.1.0/$chart_smever/g" $VELA_ROLLOUT_HELM_CHART/Chart.yaml
- name: Install ossutil
run: wget http://gosspublic.alicdn.com/ossutil/1.7.0/ossutil64 && chmod +x ossutil64 && mv ossutil64 ossutil
- name: Configure Alibaba Cloud OSSUTIL
run: ./ossutil --config-file .ossutilconfig config -i ${ACCESS_KEY} -k ${ACCESS_KEY_SECRET} -e ${ENDPOINT} -c .ossutilconfig
- name: sync cloud to local
run: ./ossutil --config-file .ossutilconfig sync oss://$BUCKET/core $LOCAL_OSS_DIRECTORY
- name: add artifacthub stuff to the repo
run: |
rsync $HELM_CHART/README.md $LEGACY_HELM_CHART/README.md
rsync $HELM_CHART/README.md $VELA_ROLLOUT_HELM_CHART/README.md
sed -i "s/ARTIFACT_HUB_REPOSITORY_ID/$ARTIFACT_HUB_REPOSITORY_ID/g" hack/artifacthub/artifacthub-repo.yml
rsync hack/artifacthub/artifacthub-repo.yml $LOCAL_OSS_DIRECTORY
- name: Package helm charts
run: |
helm package $HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $MINIMAL_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $LEGACY_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm package $VELA_ROLLOUT_HELM_CHART --destination $LOCAL_OSS_DIRECTORY
helm repo index --url https://$BUCKET.$ENDPOINT/core $LOCAL_OSS_DIRECTORY
- name: sync local to cloud
run: ./ossutil --config-file .ossutilconfig sync $LOCAL_OSS_DIRECTORY oss://$BUCKET/core -f
publish-capabilities:
env:
CAPABILITY_BUCKET: kubevela-registry


@@ -123,6 +123,11 @@ jobs:
- name: sync the latest version file
if: ${{ !contains(env.VELA_VERSION,'alpha') && !contains(env.VELA_VERSION,'beta') }}
run: |
LATEST_VERSION=$(curl -fsSl https://static.kubevela.net/binary/vela/latest_version)
verlte() {
[ "$1" = "`echo -e "$1\n$2" | sort -V | head -n1`" ]
}
verlte ${{ env.VELA_VERSION }} $LATEST_VERSION && echo "${{ env.VELA_VERSION }} <= $LATEST_VERSION, skip update" && exit 0
echo ${{ env.VELA_VERSION }} > ./latest_version
./ossutil --config-file .ossutilconfig cp -u ./latest_version oss://$BUCKET/binary/vela/latest_version
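For reference, the verlte gate above compares versions with sort -V and skips the upload whenever the tag being published is not newer than the published latest_version. Purely as an illustration (this is not part of the diff), the same "only move the pointer forward" check could be written in Go with the golang.org/x/mod/semver package; the sample versions below are made up:

package main

import (
	"fmt"

	"golang.org/x/mod/semver" // versions must carry a "v" prefix, as VELA_VERSION does
)

// shouldUpdateLatest mirrors the workflow's verlte gate: only move the
// latest_version pointer forward when the new tag is strictly newer.
func shouldUpdateLatest(newTag, published string) bool {
	return semver.Compare(newTag, published) > 0
}

func main() {
	fmt.Println(shouldUpdateLatest("v1.4.3", "v1.4.4")) // false: skip update
	fmt.Println(shouldUpdateLatest("v1.4.5", "v1.4.4")) // true: upload new pointer
}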


@@ -43,7 +43,7 @@ spec:
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.initMountPath
}]
}] + parameter.extraVolumeMounts
}]
// +patchKey=name
volumes: [{
@@ -97,5 +97,13 @@ spec:
// +usage=Specify the mount path of init container
initMountPath: string
// +usage=Specify the extra volume mounts for the init container
extraVolumeMounts: [...{
// +usage=The name of the volume to be mounted
name: string
// +usage=The mountPath for mount in the init container
mountPath: string
}]
}


@@ -14,12 +14,18 @@ spec:
cue:
template: |
#K8sObject: {
apiVersion: string
kind: string
metadata: {
name: string
...
}
// +usage=The resource type for the Kubernetes objects
resource?: string
// +usage=The group name for the Kubernetes objects
group?: string
// +usage=If specified, fetch the Kubernetes objects with the name, exclusive to labelSelector
name?: string
// +usage=If specified, fetch the Kubernetes objects from the namespace. Otherwise, fetch from the application's namespace.
namespace?: string
// +usage=If specified, fetch the Kubernetes objects from the cluster. Otherwise, fetch from the local cluster.
cluster?: string
// +usage=If specified, fetch the Kubernetes objects according to the label selector, exclusive to name
labelSelector?: [string]: string
...
}
output: parameter.objects[0]
@@ -30,7 +36,12 @@ spec:
}
}
}
parameter: objects: [...#K8sObject]
parameter: {
// +usage=If specified, application will fetch native Kubernetes objects according to the object description
objects?: [...#K8sObject]
// +usage=If specified, the objects in the urls will be loaded.
urls?: [...string]
}
status:
customStatus: |-
if context.output.apiVersion == "apps/v1" && context.output.kind == "Deployment" {


@@ -14,10 +14,114 @@ spec:
schematic:
cue:
template: |
#Privileges: {
// +usage=Specify the verbs to be allowed for the resource
verbs: [...string]
// +usage=Specify the apiGroups of the resource
apiGroups?: [...string]
// +usage=Specify the resources to be allowed
resources?: [...string]
// +usage=Specify the resourceNames to be allowed
resourceNames?: [...string]
// +usage=Specify the resource url to be allowed
nonResourceURLs?: [...string]
// +usage=Specify the scope of the privileges, default to be namespace scope
scope: *"namespace" | "cluster"
}
parameter: {
// +usage=Specify the name of ServiceAccount
name: string
// +usage=Specify whether to create new ServiceAccount or not
create: *false | bool
// +usage=Specify the privileges of the ServiceAccount, if not empty, RoleBindings(ClusterRoleBindings) will be created
privileges?: [...#Privileges]
}
// +patchStrategy=retainKeys
patch: spec: template: spec: serviceAccountName: parameter.name
_clusterPrivileges: [ for p in parameter.privileges if p.scope == "cluster" {p}]
_namespacePrivileges: [ for p in parameter.privileges if p.scope == "namespace" {p}]
outputs: {
if parameter.create {
"service-account": {
apiVersion: "v1"
kind: "ServiceAccount"
metadata: name: parameter.name
}
}
if parameter.privileges != _|_ {
if len(_clusterPrivileges) > 0 {
"cluster-role": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "ClusterRole"
metadata: name: "\(context.namespace):\(parameter.name)"
rules: [ for p in _clusterPrivileges {
verbs: p.verbs
if p.apiGroups != _|_ {
apiGroups: p.apiGroups
}
if p.resources != _|_ {
resources: p.resources
}
if p.resourceNames != _|_ {
resourceNames: p.resourceNames
}
if p.nonResourceURLs != _|_ {
nonResourceURLs: p.nonResourceURLs
}
}]
}
"cluster-role-binding": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "ClusterRoleBinding"
metadata: name: "\(context.namespace):\(parameter.name)"
roleRef: {
apiGroup: "rbac.authorization.k8s.io"
kind: "ClusterRole"
name: "\(context.namespace):\(parameter.name)"
}
subjects: [{
kind: "ServiceAccount"
name: parameter.name
namespace: "\(context.namespace)"
}]
}
}
if len(_namespacePrivileges) > 0 {
role: {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "Role"
metadata: name: parameter.name
rules: [ for p in _namespacePrivileges {
verbs: p.verbs
if p.apiGroups != _|_ {
apiGroups: p.apiGroups
}
if p.resources != _|_ {
resources: p.resources
}
if p.resourceNames != _|_ {
resourceNames: p.resourceNames
}
if p.nonResourceURLs != _|_ {
nonResourceURLs: p.nonResourceURLs
}
}]
}
"role-binding": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "RoleBinding"
metadata: name: parameter.name
roleRef: {
apiGroup: "rbac.authorization.k8s.io"
kind: "Role"
name: parameter.name
}
subjects: [{
kind: "ServiceAccount"
name: parameter.name
}]
}
}
}
}
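The template above splits privileges by scope into _clusterPrivileges and _namespacePrivileges, then emits a ClusterRole/ClusterRoleBinding pair named "<namespace>:<name>" for the former and a Role/RoleBinding pair for the latter, copying only the privilege fields that were actually set. A rough Go paraphrase of the rule-building part, using k8s.io/api/rbac/v1 types (the Privilege struct here is an assumption mirroring the #Privileges schema, not code from the repository):

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

// Privilege mirrors the #Privileges schema in the CUE template above.
type Privilege struct {
	Verbs           []string
	APIGroups       []string
	Resources       []string
	ResourceNames   []string
	NonResourceURLs []string
	Scope           string // "namespace" (default) or "cluster"
}

// rulesFor converts the privileges of one scope into RBAC policy rules,
// like the CUE comprehension over _clusterPrivileges/_namespacePrivileges.
// Unset slices stay nil, matching the template's p.field != _|_ guards.
func rulesFor(privileges []Privilege, scope string) []rbacv1.PolicyRule {
	var rules []rbacv1.PolicyRule
	for _, p := range privileges {
		if p.Scope != scope {
			continue
		}
		rules = append(rules, rbacv1.PolicyRule{
			Verbs:           p.Verbs,
			APIGroups:       p.APIGroups,
			Resources:       p.Resources,
			ResourceNames:   p.ResourceNames,
			NonResourceURLs: p.NonResourceURLs,
		})
	}
	return rules
}

func main() {
	priv := []Privilege{{Verbs: []string{"get", "list"}, APIGroups: []string{""}, Resources: []string{"pods"}, Scope: "cluster"}}
	fmt.Printf("%+v\n", rulesFor(priv, "cluster"))
}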


@@ -82,6 +82,11 @@ spec:
// +usage=The key of the config map to select from. Must be a valid secret key
key: string
}
// +usage=Specify the field reference for env
fieldRef?: {
// +usage=Specify the field path for env
fieldPath: string
}
}
}]


@@ -64,6 +64,9 @@ spec:
{
name: "pvc-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
}
},
@@ -73,6 +76,9 @@ spec:
{
name: "configmap-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -103,6 +109,9 @@ spec:
{
name: "secret-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -133,6 +142,9 @@ spec:
{
name: "emptydir-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -141,12 +153,28 @@ spec:
{
name: "pvc-" + v.name
devicePath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
volumesList: pvcVolumesList + configMapVolumesList + secretVolumesList + emptyDirVolumesList
deDupVolumesArray: [
for val in [
for i, vi in volumesList {
for j, vj in volumesList if j < i && vi.name == vj.name {
_ignore: true
}
vi
},
] if val._ignore == _|_ {
val
},
]
patch: spec: template: spec: {
// +patchKey=name
volumes: pvcVolumesList + configMapVolumesList + secretVolumesList + emptyDirVolumesList
volumes: deDupVolumesArray
containers: [{
// +patchKey=name
@@ -234,6 +262,7 @@ spec:
name: string
mountOnly: *false | bool
mountPath: string
subPath?: string
volumeMode: *"Filesystem" | string
volumeName?: string
accessModes: *["ReadWriteOnce"] | [...string]
@@ -275,6 +304,7 @@ spec:
configMapKey: string
}]
mountPath?: string
subPath?: string
defaultMode: *420 | int
readOnly: *false | bool
data?: {...}
@@ -298,6 +328,7 @@ spec:
secretKey: string
}]
mountPath?: string
subPath?: string
defaultMode: *420 | int
readOnly: *false | bool
stringData?: {...}
@@ -313,6 +344,7 @@ spec:
emptyDir?: [...{
name: string
mountPath: string
subPath?: string
medium: *"" | "Memory"
}]
}
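The deDupVolumesArray comprehension above keeps the first occurrence of each volume name: the inner loop marks vi with _ignore: true whenever an earlier entry vj (j < i) has the same name, and the outer filter drops the marked entries. The same keep-first-by-name semantics, sketched in Go for clarity (the Volume struct is a stand-in, not a type from the diff):

package main

import "fmt"

// Volume is a minimal stand-in for a pod volume entry.
type Volume struct {
	Name   string
	Source string
}

// dedupByName keeps only the first volume seen for each name, matching
// the CUE comprehension's "j < i && vi.name == vj.name" ignore test.
func dedupByName(volumes []Volume) []Volume {
	seen := make(map[string]bool)
	var out []Volume
	for _, v := range volumes {
		if seen[v.Name] {
			continue // a lower-index entry with this name already won
		}
		seen[v.Name] = true
		out = append(out, v)
	}
	return out
}

func main() {
	vols := []Volume{{"pvc-test1", "claimA"}, {"pvc-test1", "claimA"}, {"configmap-test1", "cm"}}
	fmt.Println(dedupByName(vols)) // the duplicate pvc-test1 entry is dropped
}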


@@ -20,7 +20,10 @@ spec:
for v in parameter.volumeMounts.pvc {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -29,7 +32,10 @@ spec:
for v in parameter.volumeMounts.configMap {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -38,7 +44,10 @@ spec:
for v in parameter.volumeMounts.secret {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -47,7 +56,10 @@ spec:
for v in parameter.volumeMounts.emptyDir {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -56,7 +68,10 @@ spec:
for v in parameter.volumeMounts.hostPath {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -119,6 +134,19 @@ spec:
},
] | []
}
volumesList: volumesArray.pvc + volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir + volumesArray.hostPath
deDupVolumesArray: [
for val in [
for i, vi in volumesList {
for j, vj in volumesList if j < i && vi.name == vj.name {
_ignore: true
}
vi
},
] if val._ignore == _|_ {
val
},
]
output: {
apiVersion: "apps/v1"
kind: "Deployment"
@@ -262,7 +290,7 @@ spec:
}
if parameter["volumeMounts"] != _|_ {
volumes: volumesArray.pvc + volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir + volumesArray.hostPath
volumes: deDupVolumesArray
}
}
}
@@ -375,6 +403,7 @@ spec:
pvc?: [...{
name: string
mountPath: string
subPath?: string
// +usage=The name of the PVC
claimName: string
}]
@@ -382,6 +411,7 @@ spec:
configMap?: [...{
name: string
mountPath: string
subPath?: string
defaultMode: *420 | int
cmName: string
items?: [...{
@@ -394,6 +424,7 @@ spec:
secret?: [...{
name: string
mountPath: string
subPath?: string
defaultMode: *420 | int
secretName: string
items?: [...{
@@ -406,12 +437,14 @@ spec:
emptyDir?: [...{
name: string
mountPath: string
subPath?: string
medium: *"" | "Memory"
}]
// +usage=Mount HostPath type volume
hostPath?: [...{
name: string
mountPath: string
subPath?: string
path: string
}]
}


@@ -43,7 +43,7 @@ spec:
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.initMountPath
}]
}] + parameter.extraVolumeMounts
}]
// +patchKey=name
volumes: [{
@@ -97,5 +97,13 @@ spec:
// +usage=Specify the mount path of init container
initMountPath: string
// +usage=Specify the extra volume mounts for the init container
extraVolumeMounts: [...{
// +usage=The name of the volume to be mounted
name: string
// +usage=The mountPath for mount in the init container
mountPath: string
}]
}


@@ -14,12 +14,18 @@ spec:
cue:
template: |
#K8sObject: {
apiVersion: string
kind: string
metadata: {
name: string
...
}
// +usage=The resource type for the Kubernetes objects
resource?: string
// +usage=The group name for the Kubernetes objects
group?: string
// +usage=If specified, fetch the Kubernetes objects with the name, exclusive to labelSelector
name?: string
// +usage=If specified, fetch the Kubernetes objects from the namespace. Otherwise, fetch from the application's namespace.
namespace?: string
// +usage=If specified, fetch the Kubernetes objects from the cluster. Otherwise, fetch from the local cluster.
cluster?: string
// +usage=If specified, fetch the Kubernetes objects according to the label selector, exclusive to name
labelSelector?: [string]: string
...
}
output: parameter.objects[0]
@@ -30,7 +36,12 @@ spec:
}
}
}
parameter: objects: [...#K8sObject]
parameter: {
// +usage=If specified, application will fetch native Kubernetes objects according to the object description
objects?: [...#K8sObject]
// +usage=If specified, the objects in the urls will be loaded.
urls?: [...string]
}
status:
customStatus: |-
if context.output.apiVersion == "apps/v1" && context.output.kind == "Deployment" {


@@ -14,10 +14,114 @@ spec:
schematic:
cue:
template: |
#Privileges: {
// +usage=Specify the verbs to be allowed for the resource
verbs: [...string]
// +usage=Specify the apiGroups of the resource
apiGroups?: [...string]
// +usage=Specify the resources to be allowed
resources?: [...string]
// +usage=Specify the resourceNames to be allowed
resourceNames?: [...string]
// +usage=Specify the resource url to be allowed
nonResourceURLs?: [...string]
// +usage=Specify the scope of the privileges, default to be namespace scope
scope: *"namespace" | "cluster"
}
parameter: {
// +usage=Specify the name of ServiceAccount
name: string
// +usage=Specify whether to create new ServiceAccount or not
create: *false | bool
// +usage=Specify the privileges of the ServiceAccount, if not empty, RoleBindings(ClusterRoleBindings) will be created
privileges?: [...#Privileges]
}
// +patchStrategy=retainKeys
patch: spec: template: spec: serviceAccountName: parameter.name
_clusterPrivileges: [ for p in parameter.privileges if p.scope == "cluster" {p}]
_namespacePrivileges: [ for p in parameter.privileges if p.scope == "namespace" {p}]
outputs: {
if parameter.create {
"service-account": {
apiVersion: "v1"
kind: "ServiceAccount"
metadata: name: parameter.name
}
}
if parameter.privileges != _|_ {
if len(_clusterPrivileges) > 0 {
"cluster-role": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "ClusterRole"
metadata: name: "\(context.namespace):\(parameter.name)"
rules: [ for p in _clusterPrivileges {
verbs: p.verbs
if p.apiGroups != _|_ {
apiGroups: p.apiGroups
}
if p.resources != _|_ {
resources: p.resources
}
if p.resourceNames != _|_ {
resourceNames: p.resourceNames
}
if p.nonResourceURLs != _|_ {
nonResourceURLs: p.nonResourceURLs
}
}]
}
"cluster-role-binding": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "ClusterRoleBinding"
metadata: name: "\(context.namespace):\(parameter.name)"
roleRef: {
apiGroup: "rbac.authorization.k8s.io"
kind: "ClusterRole"
name: "\(context.namespace):\(parameter.name)"
}
subjects: [{
kind: "ServiceAccount"
name: parameter.name
namespace: "\(context.namespace)"
}]
}
}
if len(_namespacePrivileges) > 0 {
role: {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "Role"
metadata: name: parameter.name
rules: [ for p in _namespacePrivileges {
verbs: p.verbs
if p.apiGroups != _|_ {
apiGroups: p.apiGroups
}
if p.resources != _|_ {
resources: p.resources
}
if p.resourceNames != _|_ {
resourceNames: p.resourceNames
}
if p.nonResourceURLs != _|_ {
nonResourceURLs: p.nonResourceURLs
}
}]
}
"role-binding": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "RoleBinding"
metadata: name: parameter.name
roleRef: {
apiGroup: "rbac.authorization.k8s.io"
kind: "Role"
name: parameter.name
}
subjects: [{
kind: "ServiceAccount"
name: parameter.name
}]
}
}
}
}


@@ -82,6 +82,11 @@ spec:
// +usage=The key of the config map to select from. Must be a valid secret key
key: string
}
// +usage=Specify the field reference for env
fieldRef?: {
// +usage=Specify the field path for env
fieldPath: string
}
}
}]


@@ -64,6 +64,9 @@ spec:
{
name: "pvc-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
}
},
@@ -73,6 +76,9 @@ spec:
{
name: "configmap-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -103,6 +109,9 @@ spec:
{
name: "secret-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -133,6 +142,9 @@ spec:
{
name: "emptydir-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -141,12 +153,28 @@ spec:
{
name: "pvc-" + v.name
devicePath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
volumesList: pvcVolumesList + configMapVolumesList + secretVolumesList + emptyDirVolumesList
deDupVolumesArray: [
for val in [
for i, vi in volumesList {
for j, vj in volumesList if j < i && vi.name == vj.name {
_ignore: true
}
vi
},
] if val._ignore == _|_ {
val
},
]
patch: spec: template: spec: {
// +patchKey=name
volumes: pvcVolumesList + configMapVolumesList + secretVolumesList + emptyDirVolumesList
volumes: deDupVolumesArray
containers: [{
// +patchKey=name
@@ -234,6 +262,7 @@ spec:
name: string
mountOnly: *false | bool
mountPath: string
subPath?: string
volumeMode: *"Filesystem" | string
volumeName?: string
accessModes: *["ReadWriteOnce"] | [...string]
@@ -275,6 +304,7 @@ spec:
configMapKey: string
}]
mountPath?: string
subPath?: string
defaultMode: *420 | int
readOnly: *false | bool
data?: {...}
@@ -298,6 +328,7 @@ spec:
secretKey: string
}]
mountPath?: string
subPath?: string
defaultMode: *420 | int
readOnly: *false | bool
stringData?: {...}
@@ -313,6 +344,7 @@ spec:
emptyDir?: [...{
name: string
mountPath: string
subPath?: string
medium: *"" | "Memory"
}]
}


@@ -20,7 +20,10 @@ spec:
for v in parameter.volumeMounts.pvc {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -29,7 +32,10 @@ spec:
for v in parameter.volumeMounts.configMap {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -38,7 +44,10 @@ spec:
for v in parameter.volumeMounts.secret {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -47,7 +56,10 @@ spec:
for v in parameter.volumeMounts.emptyDir {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -56,7 +68,10 @@ spec:
for v in parameter.volumeMounts.hostPath {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -119,6 +134,19 @@ spec:
},
] | []
}
volumesList: volumesArray.pvc + volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir + volumesArray.hostPath
deDupVolumesArray: [
for val in [
for i, vi in volumesList {
for j, vj in volumesList if j < i && vi.name == vj.name {
_ignore: true
}
vi
},
] if val._ignore == _|_ {
val
},
]
output: {
apiVersion: "apps/v1"
kind: "Deployment"
@@ -262,7 +290,7 @@ spec:
}
if parameter["volumeMounts"] != _|_ {
volumes: volumesArray.pvc + volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir + volumesArray.hostPath
volumes: deDupVolumesArray
}
}
}
@@ -375,6 +403,7 @@ spec:
pvc?: [...{
name: string
mountPath: string
subPath?: string
// +usage=The name of the PVC
claimName: string
}]
@@ -382,6 +411,7 @@ spec:
configMap?: [...{
name: string
mountPath: string
subPath?: string
defaultMode: *420 | int
cmName: string
items?: [...{
@@ -394,6 +424,7 @@ spec:
secret?: [...{
name: string
mountPath: string
subPath?: string
defaultMode: *420 | int
secretName: string
items?: [...{
@@ -406,12 +437,14 @@ spec:
emptyDir?: [...{
name: string
mountPath: string
subPath?: string
medium: *"" | "Memory"
}]
// +usage=Mount HostPath type volume
hostPath?: [...{
name: string
mountPath: string
subPath?: string
path: string
}]
}


@@ -25,3 +25,7 @@ spec:
- name: my-mount
mountPath: /test
claimName: myclaim
- name: my-mount
mountPath: /test2
subPath: /sub
claimName: myclaim


@@ -16,8 +16,9 @@ spec:
pvc:
- name: test1
mountPath: /test/mount/pvc
- name: test2
- name: test1
mountPath: /test/mount2/pvc
subPath: /sub
configMap:
- name: test1
mountPath: /test/mount/cm


@@ -18,6 +18,8 @@ package service
import (
"context"
"encoding/base64"
"strings"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -79,5 +81,9 @@ func (v *velaQLServiceImpl) QueryView(ctx context.Context, velaQL string) (*apis
log.Logger.Errorf("decode the velaQL response to json failure %s", err.Error())
return nil, bcode.ErrParseQuery2Json
}
if strings.Contains(velaQL, "collect-logs") {
enc, _ := base64.StdEncoding.DecodeString(resp["logs"].(string))
resp["logs"] = string(enc)
}
return &resp, err
}
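This decode pairs with the CollectLogsInPod change further down in this compare, which base64-encodes the collected log bytes before filling them into the CUE value (per the comment there, raw CUE strings cannot carry the special characters found in logs). A minimal sketch of the round trip, with invented log content:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	raw := []byte("line 1\n\x1b[31merror\x1b[0m\n") // control characters would break a raw CUE string
	enc := base64.StdEncoding.EncodeToString(raw)    // what the query provider stores in outputs.logs
	dec, err := base64.StdEncoding.DecodeString(enc) // what the velaQL API service undoes
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", string(dec))
}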


@@ -20,6 +20,7 @@ import (
"context"
"errors"
"fmt"
"strings"
"time"
"github.com/emicklei/go-restful/v3"
@@ -426,6 +427,11 @@ func (j *jfrogHandlerImpl) handle(ctx context.Context, webhookTrigger *model.App
return nil, err
}
image := fmt.Sprintf("%s/%s:%s", jfrogReq.Data.RepoKey, jfrogReq.Data.ImageName, jfrogReq.Data.Tag)
pathArray := strings.Split(jfrogReq.Data.Path, "/")
if len(pathArray) > 2 {
image = fmt.Sprintf("%s/%s:%s", jfrogReq.Data.RepoKey, strings.Join(pathArray[:len(pathArray)-2], "/"), jfrogReq.Data.Tag)
}
if jfrogReq.Data.URL != "" {
image = fmt.Sprintf("%s/%s", jfrogReq.Data.URL, image)
}
@@ -441,7 +447,7 @@ func (j *jfrogHandlerImpl) handle(ctx context.Context, webhookTrigger *model.App
TriggerType: apisv1.TriggerTypeWebhook,
Force: true,
ImageInfo: &model.ImageInfo{
Type: model.PayloadTypeHarbor,
Type: model.PayloadTypeJFrog,
Resource: &model.ImageResource{
Digest: jfrogReq.Data.Digest,
Tag: jfrogReq.Data.Tag,
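The new path handling recovers nested image names: when the artifact path has more than two segments, everything before the final two (the tag directory and manifest.json) is joined back into the image name instead of relying on Data.ImageName. A worked example with hypothetical webhook values:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical JFrog webhook fields, for illustration only.
	repoKey := "docker-local"
	path := "myorg/myimage/1.0.0/manifest.json"
	tag := "1.0.0"

	image := fmt.Sprintf("%s/%s:%s", repoKey, "myimage", tag) // old behavior used Data.ImageName
	pathArray := strings.Split(path, "/")
	if len(pathArray) > 2 {
		// Drop the trailing tag and manifest segments to recover the nested image name.
		image = fmt.Sprintf("%s/%s:%s", repoKey, strings.Join(pathArray[:len(pathArray)-2], "/"), tag)
	}
	fmt.Println(image) // docker-local/myorg/myimage:1.0.0
}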


@@ -322,7 +322,17 @@ func genClusterCountInfo(num int) string {
return "<10"
case num < 50:
return "<50"
case num < 100:
return "<100"
case num < 150:
return "<150"
case num < 200:
return "<200"
case num < 300:
return "<300"
case num < 500:
return "<500"
default:
return ">=50"
return ">=500"
}
}


@@ -211,8 +211,28 @@ func TestGenClusterCountInfo(t *testing.T) {
res: "<50",
},
{
count: 100,
res: ">=50",
count: 90,
res: "<100",
},
{
count: 137,
res: "<150",
},
{
count: 170,
res: "<200",
},
{
count: 270,
res: "<300",
},
{
count: 400,
res: "<500",
},
{
count: 520,
res: ">=500",
},
}
for _, testcase := range testcases {


@@ -97,21 +97,21 @@ func (wl *Workload) EvalContext(ctx process.Context) error {
}
// EvalStatus eval workload status
func (wl *Workload) EvalStatus(ctx process.Context, cli client.Client, ns string) (string, error) {
func (wl *Workload) EvalStatus(ctx process.Context, cli client.Client, accessor util.NamespaceAccessor) (string, error) {
// if the standard workload is managed by trait always return empty message
if wl.SkipApplyWorkload {
return "", nil
}
return wl.engine.Status(ctx, cli, ns, wl.FullTemplate.CustomStatus, wl.Params)
return wl.engine.Status(ctx, cli, accessor, wl.FullTemplate.CustomStatus, wl.Params)
}
// EvalHealth eval workload health check
func (wl *Workload) EvalHealth(ctx process.Context, client client.Client, namespace string) (bool, error) {
func (wl *Workload) EvalHealth(ctx process.Context, client client.Client, accessor util.NamespaceAccessor) (bool, error) {
// if health of template is not set or standard workload is managed by trait always return true
if wl.FullTemplate.Health == "" || wl.SkipApplyWorkload {
return true, nil
}
return wl.engine.HealthCheck(ctx, client, namespace, wl.FullTemplate.Health)
return wl.engine.HealthCheck(ctx, client, accessor, wl.FullTemplate.Health)
}
// Scope defines the scope of workload
@@ -145,16 +145,16 @@ func (trait *Trait) EvalContext(ctx process.Context) error {
}
// EvalStatus eval trait status
func (trait *Trait) EvalStatus(ctx process.Context, cli client.Client, ns string) (string, error) {
return trait.engine.Status(ctx, cli, ns, trait.CustomStatusFormat, trait.Params)
func (trait *Trait) EvalStatus(ctx process.Context, cli client.Client, accessor util.NamespaceAccessor) (string, error) {
return trait.engine.Status(ctx, cli, accessor, trait.CustomStatusFormat, trait.Params)
}
// EvalHealth eval trait health check
func (trait *Trait) EvalHealth(ctx process.Context, client client.Client, namespace string) (bool, error) {
func (trait *Trait) EvalHealth(ctx process.Context, client client.Client, accessor util.NamespaceAccessor) (bool, error) {
if trait.FullTemplate.Health == "" {
return true, nil
}
return trait.engine.HealthCheck(ctx, client, namespace, trait.HealthCheckPolicy)
return trait.engine.HealthCheck(ctx, client, accessor, trait.HealthCheckPolicy)
}
// Appfile describes application


@@ -47,6 +47,11 @@ func ContextWithUserInfo(ctx context.Context, app *v1beta1.Application) context.
return request.WithUser(ctx, GetUserInfoInAnnotation(&app.ObjectMeta))
}
// ContextClearUserInfo clear user info in context
func ContextClearUserInfo(ctx context.Context) context.Context {
return request.WithUser(ctx, nil)
}
// SetUserInfoInAnnotation set username and group from userInfo into annotations
// it will clear the existing service account annotation in avoid of permission leak
func SetUserInfoInAnnotation(obj *metav1.ObjectMeta, userInfo authv1.UserInfo) {


@@ -51,7 +51,7 @@ func (c *HTTPCmd) Run(meta *registry.Meta) (res interface{}, err error) {
var (
r io.Reader
client = &http.Client{
Transport: &http.Transport{},
Transport: http.DefaultTransport,
Timeout: time.Second * 3,
}
)
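The one-line change above matters because every &http.Transport{} literal owns its own connection pool: each call previously created a fresh pool whose idle keep-alive connections (and the goroutines servicing them) were never reused or torn down, since a zero-value Transport has no idle timeout. Reusing the shared http.DefaultTransport lets connections be pooled process-wide. A sketch of the two patterns:

package main

import (
	"net/http"
	"time"
)

// leakyClient builds a new Transport per call; each Transport keeps its
// own idle-connection pool, so connections (and their read/write loop
// goroutines) pile up instead of being reused. This mirrors the old code.
func leakyClient() *http.Client {
	return &http.Client{Transport: &http.Transport{}, Timeout: 3 * time.Second}
}

// sharedClient reuses the process-wide DefaultTransport, as in the fix,
// so keep-alive connections are pooled and reused across requests.
func sharedClient() *http.Client {
	return &http.Client{Transport: http.DefaultTransport, Timeout: 3 * time.Second}
}

func main() {
	_ = leakyClient()
	_ = sharedClient()
}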


@@ -38,6 +38,7 @@ import (
"github.com/oam-dev/kubevela/pkg/monitor/metrics"
"github.com/oam-dev/kubevela/pkg/multicluster"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/oam/util"
"github.com/oam-dev/kubevela/pkg/resourcekeeper"
)
@@ -215,17 +216,14 @@ func (h *AppHandler) ProduceArtifacts(ctx context.Context, comps []*types.Compon
// nolint
func (h *AppHandler) collectHealthStatus(ctx context.Context, wl *appfile.Workload, appRev *v1beta1.ApplicationRevision, overrideNamespace string) (*common.ApplicationComponentStatus, bool, error) {
namespace := h.app.Namespace
if overrideNamespace != "" {
namespace = overrideNamespace
}
accessor := util.NewApplicationResourceNamespaceAccessor(h.app.Namespace, overrideNamespace)
var (
status = common.ApplicationComponentStatus{
Name: wl.Name,
WorkloadDefinition: wl.FullTemplate.Reference.Definition,
Healthy: true,
Namespace: namespace,
Namespace: accessor.Namespace(),
Cluster: multicluster.ClusterNameInContext(ctx),
}
appName = appRev.Spec.Application.Name
@@ -235,10 +233,10 @@ func (h *AppHandler) collectHealthStatus(ctx context.Context, wl *appfile.Worklo
if wl.CapabilityCategory == types.TerraformCategory {
var configuration terraforv1beta2.Configuration
if err := h.r.Client.Get(ctx, client.ObjectKey{Name: wl.Name, Namespace: namespace}, &configuration); err != nil {
if err := h.r.Client.Get(ctx, client.ObjectKey{Name: wl.Name, Namespace: accessor.Namespace()}, &configuration); err != nil {
if kerrors.IsNotFound(err) {
var legacyConfiguration terraforv1beta1.Configuration
if err := h.r.Client.Get(ctx, client.ObjectKey{Name: wl.Name, Namespace: namespace}, &legacyConfiguration); err != nil {
if err := h.r.Client.Get(ctx, client.ObjectKey{Name: wl.Name, Namespace: accessor.Namespace()}, &legacyConfiguration); err != nil {
return nil, false, errors.WithMessagef(err, "app=%s, comp=%s, check health error", appName, wl.Name)
}
isHealth = setStatus(&status, legacyConfiguration.Status.ObservedGeneration, legacyConfiguration.Generation,
@@ -251,12 +249,12 @@ func (h *AppHandler) collectHealthStatus(ctx context.Context, wl *appfile.Worklo
appRev.Name, configuration.Status.Apply.State, configuration.Status.Apply.Message)
}
} else {
if ok, err := wl.EvalHealth(wl.Ctx, h.r.Client, namespace); !ok || err != nil {
if ok, err := wl.EvalHealth(wl.Ctx, h.r.Client, accessor); !ok || err != nil {
isHealth = false
status.Healthy = false
}
status.Message, err = wl.EvalStatus(wl.Ctx, h.r.Client, namespace)
status.Message, err = wl.EvalStatus(wl.Ctx, h.r.Client, accessor)
if err != nil {
return nil, false, errors.WithMessagef(err, "app=%s, comp=%s, evaluate workload status message error", appName, wl.Name)
}
@@ -264,24 +262,25 @@ func (h *AppHandler) collectHealthStatus(ctx context.Context, wl *appfile.Worklo
var traitStatusList []common.ApplicationTraitStatus
for _, tr := range wl.Traits {
traitOverrideNamespace := overrideNamespace
if tr.FullTemplate.TraitDefinition.Spec.ControlPlaneOnly {
namespace = appRev.GetNamespace()
traitOverrideNamespace = appRev.GetNamespace()
wl.Ctx.SetCtx(context.WithValue(wl.Ctx.GetCtx(), multicluster.ClusterContextKey, multicluster.ClusterLocalName))
}
_accessor := util.NewApplicationResourceNamespaceAccessor(h.app.Namespace, traitOverrideNamespace)
var traitStatus = common.ApplicationTraitStatus{
Type: tr.Name,
Healthy: true,
}
if ok, err := tr.EvalHealth(wl.Ctx, h.r.Client, namespace); !ok || err != nil {
if ok, err := tr.EvalHealth(wl.Ctx, h.r.Client, _accessor); !ok || err != nil {
isHealth = false
traitStatus.Healthy = false
}
traitStatus.Message, err = tr.EvalStatus(wl.Ctx, h.r.Client, namespace)
traitStatus.Message, err = tr.EvalStatus(wl.Ctx, h.r.Client, _accessor)
if err != nil {
return nil, false, errors.WithMessagef(err, "app=%s, comp=%s, trait=%s, evaluate status message error", appName, wl.Name, tr.Name)
}
traitStatusList = append(traitStatusList, traitStatus)
namespace = appRev.GetNamespace()
wl.Ctx.SetCtx(context.WithValue(wl.Ctx.GetCtx(), multicluster.ClusterContextKey, status.Cluster))
}


@@ -41,6 +41,7 @@ import (
"github.com/oam-dev/kubevela/pkg/oam/util"
"github.com/oam-dev/kubevela/pkg/policy/envbinding"
"github.com/oam-dev/kubevela/pkg/utils"
"github.com/oam-dev/kubevela/pkg/velaql/providers/query"
"github.com/oam-dev/kubevela/pkg/workflow/providers"
"github.com/oam-dev/kubevela/pkg/workflow/providers/http"
"github.com/oam-dev/kubevela/pkg/workflow/providers/kube"
@@ -81,6 +82,7 @@ func (h *AppHandler) GenerateApplicationSteps(ctx monitorContext.Context,
terraformProvider.Install(handlerProviders, app, func(comp common.ApplicationComponent) (*appfile.Workload, error) {
return appParser.ParseWorkloadFromRevision(comp, appRev)
})
query.Install(handlerProviders, h.r.Client, nil)
var tasks []wfTypes.TaskRunner
for _, step := range af.WorkflowSteps {


@@ -967,7 +967,7 @@ func (h historiesByComponentRevision) Less(i, j int) bool {
// UpdateApplicationRevisionStatus update application revision status
func (h *AppHandler) UpdateApplicationRevisionStatus(ctx context.Context, appRev *v1beta1.ApplicationRevision, succeed bool, wfStatus *common.WorkflowStatus) {
if appRev == nil {
if appRev == nil || DisableAllApplicationRevision {
return
}
appRev.Status.Succeeded = succeed


@@ -44,6 +44,7 @@ import (
af "github.com/oam-dev/kubevela/pkg/appfile"
"github.com/oam-dev/kubevela/pkg/cue/process"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/oam/util"
)
const (
@@ -478,7 +479,8 @@ func CUEBasedHealthCheck(ctx context.Context, c client.Client, wlRef WorkloadRef
okToCheckTrait = true
return
}
isHealthy, err := wl.EvalHealth(pCtx, c, ns)
accessor := util.NewApplicationResourceNamespaceAccessor(ns, "")
isHealthy, err := wl.EvalHealth(pCtx, c, accessor)
if err != nil {
wlHealth.HealthStatus = StatusUnhealthy
wlHealth.Diagnosis = errors.Wrap(err, errHealthCheck).Error()
@@ -490,7 +492,7 @@ func CUEBasedHealthCheck(ctx context.Context, c client.Client, wlRef WorkloadRef
// TODO(wonderflow): we should add a custom way to let the template say why it's unhealthy, only a bool flag is not enough
wlHealth.HealthStatus = StatusUnhealthy
}
wlHealth.CustomStatusMsg, err = wl.EvalStatus(pCtx, c, ns)
wlHealth.CustomStatusMsg, err = wl.EvalStatus(pCtx, c, accessor)
if err != nil {
wlHealth.Diagnosis = errors.Wrap(err, errHealthCheck).Error()
}
@@ -522,7 +524,8 @@ func CUEBasedHealthCheck(ctx context.Context, c client.Client, wlRef WorkloadRef
traits[i] = tHealth
continue
}
isHealthy, err := tr.EvalHealth(pCtx, c, ns)
accessor := util.NewApplicationResourceNamespaceAccessor("", ns)
isHealthy, err := tr.EvalHealth(pCtx, c, accessor)
if err != nil {
tHealth.HealthStatus = StatusUnhealthy
tHealth.Diagnosis = errors.Wrap(err, errHealthCheck).Error()
@@ -535,7 +538,7 @@ func CUEBasedHealthCheck(ctx context.Context, c client.Client, wlRef WorkloadRef
// TODO(wonderflow): we should add a custom way to let the template say why it's unhealthy, only a bool flag is not enough
tHealth.HealthStatus = StatusUnhealthy
}
tHealth.CustomStatusMsg, err = tr.EvalStatus(pCtx, c, ns)
tHealth.CustomStatusMsg, err = tr.EvalStatus(pCtx, c, accessor)
if err != nil {
tHealth.Diagnosis = errors.Wrap(err, errHealthCheck).Error()
}


@@ -62,8 +62,8 @@ const (
// AbstractEngine defines Definition's Render interface
type AbstractEngine interface {
Complete(ctx process.Context, abstractTemplate string, params interface{}) error
HealthCheck(ctx process.Context, cli client.Client, ns string, healthPolicyTemplate string) (bool, error)
Status(ctx process.Context, cli client.Client, ns string, customStatusTemplate string, parameter interface{}) (string, error)
HealthCheck(ctx process.Context, cli client.Client, accessor util.NamespaceAccessor, healthPolicyTemplate string) (bool, error)
Status(ctx process.Context, cli client.Client, accessor util.NamespaceAccessor, customStatusTemplate string, parameter interface{}) (string, error)
}
type def struct {
@@ -151,7 +151,7 @@ func (wd *workloadDef) Complete(ctx process.Context, abstractTemplate string, pa
return nil
}
func (wd *workloadDef) getTemplateContext(ctx process.Context, cli client.Reader, ns string) (map[string]interface{}, error) {
func (wd *workloadDef) getTemplateContext(ctx process.Context, cli client.Reader, accessor util.NamespaceAccessor) (map[string]interface{}, error) {
var root = initRoot(ctx.BaseContextLabels())
var commonLabels = GetCommonLabels(ctx.BaseContextLabels())
@@ -162,7 +162,7 @@ func (wd *workloadDef) getTemplateContext(ctx process.Context, cli client.Reader
return nil, err
}
// workload main resource will have a unique label("app.oam.dev/resourceType"="WORKLOAD") in per component/app level
object, err := getResourceFromObj(ctx.GetCtx(), componentWorkload, cli, ns, util.MergeMapOverrideWithDst(map[string]string{
object, err := getResourceFromObj(ctx.GetCtx(), componentWorkload, cli, accessor.For(componentWorkload), util.MergeMapOverrideWithDst(map[string]string{
oam.LabelOAMResourceType: oam.ResourceTypeWorkload,
}, commonLabels), "")
if err != nil {
@@ -182,7 +182,7 @@ func (wd *workloadDef) getTemplateContext(ctx process.Context, cli client.Reader
return nil, err
}
// AuxiliaryWorkload will have a unique label("trait.oam.dev/resource"="name of outputs") in per component/app level
object, err := getResourceFromObj(ctx.GetCtx(), traitRef, cli, ns, util.MergeMapOverrideWithDst(map[string]string{
object, err := getResourceFromObj(ctx.GetCtx(), traitRef, cli, accessor.For(componentWorkload), util.MergeMapOverrideWithDst(map[string]string{
oam.TraitTypeLabel: AuxiliaryWorkload,
}, commonLabels), assist.Name)
if err != nil {
@@ -197,11 +197,11 @@ func (wd *workloadDef) getTemplateContext(ctx process.Context, cli client.Reader
}
// HealthCheck address health check for workload
func (wd *workloadDef) HealthCheck(ctx process.Context, cli client.Client, ns string, healthPolicyTemplate string) (bool, error) {
func (wd *workloadDef) HealthCheck(ctx process.Context, cli client.Client, accessor util.NamespaceAccessor, healthPolicyTemplate string) (bool, error) {
if healthPolicyTemplate == "" {
return true, nil
}
templateContext, err := wd.getTemplateContext(ctx, cli, ns)
templateContext, err := wd.getTemplateContext(ctx, cli, accessor)
if err != nil {
return false, errors.WithMessage(err, "get template context")
}
@@ -228,11 +228,11 @@ func checkHealth(templateContext map[string]interface{}, healthPolicyTemplate st
}
// Status get workload status by customStatusTemplate
func (wd *workloadDef) Status(ctx process.Context, cli client.Client, ns string, customStatusTemplate string, parameter interface{}) (string, error) {
func (wd *workloadDef) Status(ctx process.Context, cli client.Client, accessor util.NamespaceAccessor, customStatusTemplate string, parameter interface{}) (string, error) {
if customStatusTemplate == "" {
return "", nil
}
templateContext, err := wd.getTemplateContext(ctx, cli, ns)
templateContext, err := wd.getTemplateContext(ctx, cli, accessor)
if err != nil {
return "", errors.WithMessage(err, "get template context")
}
@@ -417,7 +417,7 @@ func initRoot(contextLabels map[string]string) map[string]interface{} {
return root
}
func (td *traitDef) getTemplateContext(ctx process.Context, cli client.Reader, ns string) (map[string]interface{}, error) {
func (td *traitDef) getTemplateContext(ctx process.Context, cli client.Reader, accessor util.NamespaceAccessor) (map[string]interface{}, error) {
var root = initRoot(ctx.BaseContextLabels())
var commonLabels = GetCommonLabels(ctx.BaseContextLabels())
@@ -431,7 +431,7 @@ func (td *traitDef) getTemplateContext(ctx process.Context, cli client.Reader, n
if err != nil {
return nil, err
}
object, err := getResourceFromObj(ctx.GetCtx(), traitRef, cli, ns, util.MergeMapOverrideWithDst(map[string]string{
object, err := getResourceFromObj(ctx.GetCtx(), traitRef, cli, accessor.For(traitRef), util.MergeMapOverrideWithDst(map[string]string{
oam.TraitTypeLabel: assist.Type,
}, commonLabels), assist.Name)
if err != nil {
@@ -446,11 +446,11 @@ func (td *traitDef) getTemplateContext(ctx process.Context, cli client.Reader, n
}
// Status get trait status by customStatusTemplate
func (td *traitDef) Status(ctx process.Context, cli client.Client, ns string, customStatusTemplate string, parameter interface{}) (string, error) {
func (td *traitDef) Status(ctx process.Context, cli client.Client, accessor util.NamespaceAccessor, customStatusTemplate string, parameter interface{}) (string, error) {
if customStatusTemplate == "" {
return "", nil
}
templateContext, err := td.getTemplateContext(ctx, cli, ns)
templateContext, err := td.getTemplateContext(ctx, cli, accessor)
if err != nil {
return "", errors.WithMessage(err, "get template context")
}
@@ -458,11 +458,11 @@ func (td *traitDef) Status(ctx process.Context, cli client.Client, ns string, cu
}
// HealthCheck address health check for trait
func (td *traitDef) HealthCheck(ctx process.Context, cli client.Client, ns string, healthPolicyTemplate string) (bool, error) {
func (td *traitDef) HealthCheck(ctx process.Context, cli client.Client, accessor util.NamespaceAccessor, healthPolicyTemplate string) (bool, error) {
if healthPolicyTemplate == "" {
return true, nil
}
templateContext, err := td.getTemplateContext(ctx, cli, ns)
templateContext, err := td.getTemplateContext(ctx, cli, accessor)
if err != nil {
return false, errors.WithMessage(err, "get template context")
}


@@ -718,10 +718,10 @@ func TestImports(t *testing.T) {
context: stepSessionID: "3w9qkdgn5w"`
v, err := NewValue(`
import (
"vela/op"
"vela/custom"
)
id: op.context.stepSessionID
id: custom.context.stepSessionID
`+cont, nil, cont)
assert.NilError(t, err)


@@ -953,3 +953,38 @@ func AsController(r *corev1.ObjectReference) metav1.OwnerReference {
ref.Controller = &c
return ref
}
// NamespaceAccessor namespace accessor for resource
type NamespaceAccessor interface {
For(obj client.Object) string
Namespace() string
}
type applicationResourceNamespaceAccessor struct {
applicationNamespace string
overrideNamespace string
}
// For access namespace for resource
func (accessor *applicationResourceNamespaceAccessor) For(obj client.Object) string {
if accessor.overrideNamespace != "" {
return accessor.overrideNamespace
}
if originalNamespace := obj.GetNamespace(); originalNamespace != "" {
return originalNamespace
}
return accessor.applicationNamespace
}
// Namespace the namespace by default
func (accessor *applicationResourceNamespaceAccessor) Namespace() string {
if accessor.overrideNamespace != "" {
return accessor.overrideNamespace
}
return accessor.applicationNamespace
}
// NewApplicationResourceNamespaceAccessor create namespace accessor for resource in application
func NewApplicationResourceNamespaceAccessor(appNs, overrideNs string) NamespaceAccessor {
return &applicationResourceNamespaceAccessor{applicationNamespace: appNs, overrideNamespace: overrideNs}
}
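The accessor added above resolves namespaces with a fixed precedence: the override namespace wins, then (in For only) the namespace already set on the resource, then the application namespace. A small usage sketch, assuming an unstructured object stands in for the client.Object:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/oam-dev/kubevela/pkg/oam/util"
)

func main() {
	obj := &unstructured.Unstructured{}
	obj.SetNamespace("comp-ns") // the namespace recorded on the resource itself

	noOverride := util.NewApplicationResourceNamespaceAccessor("app-ns", "")
	fmt.Println(noOverride.For(obj), noOverride.Namespace()) // comp-ns app-ns

	overridden := util.NewApplicationResourceNamespaceAccessor("app-ns", "env-ns")
	fmt.Println(overridden.For(obj), overridden.Namespace()) // env-ns env-ns
}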


@@ -105,6 +105,7 @@ func (h *resourceKeeper) record(ctx context.Context, manifests []*unstructured.U
}
cfg := newDispatchConfig(options...)
ctx = auth.ContextClearUserInfo(ctx)
if len(rootManifests) != 0 {
rt, err := h.getRootRT(ctx)
if err != nil {


@@ -163,6 +163,10 @@ func listApplicationResourceTrackers(ctx context.Context, cli client.Client, app
}
// ListApplicationResourceTrackers list resource trackers for application with all historyRTs sorted by version number
// rootRT -> The ResourceTracker that records life-long resources. These resources will only be recycled when application is removed.
// currentRT -> The ResourceTracker that tracks the resources used by the latest version of application.
// historyRTs -> The ResourceTrackers that tracks the resources in outdated versions.
// crRT -> The ResourceTracker that tracks the component revisions created by the application.
func ListApplicationResourceTrackers(ctx context.Context, cli client.Client, app *v1beta1.Application) (rootRT *v1beta1.ResourceTracker, currentRT *v1beta1.ResourceTracker, historyRTs []*v1beta1.ResourceTracker, crRT *v1beta1.ResourceTracker, err error) {
metrics.ListResourceTrackerCounter.WithLabelValues("application").Inc()
rts, err := listApplicationResourceTrackers(ctx, cli, app)


@@ -19,20 +19,32 @@ package stdlib
import (
"embed"
"fmt"
"os"
"path/filepath"
"strings"
"cuelang.org/go/cue/build"
"k8s.io/klog/v2"
)
func init() {
var err error
BuiltinImports, err = initBuiltinImports()
if err != nil {
klog.ErrorS(err, "Unable to init builtin imports")
os.Exit(1)
}
}
var (
//go:embed pkgs op.cue ql.cue
fs embed.FS
// BuiltinImports is the builtin imports for cue
BuiltinImports []*build.Instance
)
// GetPackages Get Stdlib packages
func GetPackages(tagTempl string) (map[string]string, error) {
func GetPackages() (map[string]string, error) {
files, err := fs.ReadDir("pkgs")
if err != nil {
return nil, err
@@ -63,16 +75,32 @@ func GetPackages(tagTempl string) (map[string]string, error) {
}
return map[string]string{
"vela/op": opContent + "\n" + tagTempl,
"vela/ql": qlContent + "\n" + tagTempl,
"vela/op": opContent,
"vela/ql": qlContent,
}, nil
}
// AddImportsFor install imports for build.Instance.
func AddImportsFor(inst *build.Instance, tagTempl string) error {
pkgs, err := GetPackages(tagTempl)
inst.Imports = append(inst.Imports, BuiltinImports...)
if tagTempl != "" {
p := &build.Instance{
PkgName: filepath.Base("vela/custom"),
ImportPath: "vela/custom",
}
if err := p.AddFile("-", tagTempl); err != nil {
return err
}
inst.Imports = append(inst.Imports, p)
}
return nil
}
func initBuiltinImports() ([]*build.Instance, error) {
imports := make([]*build.Instance, 0)
pkgs, err := GetPackages()
if err != nil {
return err
return nil, err
}
for path, content := range pkgs {
p := &build.Instance{
@@ -80,9 +108,9 @@ func AddImportsFor(inst *build.Instance, tagTempl string) error {
ImportPath: path,
}
if err := p.AddFile("-", content); err != nil {
return err
return nil, err
}
inst.Imports = append(inst.Imports, p)
imports = append(imports, p)
}
return nil
return imports, nil
}
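This is the core of the CPU win called out in the backport title: the vela/op and vela/ql instances are now parsed once in init() and cached in BuiltinImports, while only the small per-run context payload is built per call as a separate vela/custom instance. The memoize-at-init pattern in isolation (a generic sketch, not the actual stdlib code):

package main

import (
	"fmt"
	"strings"
)

// builtin is computed exactly once at package init, standing in for
// BuiltinImports; previously the equivalent work ran on every request.
var builtin []string

func init() {
	builtin = expensiveParse([]string{"vela/op", "vela/ql"})
}

func expensiveParse(paths []string) []string {
	out := make([]string, 0, len(paths))
	for _, p := range paths {
		out = append(out, strings.ToUpper(p)) // placeholder for the real CUE parsing
	}
	return out
}

// importsFor reuses the cached packages and only builds the per-call part,
// mirroring AddImportsFor's handling of the tagTempl "vela/custom" instance.
func importsFor(tagTempl string) []string {
	imports := append([]string{}, builtin...)
	if tagTempl != "" {
		imports = append(imports, "vela/custom:"+tagTempl)
	}
	return imports
}

func main() {
	fmt.Println(importsFor("context: id: \"xxx\""))
}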


@@ -26,7 +26,7 @@ import (
)
func TestGetPackages(t *testing.T) {
pkgs, err := GetPackages("context: _")
pkgs, err := GetPackages()
assert.NilError(t, err)
var r cue.Runtime
for path, content := range pkgs {
@@ -36,8 +36,8 @@ func TestGetPackages(t *testing.T) {
builder := &build.Instance{}
builder.AddFile("-", `
import "vela/op"
out: op.context`)
import "vela/custom"
out: custom.context`)
err = AddImportsFor(builder, "context: id: \"xxx\"")
assert.NilError(t, err)


@@ -31,7 +31,7 @@
kind: string
}
filter?: {
namespace?: *"" | string
namespace?: string
matchingLabels?: {...}
}
list?: {...}


@@ -86,9 +86,9 @@
limitBytes: *null | int
}
outputs?: {
logs: string
err?: string
info: {
logs?: string
err?: string
info?: {
fromDate: string
toDate: string
}


@@ -17,8 +17,6 @@
package velaql
import (
"io/ioutil"
"path/filepath"
"regexp"
"strconv"
"strings"
@@ -26,6 +24,7 @@ import (
"github.com/pkg/errors"
"github.com/oam-dev/kubevela/pkg/cue/model/value"
"github.com/oam-dev/kubevela/references/common"
)
// QueryView contains query data
@@ -99,7 +98,7 @@ func ParseVelaQL(ql string) (QueryView, error) {
// ParseVelaQLFromPath will parse a velaQL file path to QueryView
func ParseVelaQLFromPath(velaQLViewPath string) (*QueryView, error) {
body, err := ioutil.ReadFile(filepath.Clean(velaQLViewPath))
body, err := common.ReadRemoteOrLocalPath(velaQLViewPath)
if err != nil {
return nil, errors.Errorf("read view file from %s: %v", velaQLViewPath, err)
}


@@ -18,12 +18,12 @@ package query
import (
"bufio"
"bytes"
"context"
"encoding/base64"
"fmt"
"io"
"regexp"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
@@ -397,14 +397,6 @@ func (h *provider) GeneratorServiceEndpoints(wfctx wfContext.Context, v *value.V
return fillQueryResult(v, serviceEndpoints, "list")
}
var (
terminatedContainerNotFoundRegex = regexp.MustCompile("previous terminated container .+ in pod .+ not found")
)
func isTerminatedContainerNotFound(err error) bool {
return err != nil && terminatedContainerNotFoundRegex.MatchString(err.Error())
}
func (h *provider) CollectLogsInPod(ctx wfContext.Context, v *value.Value, act types.Action) error {
cluster, err := v.GetString("cluster")
if err != nil {
@@ -429,27 +421,29 @@ func (h *provider) CollectLogsInPod(ctx wfContext.Context, v *value.Value, act t
cliCtx := multicluster.ContextWithClusterName(context.Background(), cluster)
clientSet, err := kubernetes.NewForConfig(h.cfg)
if err != nil {
return errors.Wrapf(err, "failed to create kubernetes clientset")
return errors.Wrapf(err, "failed to create kubernetes client")
}
var defaultOutputs = make(map[string]interface{})
var errMsg string
podInst, err := clientSet.CoreV1().Pods(namespace).Get(cliCtx, pod, v1.GetOptions{})
if err != nil {
return errors.Wrapf(err, "failed to get pod")
errMsg = fmt.Sprintf("failed to get pod:%s", err.Error())
}
req := clientSet.CoreV1().Pods(namespace).GetLogs(pod, opts)
readCloser, err := req.Stream(cliCtx)
if err != nil && !isTerminatedContainerNotFound(err) {
return errors.Wrapf(err, "failed to get stream logs")
if err != nil {
errMsg = fmt.Sprintf("failed to get stream logs %s", err.Error())
}
r := bufio.NewReader(readCloser)
var b strings.Builder
var readErr error
if err == nil {
if readCloser != nil && podInst != nil {
r := bufio.NewReader(readCloser)
buffer := bytes.NewBuffer(nil)
var readErr error
defer func() {
_ = readCloser.Close()
}()
for {
s, err := r.ReadString('\n')
b.WriteString(s)
buffer.WriteString(s)
if err != nil {
if !errors.Is(err, io.EOF) {
readErr = err
@@ -457,30 +451,34 @@ func (h *provider) CollectLogsInPod(ctx wfContext.Context, v *value.Value, act t
break
}
}
} else {
readErr = err
toDate := v1.Now()
var fromDate v1.Time
// nolint
if opts.SinceTime != nil {
fromDate = *opts.SinceTime
} else if opts.SinceSeconds != nil {
fromDate = v1.NewTime(toDate.Add(time.Duration(-(*opts.SinceSeconds) * int64(time.Second))))
} else {
fromDate = podInst.CreationTimestamp
}
// the cue string cannot support special characters
logs := base64.StdEncoding.EncodeToString(buffer.Bytes())
defaultOutputs = map[string]interface{}{
"logs": logs,
"info": map[string]interface{}{
"fromDate": fromDate,
"toDate": toDate,
},
}
if readErr != nil {
errMsg = readErr.Error()
}
}
toDate := v1.Now()
var fromDate v1.Time
// nolint
if opts.SinceTime != nil {
fromDate = *opts.SinceTime
} else if opts.SinceSeconds != nil {
fromDate = v1.NewTime(toDate.Add(time.Duration(-(*opts.SinceSeconds) * int64(time.Second))))
} else {
fromDate = podInst.CreationTimestamp
if errMsg != "" {
klog.Warningf(errMsg)
defaultOutputs["err"] = errMsg
}
o := map[string]interface{}{
"logs": b.String(),
"info": map[string]interface{}{
"fromDate": fromDate,
"toDate": toDate,
},
}
if readErr != nil {
o["err"] = readErr.Error()
}
return v.FillObject(o, "outputs")
return v.FillObject(defaultOutputs, "outputs")
}
// Install registers handlers to provider discover.
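Because the provider now base64-encodes the log text before filling it into the CUE value (see the comment about special characters above), any consumer of outputs.logs must decode it first; a minimal sketch:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Stand-in for the value read back from outputs.logs.
	encoded := base64.StdEncoding.EncodeToString([]byte("line 1\nline 2\n"))

	decoded, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(decoded))
}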

View File

@@ -30,6 +30,8 @@ import (
"github.com/oam-dev/kubevela/pkg/cue/model"
"github.com/oam-dev/kubevela/pkg/cue/model/value"
"github.com/oam-dev/kubevela/pkg/multicluster"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/oam/util"
wfContext "github.com/oam-dev/kubevela/pkg/workflow/context"
"github.com/oam-dev/kubevela/pkg/workflow/providers"
"github.com/oam-dev/kubevela/pkg/workflow/types"
@@ -90,6 +92,12 @@ func (h *provider) Apply(ctx wfContext.Context, v *value.Value, act types.Action
}
deployCtx := multicluster.ContextWithClusterName(context.Background(), cluster)
deployCtx = auth.ContextWithUserInfo(deployCtx, h.app)
if h.app != nil {
util.AddLabels(workload, map[string]string{
oam.LabelAppName: h.app.Name,
oam.LabelAppNamespace: h.app.Namespace,
})
}
if err := h.apply(deployCtx, cluster, common.WorkflowResourceCreator, workload); err != nil {
return err
}
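A hedged note on what the added labels buy: once every workflow-applied workload is stamped with the application's name and namespace, other components can locate those resources with a plain label selector. A sketch assuming controller-runtime's client; the literal label keys below are assumptions standing in for oam.LabelAppName and oam.LabelAppNamespace:

package appquery

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listAppDeployments lists Deployments carrying the labels added in the
// Apply provider above (illustrative helper, not repository code).
func listAppDeployments(ctx context.Context, c client.Client, appName, appNs string) (*appsv1.DeploymentList, error) {
	var deploys appsv1.DeploymentList
	err := c.List(ctx, &deploys, client.MatchingLabels{
		"app.oam.dev/name":      appName, // assumed value of oam.LabelAppName
		"app.oam.dev/namespace": appNs,   // assumed value of oam.LabelAppNamespace
	})
	return &deploys, err
}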

View File

@@ -265,7 +265,7 @@ func (t *TaskLoader) makeValue(ctx wfContext.Context, templ string, id string, p
}
contextTempl += "\n" + pCtx.ExtendedContextFile()
return value.NewValue(templ+contextTempl, t.pd, contextTempl, value.ProcessScript, value.TagFieldOrder)
return value.NewValue(templ+contextTempl, t.pd, "", value.ProcessScript, value.TagFieldOrder)
}
type executor struct {

View File

@@ -731,8 +731,15 @@ func waitApplicationRunning(k8sClient client.Client, addonName string) error {
return client.IgnoreNotFound(err)
}
phase := app.Status.Phase
if phase == common2.ApplicationRunning {
switch app.Status.Phase {
case common2.ApplicationRunning:
return nil
case common2.ApplicationWorkflowSuspending:
fmt.Printf("Enabling suspend, please run \"vela workflow resume %s -n vela-system\" to continue", pkgaddon.Convert2AppName(addonName))
return nil
case common2.ApplicationWorkflowTerminated:
return errors.Errorf("Enabling failed, please run \"vela status %s -n vela-system\" to check the status of the addon", pkgaddon.Convert2AppName(addonName))
default:
}
timeConsumed := int(time.Since(start).Seconds())
applySpinnerNewSuffix(spinner, fmt.Sprintf("Waiting for the addon application to be running. It is now in phase: %s (timeout %d/%d seconds)...",

View File

@@ -144,13 +144,13 @@ func prepareProviderAddSubCommand(c common.Args, ioStreams cmdutil.IOStreams) ([
if len(os.Args) < 2 || os.Args[1] != "provider" {
return nil, nil
}
ctx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
timeoutCtx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
defer cancel()
k8sClient, err := c.GetClient()
if err != nil {
return nil, err
}
defs, err := getTerraformProviderTypes(ctx, k8sClient)
defs, err := getTerraformProviderTypes(timeoutCtx, k8sClient)
if err == nil {
cmds := make([]*cobra.Command, len(defs))
for i, d := range defs {
@@ -161,7 +161,7 @@ func prepareProviderAddSubCommand(c common.Args, ioStreams cmdutil.IOStreams) ([
Long: fmt.Sprintf("Authenticate Terraform Cloud Provider %s by creating a credential secret and a Terraform Controller Provider", providerType),
Example: fmt.Sprintf("vela provider add %s", providerType),
}
parameters, err := getParameters(ctx, k8sClient, providerType)
parameters, err := getParameters(context.Background(), k8sClient, providerType)
if err != nil {
return nil, err
}
@@ -335,9 +335,9 @@ func prepareProviderDeleteCommand(c common.Args, ioStreams cmdutil.IOStreams) *c
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
timeoutCtx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
defer cancel()
defs, err := getTerraformProviderTypes(ctx, k8sClient)
defs, err := getTerraformProviderTypes(timeoutCtx, k8sClient)
if len(args) < 1 {
errMsg := "must specify a Terraform Cloud Provider type"
if err == nil {
@@ -373,13 +373,13 @@ func prepareProviderDeleteSubCommand(c common.Args, ioStreams cmdutil.IOStreams)
if len(os.Args) < 2 || os.Args[1] != "provider" {
return nil, nil
}
ctx, cancel := context.WithTimeout(context.Background(), time.Minute*1)
timeoutContext, cancel := context.WithTimeout(context.Background(), time.Minute*1)
defer cancel()
k8sClient, err := c.GetClient()
if err != nil {
return nil, err
}
defs, err := getTerraformProviderTypes(ctx, k8sClient)
defs, err := getTerraformProviderTypes(timeoutContext, k8sClient)
if err == nil {
cmds := make([]*cobra.Command, len(defs))
for i, d := range defs {
@@ -390,7 +390,7 @@ func prepareProviderDeleteSubCommand(c common.Args, ioStreams cmdutil.IOStreams)
Long: fmt.Sprintf("Delete Terraform Cloud Provider %s", providerType),
Example: fmt.Sprintf("vela provider delete %s", providerType),
}
parameters, err := getParameters(ctx, k8sClient, providerType)
parameters, err := getParameters(context.Background(), k8sClient, providerType)
if err != nil {
return nil, err
}
@@ -404,7 +404,7 @@ func prepareProviderDeleteSubCommand(c common.Args, ioStreams cmdutil.IOStreams)
if err != nil || name == "" {
return fmt.Errorf("must specify a name for the Terraform Cloud Provider %s", providerType)
}
if err := config.DeleteApplication(ctx, k8sClient, name, true); err != nil {
if err := config.DeleteApplication(context.Background(), k8sClient, name, true); err != nil {
return errors.Wrapf(err, "failed to delete Terraform Cloud Provider %s", name)
}
ioStreams.Infof("Successfully deleted provider %s for %s\n", name, providerType)
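The renames above are more than cosmetic: the one-minute context is only safe for the discovery calls made immediately, while the cobra RunE closures execute whenever the user finally runs the command, long after that deadline. A self-contained sketch of the anti-pattern and the fix (helper names are invented for illustration):

package main

import (
	"context"
	"fmt"
	"time"
)

// discover stands in for getTerraformProviderTypes: it honors ctx.
func discover(ctx context.Context) error { return ctx.Err() }

func main() {
	timeoutCtx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	// Fine: the timeout context is consumed right away, as for the
	// discovery call in the diff.
	fmt.Println("at build time:", discover(timeoutCtx))

	// The two flavors of RunE closure:
	wrong := func() error { return discover(timeoutCtx) }           // captures the dying context
	right := func() error { return discover(context.Background()) } // derives a fresh one

	time.Sleep(100 * time.Millisecond) // the user runs the command much later
	fmt.Println("wrong at run time:", wrong()) // context deadline exceeded
	fmt.Println("right at run time:", right()) // <nil>
}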

View File

@@ -50,12 +50,12 @@ func NewQlCommand(c common.Args, order string, ioStreams util.IOStreams) *cobra.
Short: "Show result of executing velaQL.",
Long: `Show result of executing velaQL, use it like:
vela ql --query "<inner-view-name>{<param1>=<value1>,<param2>=<value2>}"
vela ql --file=./ql.cue
vela ql --file ./ql.cue
`,
Example: `Users can query with a query statement:
vela ql --query "<inner-view-name>{<param1>=<value1>,<param2>=<value2>}"
They can also query by a ql file:
vela ql --file=./ql.cue
vela ql --file ./ql.cue
Example content of ql.cue:
---
@@ -99,7 +99,7 @@ export: "status"
types.TagCommandType: types.TypeApp,
},
}
cmd.Flags().StringVarP(&cueFile, "file", "f", "", "The CUE file path for VelaQL.")
cmd.Flags().StringVarP(&cueFile, "file", "f", "", "The CUE file path for VelaQL; it can be a remote URL.")
cmd.Flags().StringVarP(&querySts, "query", "q", "", "The query statement for VelaQL.")
cmd.SetOut(ioStreams.Out)
return cmd

View File

@@ -55,12 +55,18 @@
}
template: {
#K8sObject: {
apiVersion: string
kind: string
metadata: {
name: string
...
}
// +usage=The resource type for the Kubernetes objects
resource?: string
// +usage=The group name for the Kubernetes objects
group?: string
// +usage=If specified, fetch the Kubernetes objects with the name, mutually exclusive with labelSelector
name?: string
// +usage=If specified, fetch the Kubernetes objects from the namespace. Otherwise, fetch from the application's namespace.
namespace?: string
// +usage=If specified, fetch the Kubernetes objects from the cluster. Otherwise, fetch from the local cluster.
cluster?: string
// +usage=If specified, fetch the Kubernetes objects according to the label selector, mutually exclusive with name
labelSelector?: [string]: string
...
}
@@ -74,6 +80,9 @@ template: {
}
}
parameter: {
objects: [...#K8sObject]
// +usage=If specified, application will fetch native Kubernetes objects according to the object description
objects?: [...#K8sObject]
// +usage=If specified, the objects in the urls will be loaded.
urls?: [...string]
}
}
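A hedged illustration of component properties matching the new schema, serialized from Go; all values are made up, and per the usage comments above name and labelSelector are mutually exclusive:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	props := map[string]any{
		"objects": []map[string]any{{
			"resource":      "deployment",
			"group":         "apps",
			"labelSelector": map[string]string{"app": "demo"},
			// name, namespace and cluster omitted: the schema above then
			// falls back to the application's namespace and the local cluster.
		}},
		"urls": []string{"https://example.com/extra-objects.yaml"},
	}
	out, _ := json.MarshalIndent(props, "", "  ")
	fmt.Println(string(out))
}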

View File

@@ -57,7 +57,10 @@ template: {
for v in parameter.volumeMounts.pvc {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -66,7 +69,10 @@ template: {
for v in parameter.volumeMounts.configMap {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -75,7 +81,10 @@ template: {
for v in parameter.volumeMounts.secret {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -84,7 +93,10 @@ template: {
for v in parameter.volumeMounts.emptyDir {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -93,7 +105,10 @@ template: {
for v in parameter.volumeMounts.hostPath {
{
mountPath: v.mountPath
name: v.name
if v.subPath != _|_ {
subPath: v.subPath
}
name: v.name
}
},
] | []
@@ -160,6 +175,20 @@ template: {
] | []
}
volumesList: volumesArray.pvc + volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir + volumesArray.hostPath
deDupVolumesArray: [
for val in [
for i, vi in volumesList {
for j, vj in volumesList if j < i && vi.name == vj.name {
_ignore: true
}
vi
},
] if val._ignore == _|_ {
val
},
]
output: {
apiVersion: "apps/v1"
kind: "Deployment"
@@ -305,7 +334,7 @@ template: {
}
if parameter["volumeMounts"] != _|_ {
volumes: volumesArray.pvc + volumesArray.configMap + volumesArray.secret + volumesArray.emptyDir + volumesArray.hostPath
volumes: deDupVolumesArray
}
}
}
@@ -421,6 +450,7 @@ template: {
pvc?: [...{
name: string
mountPath: string
subPath?: string
// +usage=The name of the PVC
claimName: string
}]
@@ -428,6 +458,7 @@ template: {
configMap?: [...{
name: string
mountPath: string
subPath?: string
defaultMode: *420 | int
cmName: string
items?: [...{
@@ -440,6 +471,7 @@ template: {
secret?: [...{
name: string
mountPath: string
subPath?: string
defaultMode: *420 | int
secretName: string
items?: [...{
@@ -452,12 +484,14 @@ template: {
emptyDir?: [...{
name: string
mountPath: string
subPath?: string
medium: *"" | "Memory"
}]
// +usage=Mount HostPath type volume
hostPath?: [...{
name: string
mountPath: string
subPath?: string
path: string
}]
}
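The deDupVolumesArray comprehension above is dense: every element whose name already appeared at a lower index gets a hidden _ignore field, and only unmarked elements survive, so the first occurrence of each name wins. The same logic in plain Go, with an illustrative Volume type:

package main

import "fmt"

type Volume struct {
	Name string
	// other fields elided
}

// dedupByName keeps the first volume for each name, mirroring the CUE
// comprehension: an element is dropped iff an earlier index shares its name.
func dedupByName(vols []Volume) []Volume {
	seen := make(map[string]bool, len(vols))
	var out []Volume
	for _, v := range vols {
		if !seen[v.Name] {
			seen[v.Name] = true
			out = append(out, v)
		}
	}
	return out
}

func main() {
	fmt.Println(dedupByName([]Volume{{Name: "data"}, {Name: "cache"}, {Name: "data"}}))
	// [{data} {cache}]
}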

View File

@@ -38,7 +38,7 @@ template: {
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.initMountPath
}]
}] + parameter.extraVolumeMounts
}]
// +patchKey=name
volumes: [{
@@ -92,5 +92,13 @@ template: {
// +usage=Specify the mount path of init container
initMountPath: string
// +usage=Specify the extra volume mounts for the init container
extraVolumeMounts: [...{
// +usage=The name of the volume to be mounted
name: string
// +usage=The mount path in the init container
mountPath: string
}]
}
}

View File

@@ -9,10 +9,115 @@
}
}
template: {
#Privileges: {
// +usage=Specify the verbs to be allowed for the resource
verbs: [...string]
// +usage=Specify the apiGroups of the resource
apiGroups?: [...string]
// +usage=Specify the resources to be allowed
resources?: [...string]
// +usage=Specify the resourceNames to be allowed
resourceNames?: [...string]
// +usage=Specify the resource url to be allowed
nonResourceURLs?: [...string]
// +usage=Specify the scope of the privileges, default to be namespace scope
scope: *"namespace" | "cluster"
}
parameter: {
// +usage=Specify the name of ServiceAccount
name: string
// +usage=Specify whether to create new ServiceAccount or not
create: *false | bool
// +usage=Specify the privileges of the ServiceAccount, if not empty, RoleBindings(ClusterRoleBindings) will be created
privileges?: [...#Privileges]
}
// +patchStrategy=retainKeys
patch: spec: template: spec: serviceAccountName: parameter.name
_clusterPrivileges: [ for p in parameter.privileges if p.scope == "cluster" {p}]
_namespacePrivileges: [ for p in parameter.privileges if p.scope == "namespace" {p}]
outputs: {
if parameter.create {
"service-account": {
apiVersion: "v1"
kind: "ServiceAccount"
metadata: name: parameter.name
}
}
if parameter.privileges != _|_ {
if len(_clusterPrivileges) > 0 {
"cluster-role": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "ClusterRole"
metadata: name: "\(context.namespace):\(parameter.name)"
rules: [ for p in _clusterPrivileges {
verbs: p.verbs
if p.apiGroups != _|_ {
apiGroups: p.apiGroups
}
if p.resources != _|_ {
resources: p.resources
}
if p.resourceNames != _|_ {
resourceNames: p.resourceNames
}
if p.nonResourceURLs != _|_ {
nonResourceURLs: p.nonResourceURLs
}
}]
}
"cluster-role-binding": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "ClusterRoleBinding"
metadata: name: "\(context.namespace):\(parameter.name)"
roleRef: {
apiGroup: "rbac.authorization.k8s.io"
kind: "ClusterRole"
name: "\(context.namespace):\(parameter.name)"
}
subjects: [{
kind: "ServiceAccount"
name: parameter.name
namespace: "\(context.namespace)"
}]
}
}
if len(_namespacePrivileges) > 0 {
"role": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "Role"
metadata: name: parameter.name
rules: [ for p in _namespacePrivileges {
verbs: p.verbs
if p.apiGroups != _|_ {
apiGroups: p.apiGroups
}
if p.resources != _|_ {
resources: p.resources
}
if p.resourceNames != _|_ {
resourceNames: p.resourceNames
}
if p.nonResourceURLs != _|_ {
nonResourceURLs: p.nonResourceURLs
}
}]
}
"role-binding": {
apiVersion: "rbac.authorization.k8s.io/v1"
kind: "RoleBinding"
metadata: name: parameter.name
roleRef: {
apiGroup: "rbac.authorization.k8s.io"
kind: "Role"
name: parameter.name
}
subjects: [{
kind: "ServiceAccount"
name: parameter.name
}]
}
}
}
}
}
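For readers skimming the hidden fields: _clusterPrivileges and _namespacePrivileges simply split the privilege list by scope, which decides whether a ClusterRole/ClusterRoleBinding or a Role/RoleBinding is rendered. The equivalent partition in Go, with an illustrative type:

package main

import "fmt"

type Privilege struct {
	Verbs []string
	Scope string // "namespace" (the default) or "cluster"
}

func partitionByScope(privileges []Privilege) (cluster, namespace []Privilege) {
	for _, p := range privileges {
		if p.Scope == "cluster" {
			cluster = append(cluster, p)
		} else {
			namespace = append(namespace, p)
		}
	}
	return cluster, namespace
}

func main() {
	c, n := partitionByScope([]Privilege{
		{Verbs: []string{"list"}, Scope: "cluster"},
		{Verbs: []string{"get"}, Scope: "namespace"},
	})
	fmt.Println(len(c), len(n)) // 1 1
}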

View File

@@ -77,6 +77,11 @@ template: {
// +usage=The key of the config map to select from. Must be a valid secret key
key: string
}
// +usage=Specify the field reference for env
fieldRef?: {
// +usage=Specify the field path for env
fieldPath: string
}
}
}]
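The added fieldRef maps onto the Kubernetes downward API; a hedged sketch of the EnvVar such a parameter ultimately renders to, using the corev1 types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// What an env entry with fieldRef: {fieldPath: "metadata.name"} becomes.
	env := corev1.EnvVar{
		Name: "POD_NAME",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
		},
	}
	fmt.Printf("%+v\n", env)
}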

View File

@@ -65,6 +65,9 @@ template: {
{
name: "pvc-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
}
},
@@ -75,6 +78,9 @@ template: {
{
name: "configmap-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -108,6 +114,9 @@ template: {
{
name: "secret-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -141,6 +150,9 @@ template: {
{
name: "emptydir-" + v.name
mountPath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
@@ -150,13 +162,30 @@ template: {
{
name: "pvc-" + v.name
devicePath: v.mountPath
if v.subPath != _|_ {
subPath: v.subPath
}
}
},
] | []
volumesList: pvcVolumesList + configMapVolumesList + secretVolumesList + emptyDirVolumesList
deDupVolumesArray: [
for val in [
for i, vi in volumesList {
for j, vj in volumesList if j < i && vi.name == vj.name {
_ignore: true
}
vi
},
] if val._ignore == _|_ {
val
},
]
patch: spec: template: spec: {
// +patchKey=name
volumes: pvcVolumesList + configMapVolumesList + secretVolumesList + emptyDirVolumesList
volumes: deDupVolumesArray
containers: [{
// +patchKey=name
@@ -248,6 +277,7 @@ template: {
name: string
mountOnly: *false | bool
mountPath: string
subPath?: string
volumeMode: *"Filesystem" | string
volumeName?: string
accessModes: *["ReadWriteOnce"] | [...string]
@@ -289,6 +319,7 @@ template: {
configMapKey: string
}]
mountPath?: string
subPath?: string
defaultMode: *420 | int
readOnly: *false | bool
data?: {...}
@@ -312,6 +343,7 @@ template: {
secretKey: string
}]
mountPath?: string
subPath?: string
defaultMode: *420 | int
readOnly: *false | bool
stringData?: {...}
@@ -327,6 +359,7 @@ template: {
emptyDir?: [...{
name: string
mountPath: string
subPath?: string
medium: *"" | "Memory"
}]
}