Compare commits

..

22 Commits

Author SHA1 Message Date
yangsoon
08a1dc5a22 Add package discover refresh when component/trait definition are registered (#1402)
* add discover refresh

* add test

* fix store openapi schema

* fix discovermapper
2021-04-08 20:35:21 +08:00
Jianbo Sun
d2a46322c0 Merge pull request #1434 from chengshiwen/add-short-name
Add short name for crds
2021-04-08 20:15:22 +08:00
chengshiwen
7d31d84ec3 Remove redundant and ambiguous short names 2021-04-08 19:49:28 +08:00
chengshiwen
f59341f667 Add short name for crds 2021-04-08 19:49:23 +08:00
Jian.Li
a446aab46b Fix Bug: strategyUnify not work with close call (#1430) 2021-04-08 18:41:04 +08:00
Shiming Zhang
f2de6de6f8 Remove useless code (#1433)
Signed-off-by: Shiming Zhang <wzshiming@foxmail.com>
2021-04-08 18:39:42 +08:00
guoxudong
0f23f6eb09 stop zh-CN docs sync (#1431)
Co-authored-by: guoxudong <guoxudong.dev@gmial.com>
2021-04-08 17:23:45 +08:00
Lei Zhang (Harry)
473164efbd Finalize the message (#1426) 2021-04-08 15:54:40 +08:00
yangsoon
7dec0afc16 vela-cli support use "vela system cue-packages" to list cue-package (#1417)
* support list cue-package

* add test kube package doc

* refine words

* fix workflow

* fix the docs

Co-authored-by: 天元 <jianbo.sjb@alibaba-inc.com>
2021-04-08 15:54:25 +08:00
Jianbo Sun
a44257e153 fix flaky test (#1427) 2021-04-08 14:34:36 +08:00
yangsoon
1261e2678f add podDisruptive to traitdefinition (#1192)
* add podDisruptive to traitdefinition

* fix docs & example

* fix diff and add docs

* use bool type for podDisruptive

Co-authored-by: 天元 <jianbo.sjb@alibaba-inc.com>
2021-04-08 13:39:03 +08:00
Ryan Zhang
325a4cdb0e Add a new cloneset scale controller (#1301)
* add scale implementation

* fine tune the logic and adjust e2e test

* fix tests and fine tune logic

* try to fix flaky verification

* allow zero size step

* fix scale down check
2021-04-07 11:14:46 -07:00
wyike
b33b6fbead garbage collection for cross-namespace workloads and traits in application layer (#1421)
* add logic for gc cross ns resources

WIP regenerate api and charts

rewrite handle resourceTracker logic

WIP add e2e test logic

add more e2e  test

* WIP refactor sevral funs name and add comments for code

* WIP add more corener case e2e test

* add unit test and fix bug

* WIP refactor handleResouceTracker  logic

* refactor func name aglin with others and add comments

* make generate to fix check-diff ci error

* change resourceTracker status as a subresource
2021-04-07 21:34:48 +08:00
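The garbage-collection commit above diffs the resources an application previously tracked against the ones it currently renders, and deletes the leftovers. A minimal sketch of that diffing step, with a simplified `TypedReference` and a hypothetical `staleResources` helper (names are illustrative, not the actual KubeVela code):

```go
package main

import "fmt"

// TypedReference loosely mirrors the reference type introduced in this
// change: apiVersion/kind/name/namespace identify a cross-namespace object.
type TypedReference struct {
	APIVersion, Kind, Name, Namespace string
}

// staleResources returns the previously tracked resources that are no
// longer rendered by the application and should be garbage collected.
func staleResources(tracked, applied []TypedReference) []TypedReference {
	live := make(map[TypedReference]bool, len(applied))
	for _, r := range applied {
		live[r] = true
	}
	var stale []TypedReference
	for _, r := range tracked {
		if !live[r] {
			stale = append(stale, r)
		}
	}
	return stale
}

func main() {
	tracked := []TypedReference{
		{"apps/v1", "Deployment", "web", "prod"},
		{"v1", "ConfigMap", "old-config", "prod"},
	}
	applied := []TypedReference{
		{"apps/v1", "Deployment", "web", "prod"},
	}
	for _, r := range staleResources(tracked, applied) {
		fmt.Printf("delete %s/%s %s/%s\n", r.APIVersion, r.Kind, r.Namespace, r.Name)
	}
}
```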
Jian.Li
15b1bd2660 Fix the bug that the registered k8s built-in gvk does not exist (#1414)
* fix bug for method exist(gvk)

* check that all built-in k8s resource are registered
2021-04-07 13:06:19 +08:00
yangsoon
8c73ea5d8a fix keda.md (#1420) 2021-04-06 16:07:22 +08:00
Jian.Li
4b25ed4ba1 fix import inner package in the format of third party package path and split test cases (#1412)
* test third party path in cue

* fix typo

* fix pkg mount bug

Co-authored-by: 天元 <jianbo.sjb@alibaba-inc.com>
2021-04-02 13:04:52 +08:00
Jianbo Sun
f47ca0f4da remove no used ingress notes in charts (#1405)
* remove no used ingress notes in charts

* remove helm repo update
2021-04-02 10:58:07 +08:00
Herman Zhu
5b1b054cca refactor(workload) remove all the duplicate fields (#1410)
remove all the duplicate fields in parser above as template contains

fix #1388

Signed-off-by: zhuhuijun <zhuhuijunzhj@gmail.com>
2021-04-02 10:24:27 +08:00
Lei Zhang (Harry)
35376bd396 Reword readme for clearer message (#1408)
Rewording the readme
2021-04-02 09:41:23 +08:00
Lei Zhang (Harry)
06069d3580 Update docs to reflect app delivery idea (#1401) 2021-04-01 15:26:25 +08:00
yangsoon
43057161bf update contributing guide (#1398) 2021-04-01 13:54:56 +08:00
woshicai
50ded65805 Fix: update homebrew before install kubevela client (#1395)
Co-authored-by: charles <charles.cai@sap.com>
2021-04-01 12:45:40 +08:00
120 changed files with 3477 additions and 972 deletions

View File

@@ -142,7 +142,6 @@ docker-push:
e2e-setup:
helm install --create-namespace -n flux-system helm-flux http://oam.dev/catalog/helm-flux2-0.1.0.tgz
helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
helm repo update
helm upgrade --install --create-namespace --namespace vela-system --set image.pullPolicy=IfNotPresent --set image.repository=vela-core-test --set image.tag=$(GIT_COMMIT) --wait kubevela ./charts/vela-core
ginkgo version
ginkgo -v -r e2e/setup

View File

@@ -14,7 +14,7 @@
# KubeVela
KubeVela is the platform engine to create *developer-centric* experience on Kubernetes, in a scalable approach.
KubeVela is a modern application engine that adapts to your application's needs, not the other way around.
## Community
@@ -24,11 +24,11 @@ KubeVela is the platform engine to create *developer-centric* experience on Kube
## What problems does it solve?
Building **developer-centric platforms** with Kubernetes requires higher-level primitives which are out of the scope of Kubernetes itself. Hence, platform teams build abstractions.
Traditional Platform-as-a-Service (PaaS) systems enable easy application deployments and everything just works, but this happiness disappears when your application outgrows the capabilities of your platform.
However, while great in flexibility and extensibility, existing solutions such as IaC (Infrastructure-as-Code) and client-side templating tools all lead to ***Configuration Drift*** (i.e. the generated instances are not in line with the expected configuration), which is a nightmare in production.
KubeVela is a modern application engine whose capabilities are actually Infrastructure-as-Code (IaC) components coded by you or coming from the ecosystem. Think of it as a *Heroku* that is fully programmable to serve your needs as you grow and expand.
KubeVela allows platform teams to create developer-centric abstractions with IaC but maintain them with the battle-tested [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/). Think of it as a plug-in that turns your Kubernetes cluster into a *Heroku* via abstractions designed by yourself.
As a plus, KubeVela leverages [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/) to enforce all those abstractions so they will never leave *configuration drift* (i.e. the running instances are not in line with the expected configuration) in your clusters.
## Getting Started
@@ -38,10 +38,10 @@ KubeVela allows platform teams to create developer-centric abstractions with IaC
## Features
- **Robust, repeatable and extensible approach to create and maintain abstractions** - design your abstractions with [CUE](https://cuelang.org/) or [Helm](https://helm.sh), ship them to end users with `kubectl apply -f`, automatically generate GUI forms, upgrade your abstractions at runtime, and let the Kubernetes controller guarantee determinism of the abstractions with no configuration drift.
- **Generic progressive rollout framework** - built-in rollout framework and strategies to upgrade your microservice regardless of its workload type (e.g. stateless, stateful, or even custom operators etc), seamless integration with observability systems.
- **Multi-environment app delivery model (WIP)** - built-in model to deliver or roll out your apps across multiple environments and/or clusters, seamless integration with Service Mesh for traffic management.
- **Simple and Kubernetes native** - KubeVela is just a simple custom controller, all its app delivery abstractions and features are defined as [Kubernetes Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) so they naturally work with any CI/CD or GitOps tools.
- **Zero-restriction deployment** - design and express platform capabilities with [CUE](https://cuelang.org/) or [Helm](https://helm.sh) per the needs of your application, and let the Kubernetes controller guarantee deployment determinism. GUI forms are automatically generated for capabilities, so even your dashboards are fully extensible.
- **Generic progressive rollout framework** - built-in rollout framework and strategies to upgrade your microservice regardless of its workload type (e.g. stateless, stateful, or even custom operators etc).
- **Multi-cluster multi-revision application deployment** - built-in model to deploy or rollout your apps across hybrid infrastructures, with Service Mesh for traffic shifting.
- **Simple and native** - KubeVela is just a simple Kubernetes custom controller, all its capabilities are defined as [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) so they naturally work with any CI/CD or GitOps tools that work with Kubernetes.
## Documentation

View File

@@ -88,7 +88,7 @@ type ApplicationSpec struct {
// Application is the Schema for the applications API
// +kubebuilder:object:root=true
// +kubebuilder:resource:categories={oam},shortName=apps
// +kubebuilder:resource:categories={oam},shortName=app
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="COMPONENT",type=string,JSONPath=`.spec.components[*].name`
// +kubebuilder:printcolumn:name="TYPE",type=string,JSONPath=`.spec.components[*].type`

View File

@@ -55,7 +55,7 @@ type ApplicationRevisionSpec struct {
// ApplicationRevision is the Schema for the ApplicationRevision API
// +kubebuilder:object:root=true
// +kubebuilder:resource:categories={oam},shortName=apprev;revisions
// +kubebuilder:resource:categories={oam},shortName=apprev
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"
type ApplicationRevision struct {
metav1.TypeMeta `json:",inline"`

View File

@@ -61,7 +61,7 @@ type AppRolloutStatus struct {
// AppRollout is the Schema for the AppRollout API
// +kubebuilder:object:root=true
// +kubebuilder:resource:categories={oam},shortName=approllout;rollout
// +kubebuilder:resource:categories={oam},shortName=approllout
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="TARGET",type=string,JSONPath=`.status.rolloutStatus.rolloutTargetSize`
// +kubebuilder:printcolumn:name="UPGRADED",type=string,JSONPath=`.status.rolloutStatus.upgradedReplicas`

View File

@@ -112,6 +112,10 @@ type TraitDefinitionSpec struct {
// +optional
WorkloadRefPath string `json:"workloadRefPath,omitempty"`
// PodDisruptive specifies whether using the trait will cause the pod to restart or not.
// +optional
PodDisruptive bool `json:"podDisruptive,omitempty"`
// AppliesToWorkloads specifies the list of workload kinds this trait
// applies to. Workload kinds are specified in kind.group/version format,
// e.g. server.core.oam.dev/v1alpha2. Traits that omit this field apply to
@@ -216,7 +220,7 @@ type ScopeDefinitionSpec struct {
// to validate the schema of the scope when it is embedded in an OAM
// ApplicationConfiguration.
// +kubebuilder:printcolumn:JSONPath=".spec.definitionRef.name",name=DEFINITION-NAME,type=string
// +kubebuilder:resource:scope=Namespaced,categories={oam}
// +kubebuilder:resource:scope=Namespaced,categories={oam},shortName=scope
type ScopeDefinition struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
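The new `PodDisruptive` field lets a platform know, before attaching a trait, whether applying it will restart pods. A minimal sketch of how a controller or CLI might consult it (`warnIfDisruptive` is a hypothetical helper, not actual KubeVela code):

```go
package main

import "fmt"

// TraitDefinitionSpec sketches the spec with the new PodDisruptive field.
type TraitDefinitionSpec struct {
	WorkloadRefPath string
	PodDisruptive   bool
}

// warnIfDisruptive returns a warning when the definition marks the trait
// as pod-disruptive, so users know pods will restart when it is applied.
func warnIfDisruptive(name string, spec TraitDefinitionSpec) string {
	if spec.PodDisruptive {
		return fmt.Sprintf("trait %q will restart pods when applied", name)
	}
	return ""
}

func main() {
	fmt.Println(warnIfDisruptive("scaler", TraitDefinitionSpec{PodDisruptive: true}))
}
```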

View File

@@ -67,7 +67,7 @@ type ApplicationSpec struct {
// Application is the Schema for the applications API
// +kubebuilder:storageversion
// +kubebuilder:subresource:status
// +kubebuilder:resource:categories={oam},shortName=apps
// +kubebuilder:resource:categories={oam},shortName=app
// +kubebuilder:printcolumn:name="COMPONENT",type=string,JSONPath=`.spec.components[*].name`
// +kubebuilder:printcolumn:name="TYPE",type=string,JSONPath=`.spec.components[*].type`
// +kubebuilder:printcolumn:name="PHASE",type=string,JSONPath=`.status.status`

View File

@@ -56,7 +56,7 @@ type ApplicationRevisionSpec struct {
// ApplicationRevision is the Schema for the ApplicationRevision API
// +kubebuilder:storageversion
// +kubebuilder:resource:categories={oam},shortName=apprev;revisions
// +kubebuilder:resource:categories={oam},shortName=apprev
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"
type ApplicationRevision struct {
metav1.TypeMeta `json:",inline"`

View File

@@ -63,7 +63,7 @@ type AppRolloutStatus struct {
// AppRollout is the Schema for the AppRollout API
// +kubebuilder:object:root=true
// +kubebuilder:resource:categories={oam},shortName=approllout;rollout
// +kubebuilder:resource:categories={oam},shortName=approllout
// +kubebuilder:subresource:status
// +kubebuilder:storageversion
// +kubebuilder:printcolumn:name="TARGET",type=string,JSONPath=`.status.rolloutTargetSize`

View File

@@ -20,6 +20,7 @@ import (
runtimev1alpha1 "github.com/crossplane/crossplane-runtime/apis/core/v1alpha1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
)
@@ -112,6 +113,10 @@ type TraitDefinitionSpec struct {
// +optional
WorkloadRefPath string `json:"workloadRefPath,omitempty"`
// PodDisruptive specifies whether using the trait will cause the pod to restart or not.
// +optional
PodDisruptive bool `json:"podDisruptive,omitempty"`
// AppliesToWorkloads specifies the list of workload kinds this trait
// applies to. Workload kinds are specified in kind.group/version format,
// e.g. server.core.oam.dev/v1alpha2. Traits that omit this field apply to
@@ -217,7 +222,7 @@ type ScopeDefinitionSpec struct {
// to validate the schema of the scope when it is embedded in an OAM
// ApplicationConfiguration.
// +kubebuilder:printcolumn:JSONPath=".spec.definitionRef.name",name=DEFINITION-NAME,type=string
// +kubebuilder:resource:scope=Namespaced,categories={oam}
// +kubebuilder:resource:scope=Namespaced,categories={oam},shortName=scope
// +kubebuilder:storageversion
type ScopeDefinition struct {
metav1.TypeMeta `json:",inline"`
@@ -236,12 +241,41 @@ type ScopeDefinitionList struct {
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// A ResourceTracker represents a tracker for cross-namespace resources
// +kubebuilder:resource:scope=Cluster,categories={oam}
// +kubebuilder:resource:scope=Cluster,categories={oam},shortName=tracker
type ResourceTracker struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Status ResourceTrackerStatus `json:"status,omitempty"`
}
// ResourceTrackerStatus defines the status of a resourceTracker
type ResourceTrackerStatus struct {
TrackedResources []TypedReference `json:"trackedResources,omitempty"`
}
// A TypedReference refers to an object by Name, Kind, and APIVersion. It is
// commonly used to reference across-namespace objects
type TypedReference struct {
// APIVersion of the referenced object.
APIVersion string `json:"apiVersion"`
// Kind of the referenced object.
Kind string `json:"kind"`
// Name of the referenced object.
Name string `json:"name"`
// Namespace of the objects outside the application namespace.
// +optional
Namespace string `json:"namespace,omitempty"`
// UID of the referenced object.
// +optional
UID types.UID `json:"uid,omitempty"`
}
// +kubebuilder:object:root=true
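With these types, the controller records each cross-namespace resource it applies into `Status.TrackedResources`. A sketch of that bookkeeping with simplified types and a hypothetical `track` method (dedup by identity is an assumption for illustration):

```go
package main

import "fmt"

// TypedReference and ResourceTrackerStatus loosely mirror the new API types.
type TypedReference struct {
	APIVersion, Kind, Name, Namespace string
}

type ResourceTrackerStatus struct {
	TrackedResources []TypedReference
}

// track appends a reference only once, keyed by its identity, so repeated
// reconciles of the same resource do not grow the tracked list.
func (s *ResourceTrackerStatus) track(ref TypedReference) {
	for _, r := range s.TrackedResources {
		if r == ref {
			return
		}
	}
	s.TrackedResources = append(s.TrackedResources, ref)
}

func main() {
	var st ResourceTrackerStatus
	ref := TypedReference{"v1", "Service", "frontend", "shared"}
	st.track(ref)
	st.track(ref) // duplicate from a later reconcile is ignored
	fmt.Println(len(st.TrackedResources))
}
```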

View File

@@ -862,6 +862,7 @@ func (in *ResourceTracker) DeepCopyInto(out *ResourceTracker) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceTracker.
@@ -914,6 +915,26 @@ func (in *ResourceTrackerList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceTrackerStatus) DeepCopyInto(out *ResourceTrackerStatus) {
*out = *in
if in.TrackedResources != nil {
in, out := &in.TrackedResources, &out.TrackedResources
*out = make([]TypedReference, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceTrackerStatus.
func (in *ResourceTrackerStatus) DeepCopy() *ResourceTrackerStatus {
if in == nil {
return nil
}
out := new(ResourceTrackerStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScopeDefinition) DeepCopyInto(out *ScopeDefinition) {
*out = *in
@@ -1141,6 +1162,21 @@ func (in *TraitDefinitionStatus) DeepCopy() *TraitDefinitionStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TypedReference) DeepCopyInto(out *TypedReference) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TypedReference.
func (in *TypedReference) DeepCopy() *TypedReference {
if in == nil {
return nil
}
out := new(TypedReference)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *URIMatch) DeepCopyInto(out *URIMatch) {
*out = *in
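The generated `DeepCopyInto` for `ResourceTrackerStatus` allocates a fresh slice rather than copying the slice header, because `*out = *in` alone would leave both structs sharing one backing array. A self-contained sketch of why that matters (types simplified for illustration):

```go
package main

import "fmt"

type TypedReference struct{ Name string }

type ResourceTrackerStatus struct {
	TrackedResources []TypedReference
}

// DeepCopyInto mirrors the shape of the generated code: allocating a fresh
// slice means the copy does not alias the original backing array.
func (in *ResourceTrackerStatus) DeepCopyInto(out *ResourceTrackerStatus) {
	*out = *in
	if in.TrackedResources != nil {
		out.TrackedResources = make([]TypedReference, len(in.TrackedResources))
		copy(out.TrackedResources, in.TrackedResources)
	}
}

func main() {
	orig := ResourceTrackerStatus{TrackedResources: []TypedReference{{Name: "a"}}}
	var cp ResourceTrackerStatus
	orig.DeepCopyInto(&cp)
	cp.TrackedResources[0].Name = "b"          // mutate the copy...
	fmt.Println(orig.TrackedResources[0].Name) // ...the original is untouched
}
```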

View File

@@ -247,9 +247,13 @@ type RolloutStatus struct {
// Conditions represents the latest available observations of a CloneSet's current state.
runtimev1alpha1.ConditionedStatus `json:",inline"`
// RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification
// RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification
// and does not change until the rollout is restarted
RolloutTargetTotalSize int32 `json:"rolloutTargetSize,omitempty"`
RolloutOriginalSize int32 `json:"rolloutOriginalSize,omitempty"`
// RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification
// and does not change until the rollout is restarted
RolloutTargetSize int32 `json:"rolloutTargetSize,omitempty"`
// NewPodTemplateIdentifier is a string that uniquely represent the new pod template
// each workload type could use different ways to identify that so we cannot compare between resources

View File

@@ -209,7 +209,7 @@ func (r *RolloutStatus) RolloutFailing(reason string) {
// ResetStatus resets the status of the rollout to start from beginning
func (r *RolloutStatus) ResetStatus() {
r.NewPodTemplateIdentifier = ""
r.RolloutTargetTotalSize = -1
r.RolloutTargetSize = -1
r.LastAppliedPodTemplateIdentifier = ""
r.RollingState = LocatingTargetAppState
r.BatchRollingState = BatchInitializingState
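The rename from `RolloutTargetTotalSize` to `RolloutTargetSize`, alongside the new `RolloutOriginalSize`, gives the rollout controller both ends of a scale operation. A sketch of how the pair could be used (`scaleDelta` is a hypothetical helper, not actual KubeVela code):

```go
package main

import "fmt"

// RolloutStatus sketches the two size fields: RolloutOriginalSize is the
// size before the rollout starts, RolloutTargetSize the size it aims for.
type RolloutStatus struct {
	RolloutOriginalSize int32
	RolloutTargetSize   int32
}

// scaleDelta returns how many replicas the rollout still has to add;
// a negative value means the rollout is scaling down.
func scaleDelta(s RolloutStatus) int32 {
	return s.RolloutTargetSize - s.RolloutOriginalSize
}

func main() {
	s := RolloutStatus{RolloutOriginalSize: 3, RolloutTargetSize: 5}
	fmt.Println(scaleDelta(s)) // 2: two replicas to add
}
```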

View File

@@ -42,6 +42,7 @@ type RolloutTraitSpec struct {
// RolloutTrait is the Schema for the RolloutTrait API
// +kubebuilder:object:root=true
// +genclient
// +kubebuilder:resource:categories={oam}
// +kubebuilder:subresource:status
type RolloutTrait struct {
metav1.TypeMeta `json:",inline"`

View File

@@ -16,7 +16,6 @@ spec:
plural: applicationrevisions
shortNames:
- apprev
- revisions
singular: applicationrevision
scope: Namespaced
versions:
@@ -413,8 +412,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
rolloutOriginalSize:
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
rolloutTargetSize:
description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
services:
@@ -757,6 +760,9 @@ spec:
description: Extension is used for extension needs by OAM platform builders
type: object
x-kubernetes-preserve-unknown-fields: true
podDisruptive:
description: PodDisruptive specifies whether using the trait will cause the pod to restart or not.
type: boolean
revisionEnabled:
description: Revision indicates whether a trait is aware of component revision
type: boolean
@@ -1461,8 +1467,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
rolloutOriginalSize:
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
rolloutTargetSize:
description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
services:
@@ -1806,6 +1816,9 @@ spec:
description: Extension is used for extension needs by OAM platform builders
type: object
x-kubernetes-preserve-unknown-fields: true
podDisruptive:
description: PodDisruptive specifies whether using the trait will cause the pod to restart or not.
type: boolean
revisionEnabled:
description: Revision indicates whether a trait is aware of component revision
type: boolean

View File

@@ -26,7 +26,7 @@ spec:
listKind: ApplicationList
plural: applications
shortNames:
- apps
- app
singular: application
scope: Namespaced
versions:
@@ -424,8 +424,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
rolloutOriginalSize:
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
rolloutTargetSize:
description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
services:
@@ -878,8 +882,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
rolloutOriginalSize:
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
rolloutTargetSize:
description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
services:

View File

@@ -16,7 +16,6 @@ spec:
plural: approllouts
shortNames:
- approllout
- rollout
singular: approllout
scope: Namespaced
versions:
@@ -341,8 +340,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
rolloutOriginalSize:
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
rolloutTargetSize:
description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
targetGeneration:
@@ -689,8 +692,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
rolloutOriginalSize:
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
rolloutTargetSize:
description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
targetGeneration:

View File

@@ -14,6 +14,8 @@ spec:
kind: ResourceTracker
listKind: ResourceTrackerList
plural: resourcetrackers
shortNames:
- tracker
singular: resourcetracker
scope: Cluster
versions:
@@ -30,9 +32,40 @@ spec:
type: string
metadata:
type: object
status:
description: ResourceTrackerStatus defines the status of a resourceTracker
properties:
trackedResources:
items:
description: A TypedReference refers to an object by Name, Kind, and APIVersion. It is commonly used to reference across-namespace objects
properties:
apiVersion:
description: APIVersion of the referenced object.
type: string
kind:
description: Kind of the referenced object.
type: string
name:
description: Name of the referenced object.
type: string
namespace:
description: Namespace of the objects outside the application namespace.
type: string
uid:
description: UID of the referenced object.
type: string
required:
- apiVersion
- kind
- name
type: object
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""

View File

@@ -14,6 +14,8 @@ spec:
kind: ScopeDefinition
listKind: ScopeDefinitionList
plural: scopedefinitions
shortNames:
- scope
singular: scopedefinition
scope: Namespaced
versions:

View File

@@ -71,6 +71,9 @@ spec:
description: Extension is used for extension needs by OAM platform builders
type: object
x-kubernetes-preserve-unknown-fields: true
podDisruptive:
description: PodDisruptive specifies whether using the trait will cause the pod to restart or not.
type: boolean
revisionEnabled:
description: Revision indicates whether a trait is aware of component revision
type: boolean
@@ -251,6 +254,9 @@ spec:
description: Extension is used for extension needs by OAM platform builders
type: object
x-kubernetes-preserve-unknown-fields: true
podDisruptive:
description: PodDisruptive specifies whether using the trait will cause the pod to restart or not.
type: boolean
revisionEnabled:
description: Revision indicates whether a trait is aware of component revision
type: boolean

View File

@@ -9,6 +9,8 @@ metadata:
spec:
group: standard.oam.dev
names:
categories:
- oam
kind: RolloutTrait
listKind: RolloutTraitList
plural: rollouttraits
@@ -340,8 +342,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
rolloutOriginalSize:
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
rolloutTargetSize:
description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
format: int32
type: integer
targetGeneration:

View File

@@ -1,21 +1 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "kubevela.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "kubevela.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "kubevela.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "kubevela.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- end }}
Welcome to KubeVela! Enjoy your application shipping journey!

View File

@@ -21,6 +21,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: false
schematic:
cue:
template: |

View File

@@ -13,6 +13,7 @@ spec:
definitionRef:
name: manualscalertraits.core.oam.dev
workloadRefPath: spec.workloadRef
podDisruptive: true
schematic:
cue:
template: |

View File

@@ -1,41 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "kubevela.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "kubevela.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "kubevela.fullname" . }}-test-connection"
labels:
{{- include "kubevela.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test-success
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "kubevela.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never

View File

@@ -38,24 +38,6 @@ securityContext: {}
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
limits:
cpu: 500m

View File

@@ -1,7 +1,6 @@
# Contributing to KubeVela Docs
[Here](https://github.com/oam-dev/kubevela.io) is the source code of [Kubevela website](http://kubevela.io/).
It's built by [Docusaurus 2](https://v2.docusaurus.io/), a modern static website generator.
[Here](https://github.com/oam-dev/kubevela/tree/master/docs) is the source documentation of [Kubevela website](http://kubevela.io/).
Any files modified here will trigger the `check-docs` GitHub Action to run and validate that the docs can be built successfully into the website.
Any changes to these files (`docs/en/*`, `resource/*`, `sidebars.js`) will be submitted to the corresponding locations of the repo
[kubevela.io](https://github.com/oam-dev/kubevela.io). The GitHub Action there will parse the documents and publish them to the KubeVela website automatically.

View File

@@ -4,7 +4,7 @@
# KubeVela
KubeVela is the platform engine to create *PaaS-like* experience on Kubernetes, in a scalable approach.
KubeVela is a modern application engine that adapts to your application's needs, not the other way around.
## Community

View File

@@ -40,152 +40,7 @@ spec:
image: "fluentd"
```
The `type: worker` means the specification of this component (claimed in the following `properties` section) will be enforced by a `ComponentDefinition` object named `worker` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
annotations:
definition.oam.dev/description: "Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
parameter: {
image: string
cmd?: [...string]
}
```
Hence, the `properties` section of `backend` only supports two parameters: `image` and `cmd`; this is enforced by the `parameter` list in the definition's CUE template.
A similar extensible abstraction mechanism also applies to traits.
For example, `type: autoscaler` in `frontend` means its trait specification (i.e. the `properties` section)
will be enforced by a `TraitDefinition` object named `autoscaler` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "configure k8s HPA for Deployment"
name: hpa
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
outputs: hpa: {
apiVersion: "autoscaling/v2beta2"
kind: "HorizontalPodAutoscaler"
metadata: name: context.name
spec: {
scaleTargetRef: {
apiVersion: "apps/v1"
kind: "Deployment"
name: context.name
}
minReplicas: parameter.min
maxReplicas: parameter.max
metrics: [{
type: "Resource"
resource: {
name: "cpu"
target: {
type: "Utilization"
averageUtilization: parameter.cpuUtil
}
}
}]
}
}
parameter: {
min: *1 | int
max: *10 | int
cpuUtil: *50 | int
}
```
The application also has a `sidecar` trait.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add sidecar to the app"
name: sidecar
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |-
patch: {
// +patchKey=name
spec: template: spec: containers: [parameter]
}
parameter: {
name: string
image: string
command?: [...string]
}
```
All the definition objects are expected to be defined and installed by the platform team.
End users only need to focus on the `Application` resource.
## Conventions and "Standard Contract"
After the `Application` resource is applied to the Kubernetes cluster,
the KubeVela runtime will generate and manage the underlying resource instances following the "standard contract" and conventions below.
| Label | Description |
| :--: | :---------: |
|`workload.oam.dev/type=<component definition name>` | The name of its corresponding `ComponentDefinition` |
|`trait.oam.dev/type=<trait definition name>` | The name of its corresponding `TraitDefinition` |
|`app.oam.dev/name=<app name>` | The name of the application it belongs to |
|`app.oam.dev/component=<component name>` | The name of the component it belongs to |
|`trait.oam.dev/resource=<name of trait resource instance>` | The name of trait resource instance |
|`app.oam.dev/appRevision=<name of app revision>` | The name of the application revision it belongs to |
## Run Application
### Deploy the Application
Apply the application YAML above, and you'll get the application started.
@@ -232,3 +87,151 @@ $ kubectl get HorizontalPodAutoscaler frontend
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
frontend Deployment/frontend <unknown>/50% 1 10 1 101m
```
## Under the Hood
In the above sample, `type: worker` means the specification of this component (claimed in the following `properties` section) will be enforced by a `ComponentDefinition` object named `worker` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
annotations:
definition.oam.dev/description: "Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
parameter: {
image: string
cmd?: [...string]
}
```
Hence, the `properties` section of `backend` only supports two parameters: `image` and `cmd`; this is enforced by the `parameter` list in the definition's CUE template.
A similar extensible abstraction mechanism also applies to traits.
For example, `type: autoscaler` in `frontend` means its trait specification (i.e. the `properties` section)
will be enforced by a `TraitDefinition` object named `autoscaler` as below:
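To make this concrete, here is a hypothetical component entry that conforms to the `worker` definition above (the component name and values are illustrative, not from the original sample):

```yaml
# Hypothetical usage sketch: only image and cmd are accepted,
# since they are the only fields in the worker definition's parameter list.
components:
  - name: backend
    type: worker
    properties:
      image: busybox
      cmd: ["sleep", "1000"]
```

Any property not declared in `parameter` would fail the CUE template rendering.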
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "configure k8s HPA for Deployment"
name: hpa
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
outputs: hpa: {
apiVersion: "autoscaling/v2beta2"
kind: "HorizontalPodAutoscaler"
metadata: name: context.name
spec: {
scaleTargetRef: {
apiVersion: "apps/v1"
kind: "Deployment"
name: context.name
}
minReplicas: parameter.min
maxReplicas: parameter.max
metrics: [{
type: "Resource"
resource: {
name: "cpu"
target: {
type: "Utilization"
averageUtilization: parameter.cpuUtil
}
}
}]
}
}
parameter: {
min: *1 | int
max: *10 | int
cpuUtil: *50 | int
}
```
The application also has a `sidecar` trait.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add sidecar to the app"
name: sidecar
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |-
patch: {
// +patchKey=name
spec: template: spec: containers: [parameter]
}
parameter: {
name: string
image: string
command?: [...string]
}
```
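For illustration, a component could attach this trait in its `traits` section. This is a sketch; the sidecar name and image are illustrative:

```yaml
# Hypothetical usage sketch for the sidecar trait defined above
components:
  - name: backend
    type: worker
    properties:
      image: busybox
    traits:
      - type: sidecar
        properties:
          name: logging-sidecar   # illustrative
          image: fluentd          # illustrative
```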
All the definition objects are expected to be declared and installed by the platform team; end users only need to focus on the `Application` resource.
Please note that end users of KubeVela do NOT need to know about definition objects; they learn how to use a given capability with visualized forms (or the JSON schema of parameters if they prefer). Please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) section for how this is achieved.
### Conventions and "Standard Contract"
After the `Application` resource is applied to the Kubernetes cluster,
the KubeVela runtime will generate and manage the underlying resource instances following the "standard contract" and conventions below.
| Label | Description |
| :--: | :---------: |
|`workload.oam.dev/type=<component definition name>` | The name of its corresponding `ComponentDefinition` |
|`trait.oam.dev/type=<trait definition name>` | The name of its corresponding `TraitDefinition` |
|`app.oam.dev/name=<app name>` | The name of the application it belongs to |
|`app.oam.dev/component=<component name>` | The name of the component it belongs to |
|`trait.oam.dev/resource=<name of trait resource instance>` | The name of trait resource instance |
|`app.oam.dev/appRevision=<name of app revision>` | The name of the application revision it belongs to |
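For example, a Deployment generated for a component named `backend` in an application named `website` would carry labels like the following sketch (the application, component, and revision names are illustrative):

```yaml
# Sketch of labels stamped on a generated workload resource
metadata:
  labels:
    workload.oam.dev/type: worker
    app.oam.dev/name: website
    app.oam.dev/component: backend
    app.oam.dev/appRevision: website-v1
```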

View File

@@ -22,14 +22,12 @@ This template based workflow make it possible for platform team enforce best pra
![alt](../resources/what-is-kubevela.png)
Below are the core building blocks in KubeVela that make this happen.
Below are the core concepts in KubeVela that make this happen.
## `Application`
The *Application* is the core API of KubeVela. It allows developers to work with a single artifact to capture the complete application definition with simplified primitives.
The *Application* is the core API of KubeVela. It allows developers to work with a single artifact to capture the complete application deployment with simplified primitives.
### Why Choose `Application` as the Main Abstraction
Having an "application" concept is important to any developer-centric platform to simplify administrative tasks and can serve as an anchor to avoid configuration drifts during operation. Also, as an abstraction object, `Application` provides a much simpler path for on-boarding Kubernetes capabilities without relying on low level details. For example, a developer will be able to model a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim the auto-scaling requirements without referring to the underlying KEDA ScaleObject.
In application delivery platform, having an "application" concept is important to simplify administrative tasks and can serve as an anchor to avoid configuration drifts during operation. Also, it provides a much simpler path for on-boarding Kubernetes capabilities to application delivery process without relying on low level details. For example, a developer will be able to model a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim the auto-scaling requirements without referring to the underlying KEDA ScaleObject.
### Example
@@ -66,15 +64,15 @@ spec:
## Building the Abstraction
Unlike most of the higher level platforms, the `Application` abstraction in KubeVela is fully extensible and does not even have a fixed schema. Instead, it is composed of building blocks (app components and traits etc.) that allow you to onboard platform capabilities to this application definition with your own abstractions.
Unlike most of the higher level abstractions, the `Application` resource in KubeVela is a LEGO-style object and does not even have a fixed schema. Instead, it is composed of building blocks (app components and traits etc.) that allow you to on-board platform capabilities to this application definition via your own abstractions.
The building blocks used to abstract and model platform capabilities are named `ComponentDefinition` and `TraitDefinition`.
### ComponentDefinition
You can think of `ComponentDefinition` as a *template* for a workload type. It contains template, parameterizing and workload characteristic information as a declarative API resource.
`ComponentDefinition` is a pre-defined *template* for the deployable workload. It contains template, parameterizing and workload characteristic information as a declarative API resource.
Hence, the `Application` abstraction essentially declares how users want to **instantiate** given component definitions. Specifically, the `.type` field references the name of installed `ComponentDefinition` and `.properties` are the user set values to instantiate it.
Hence, the `Application` abstraction essentially declares how the user wants to **instantiate** given component definitions in the target cluster. Specifically, the `.type` field references the name of an installed `ComponentDefinition` and `.properties` are the user-set values to instantiate it.
Some typical component definitions are *Long Running Web Service*, *One-time Off Task* or *Redis Database*. All component definitions are expected to be pre-installed in the platform, or provided by component providers such as third-party software vendors.
@@ -82,7 +80,7 @@ Some typical component definitions are *Long Running Web Service*, *One-time Off
Optionally, each component has a `.traits` section that augments the component instance with operational behaviors such as load balancing policy, network ingress routing, auto-scaling policies, or upgrade strategies, etc.
You can think of traits as operational features provided by the platform. To attach a trait to a component instance, the user will use the `.type` field to reference the specific `TraitDefinition`, and the `.properties` field to set property values of the given trait. Similarly, `TraitDefinition` also allows you to define a *template* for operational features.
Traits are operational features provided by the platform. To attach a trait to a component instance, the user will declare the `.type` field to reference the specific `TraitDefinition`, and the `.properties` field to set property values of the given trait. Similarly, `TraitDefinition` also allows you to define a *template* for operational features.
We also refer to component definitions and trait definitions as *"capability definitions"* in KubeVela.
@@ -103,4 +101,4 @@ The overall architecture of KubeVela is shown as below:
![alt](../resources/arch.png)
Specifically, the application controller is responsible for application abstraction and encapsulation (i.e. the controller for `Application` and `Definition`). The rollout controller will handle progressive rollout strategy with the whole application as a unit. The multi-env deployment engine (*currently WIP*) is responsible for deploying the application across multiple clusters and environments.
Specifically, the application controller is responsible for application abstraction and encapsulation (i.e. the controller for `Application` and `Definition`). The rollout controller will handle progressive rollout strategy with the whole application as a unit. The multi-cluster deployment engine is responsible for deploying the application across multiple clusters and environments with traffic shifting and rollout features supported.

View File

@@ -50,7 +50,4 @@ spec:
}}}
}
```
## Limitations
If you update a definition by changing its `metadata.namespace` field, KubeVela will create new resources in the new namespace but will not delete the old resources.
We will fix this limitation in the near future.

View File

@@ -21,6 +21,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
@@ -54,7 +55,12 @@ spec:
}
```
The patch trait above assumes the target component instance has the `spec.template.spec.affinity` field. Hence we need to use `appliesToWorkloads` to enforce that the trait only applies to workload types that have this field.
The patch trait above assumes the target component instance has the `spec.template.spec.affinity` field.
Hence, we need to use `appliesToWorkloads` to enforce that the trait only applies to workload types that have this field.
Another important field is `podDisruptive`. This patch trait patches the pod template field,
so a change to any field of this trait will cause the pod to restart. We should add `podDisruptive` and set it to `true`
to tell users that applying this trait will cause the pod to restart.
Now the users could declare they want to add node affinity rules to the component instance as below:
@@ -112,6 +118,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
@@ -141,6 +148,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
@@ -185,6 +193,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
@@ -235,6 +244,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: false
schematic:
cue:
template: |
@@ -258,7 +268,7 @@ spec:
Injecting system environment variables into the Pod is also a very common use case.
> This case rely on strategy merge patch, so don't forget add `+patchKey=name` as below:
> This case relies on strategy merge patch, so don't forget to add `+patchKey=name` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -271,6 +281,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
@@ -312,6 +323,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |
@@ -358,6 +370,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |

View File

@@ -68,6 +68,7 @@ kind: TraitDefinition
metadata:
name: ingress
spec:
podDisruptive: false
schematic:
cue:
template: |

View File

@@ -118,16 +118,13 @@ These steps will install KubeVela controller and its dependency.
helm install --create-namespace -n vela-system kubevela kubevela/vela-core --version <next_version>-rc-master
```
```console
NAME: kubevela
LAST DEPLOYED: Sat Mar 6 21:03:11 2021
NAMESPACE: vela-system
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace vela-system -l "app.kubernetes.io/name=vela-core,app.kubernetes.io/instance=kubevela" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace vela-system port-forward $POD_NAME 8080:80
NAME: kubevela
LAST DEPLOYED: Thu Apr 1 19:41:30 2021
NAMESPACE: vela-system
STATUS: deployed
REVISION: 1
NOTES:
Welcome to use the KubeVela! Enjoy your shipping application journey!
```
4. Install Kubevela with cert-manager (optional)
@@ -196,6 +193,13 @@ powershell -Command "iwr -useb https://kubevela.io/script/install.ps1 | iex"
<TabItem value="homebrew">
**macOS/Linux**
Update your brew first.
```shell script
brew update
```
Then install kubevela client.
```shell script
brew install kubevela
```

View File

@@ -22,7 +22,7 @@ In the end, developers complain those platforms are too rigid and slow in respon
For platform builders, KubeVela serves as a framework that relieves the pains of building developer focused platforms by doing the following:
- Developer Centric. KubeVela abstracts away the infrastructure level primitives by introducing the *Application* concept as main API, and then building operational features around the applications' needs only.
- Developer Centric. KubeVela abstracts away the infrastructure level primitives by introducing the *Application* concept to capture a full deployment of microservices, and then building operational features around the applications' needs only.
- Extending Natively. The *Application* is composed of modularized building blocks that support [CUELang](https://github.com/cuelang/cue) and [Helm](https://helm.sh) as template engines. This enable you to abstract Kubernetes capabilities in LEGO-style and ship them to end users via plain `kubectl apply -f`. Changes made to the abstraction templates take effect at runtime, neither recompilation nor redeployment of KubeVela is required.

View File

@@ -263,113 +263,147 @@ spec:
type: ClusterIP
```
### Test CUE Template with `Kube` package
## Dry-Run the `Application`
KubeVela automatically generates internal CUE packages for all built-in Kubernetes API resources including CRDs.
You can import them in your CUE templates to simplify the templates and help you do validation.
When the CUE template is good, we can use `vela system dry-run` to dry run and check the rendered resources in a real Kubernetes cluster. This command executes exactly the same render logic as KubeVela's `Application` Controller and outputs the result for you.
There are two ways to import internal `kube` packages.
First, we need to use `mergedef.sh` to merge the definition and cue files.
1. Import them with fixed style: `kube/<apiVersion>` and using it by `Kind`.
```cue
import (
apps "kube/apps/v1"
corev1 "kube/v1"
)
// output is validated by Deployment.
output: apps.#Deployment
outputs: service: corev1.#Service
```
This way is very easy to remember and use because it aligns with the K8s Object usage; you only need to add the prefix `kube/` before `apiVersion`.
However, this way is only supported in KubeVela, so you can only debug and test it with [`vela system dry-run`](#dry-run-the-application).
2. Import them with third-party packages style. You can run `vela system cue-packages` to list all built-in `kube` packages
to see which `third-party packages` are currently supported.
```shell
$ vela system cue-packages
DEFINITION-NAME IMPORT-PATH USAGE
#Deployment k8s.io/apps/v1 Kube Object for apps/v1.Deployment
#Service k8s.io/core/v1 Kube Object for v1.Service
#Secret k8s.io/core/v1 Kube Object for v1.Secret
#Node k8s.io/core/v1 Kube Object for v1.Node
#PersistentVolume k8s.io/core/v1 Kube Object for v1.PersistentVolume
#Endpoints k8s.io/core/v1 Kube Object for v1.Endpoints
#Pod k8s.io/core/v1 Kube Object for v1.Pod
```
In fact, they are all built-in packages, but you can import them with the `import-path` like `third-party packages`.
In this way, you can debug with the `cue` CLI.
#### A workflow to debug with `kube` packages
Here's a workflow that lets you debug and test the CUE template with the `cue` CLI and use **exactly the same CUE template** in KubeVela.
1. Create a test directory and initialize the CUE modules.
```shell
$ mergedef.sh def.yaml def.cue > componentdef.yaml
mkdir cue-debug && cd cue-debug/
cue mod init oam.dev
go mod init oam.dev
touch def.cue
```
Then, let's create an Application named `test-app.yaml`.
2. Download the `third-party packages` by using the `cue` CLI.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: boutique
namespace: default
spec:
components:
- name: frontend
type: microservice
properties:
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
servicePort: 80
containerPort: 8080
env:
PORT: "8080"
cpu: "100m"
memory: "64Mi"
```
In KubeVela, we don't need to download these packages as they're automatically generated from K8s API.
But for local testing, we need to use `cue get go` to fetch Go packages and convert them to CUE-format files.
Dry run the application by using `vela system dry-run`.
So, to use the K8s `Deployment` and `Service`, we need to download the `core` and `apps` Kubernetes modules and convert them to CUE definitions as below:
```shell
$ vela system dry-run -f test-app.yaml -d componentdef.yaml
---
# Application(boutique) -- Component(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: boutique
workload.oam.dev/type: microservice
name: frontend
namespace: default
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
version: v1
spec:
containers:
- env:
- name: PORT
value: "8080"
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
name: frontend
ports:
- containerPort: 8080
resources:
requests:
cpu: 100m
memory: 64Mi
serviceAccountName: default
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
app.oam.dev/component: frontend
app.oam.dev/name: boutique
trait.oam.dev/resource: service
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: frontend
type: ClusterIP
---
cue get go k8s.io/api/core/v1
cue get go k8s.io/api/apps/v1
```
### Import `kube` Package
After that, the module directory will show the following contents:
KubeVela automatically generates internal CUE packages for all built-in Kubernetes API resources, so you can import them in CUE template. This could simplify how you write the template because some default values are already there, and the imported package will help you validate the template.
```shell
├── cue.mod
│ ├── gen
│ │ └── k8s.io
│ │ ├── api
│ │ │ ├── apps
│ │ │ └── core
│ │ └── apimachinery
│ │ └── pkg
│ ├── module.cue
│ ├── pkg
│ └── usr
├── def.cue
├── go.mod
└── go.sum
```
Let's try to define a template with the help of the `kube` package:
The package import path in CUE template should be:
```cue
import (
apps "kube/apps/v1"
corev1 "kube/v1"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
)
```
3. Refactor the directory hierarchy.
Our goal is to test the template locally and use the same template in KubeVela.
So we need to refactor our local CUE module directories a bit to align with the import paths provided by KubeVela.
Copy `apps` and `core` from `cue.mod/gen/k8s.io/api` to `cue.mod/gen/k8s.io`.
(Note that we should keep the source directories `apps` and `core` in `gen/k8s.io/api` to avoid package dependency issues.)
```bash
cp -r cue.mod/gen/k8s.io/api/apps cue.mod/gen/k8s.io
cp -r cue.mod/gen/k8s.io/api/core cue.mod/gen/k8s.io
```
The modified module directory should look like:
```shell
├── cue.mod
│ ├── gen
│ │ └── k8s.io
│ │ ├── api
│ │ │ ├── apps
│ │ │ └── core
│ │ ├── apimachinery
│ │ │ └── pkg
│ │ ├── apps
│ │ └── core
│ ├── module.cue
│ ├── pkg
│ └── usr
├── def.cue
├── go.mod
└── go.sum
```
So, you can import the packages using the following paths, which align with KubeVela:
```cue
import (
apps "k8s.io/apps/v1"
corev1 "k8s.io/core/v1"
)
```
4. Test and Run.
Finally, we can test the CUE template that uses the `kube` package.
```cue
import (
apps "k8s.io/apps/v1"
corev1 "k8s.io/core/v1"
)
// output is validated by Deployment.
@@ -467,16 +501,166 @@ parameter: {
cpu?: string
memory?: string
}
// mock context data
context: {
name: "test"
}
// mock parameter data
parameter: {
image: "test-image"
servicePort: 8000
env: {
"HELLO": "WORLD"
}
}
```
Then merge them.
Use `cue export` to see the export result.
```shell
mergedef.sh def.yaml def.cue > componentdef.yaml
$ cue export def.cue --out yaml
output:
metadata:
name: test
namespace: default
spec:
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
version: v1
spec:
terminationGracePeriodSeconds: 30
containers:
- name: test
image: test-image
ports:
- containerPort: 8000
env:
- name: HELLO
value: WORLD
resources:
requests: {}
outputs:
service:
metadata:
name: test
labels:
app: test
spec:
selector:
app: test
ports:
- port: 8000
targetPort: 8000
parameter:
version: v1
image: test-image
servicePort: 8000
podShutdownGraceSeconds: 30
env:
HELLO: WORLD
context:
name: test
```
And dry run to see the rendered resources:
## Dry-Run the `Application`
When the CUE template is good, we can use `vela system dry-run` to dry run and check the rendered resources in a real Kubernetes cluster. This command executes exactly the same render logic as KubeVela's `Application` Controller and outputs the result for you.
First, we need to use `mergedef.sh` to merge the definition and cue files.
```shell
vela system dry-run -f test-app.yaml -d componentdef.yaml
$ mergedef.sh def.yaml def.cue > componentdef.yaml
```
Then, let's create an Application named `test-app.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: boutique
namespace: default
spec:
components:
- name: frontend
type: microservice
properties:
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
servicePort: 80
containerPort: 8080
env:
PORT: "8080"
cpu: "100m"
memory: "64Mi"
```
Dry run the application by using `vela system dry-run`.
```shell
$ vela system dry-run -f test-app.yaml -d componentdef.yaml
---
# Application(boutique) -- Component(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: boutique
workload.oam.dev/type: microservice
name: frontend
namespace: default
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
version: v1
spec:
containers:
- env:
- name: PORT
value: "8080"
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
name: frontend
ports:
- containerPort: 8080
resources:
requests:
cpu: 100m
memory: 64Mi
serviceAccountName: default
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
app.oam.dev/component: frontend
app.oam.dev/name: boutique
trait.oam.dev/resource: service
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: frontend
type: ClusterIP
---
```

View File

@@ -2,8 +2,7 @@
title: Definition CRD
---
This documentation explains how to register and manage available *components* and *traits* in your platform with
`ComponentDefinition` and `TraitDefinition`, so end users could instantiate and "assemble" them into an `Application`.
This documentation explains `ComponentDefinition` and `TraitDefinition` in detail.
> All definition objects are expected to be maintained and installed by the platform team; think of them as *capability providers* in your platform.
@@ -76,7 +75,8 @@ spec:
- webservice
conflictsWith:
- service
workloadRefPath: spec.workloadRef
workloadRefPath: spec.workloadRef
podDisruptive: false
```
Let's explain them in detail.
@@ -116,12 +116,20 @@ If this field is omitted, it means this trait is NOT conflicting with any traits
##### `.spec.workloadRefPath`
This field defines the field path of the trait which is used to store the reference of the workload to which the trait is applied.
- It accepts a string as value, e.g., `spec.workloadRef`.
- It accepts a string as value, e.g., `spec.workloadRef`.
If this field is set, KubeVela core will automatically fill the workload reference into the target field of the trait. The trait controller can then get the workload reference from the trait later. So this field usually accompanies traits whose controllers rely on the workload reference at runtime.
Please check [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml) trait as a demonstration of how to set this field.
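As a sketch, with `workloadRefPath: spec.workloadRef` set in the trait definition, KubeVela core would fill the trait instance roughly as below (the workload name is illustrative):

```yaml
# Sketch: workload reference auto-filled by KubeVela core at spec.workloadRef
spec:
  workloadRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend   # illustrative workload name
```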
##### `.spec.podDisruptive`
This field defines whether adding or updating the trait will disrupt the pod.
In this example the answer is no, so the field is `false`; the pod will not be affected when the trait is added or updated.
If the field is `true`, adding or updating the trait will disrupt and restart the pod.
By default, the value is `false`, which means this trait will not affect the pod.
Please take care of this field; it is really important and useful for serious large-scale production scenarios.
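The field sits at the top level of the trait definition's `spec`, as in this minimal sketch (the trait name and patch are illustrative):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: my-patch-trait   # illustrative
spec:
  podDisruptive: true    # this trait patches the pod template, so pods restart
  schematic:
    cue:
      template: |
        patch: spec: template: metadata: labels: "my-label": parameter.value
        parameter: value: string
```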
### Capability Encapsulation and Abstraction
The templating and parameterizing of given capability are defined in `spec.schematic` field. For example, below is the full definition of *Web Service* type in KubeVela:

View File

@@ -69,7 +69,7 @@ In this step, we will define the schematic of KEDA based autoscaling trait, i.e.
schematic:
cue:
template: |-
outputs: cpu-scaler: {
outputs: kedaScaler: {
apiVersion: "keda.sh/v1alpha1"
kind: "ScaledObject"
metadata: {
@@ -80,17 +80,17 @@ schematic:
name: context.name
}
triggers: [{
type: paramter.type
type: parameter.triggerType
metadata: {
type: "Utilization"
value: paramter.value
value: parameter.value
}
}]
}
}
paramter: {
parameter: {
// +usage=Types of triggering application elastic scaling, Optional: cpu, memory
type: string
triggerType: string
// +usage=Value to trigger scaling actions, represented as a percentage of the requested value of the resource for the pods. like: "60"(60%)
value: string
}

View File

@@ -62,8 +62,6 @@ Below is a form rendered with `form-render`:
![](../../resources/json-schema-render-example.jpg)
> Hence, end users of KubeVela do NOT need to learn about definition object to use a capability, they always work with a visualized form or learn the generated schema if they want.
# What's Next
It's by design that KubeVela supports multiple ways to define the schematic. Hence, we will explain the `.schematic` field in detail in the following guides.

View File

@@ -6,15 +6,15 @@ This documentation will explain what is `Application` object and why you need it
## Motivation
Encapsulation is probably the most widely used approach to enable easier developer experience and allow users to deliver the whole application resources as one unit. For example, many tools today encapsulate Kubernetes *Deployment* and *Service* into a *Web Service* module, and then instantiate this module by simply providing parameters such as *image=foo* and *ports=80*. This pattern can be found in cdk8s (e.g. [`web-service.ts` ](https://github.com/awslabs/cdk8s/blob/master/examples/typescript/web-service/web-service.ts)), CUE (e.g. [`kube.cue`](https://github.com/cuelang/cue/blob/b8b489251a3f9ea318830788794c1b4a753031c0/doc/tutorial/kubernetes/quick/services/kube.cue#L70)), and many widely used Helm charts (e.g. [Web Service](https://docs.bitnami.com/tutorials/create-your-first-helm-chart/)).
Encapsulation based abstraction is probably the most widely used approach to enable easier developer experience and allow users to deliver the whole application resources as one unit. For example, many tools today encapsulate Kubernetes *Deployment* and *Service* into a *Web Service* module, and then instantiate this module by simply providing parameters such as *image=foo* and *ports=80*. This pattern can be found in cdk8s (e.g. [`web-service.ts` ](https://github.com/awslabs/cdk8s/blob/master/examples/typescript/web-service/web-service.ts)), CUE (e.g. [`kube.cue`](https://github.com/cuelang/cue/blob/b8b489251a3f9ea318830788794c1b4a753031c0/doc/tutorial/kubernetes/quick/services/kube.cue#L70)), and many widely used Helm charts (e.g. [Web Service](https://docs.bitnami.com/tutorials/create-your-first-helm-chart/)).
Despite the efficiency and extensibility in defining abstractions with encapsulation, both DSL tools (e.g. cdk8s , CUE and Helm templating) are mostly used as client side tools and can be barely used as a platform level building block. This leaves platform builders either have to create restricted/inextensible abstractions, or re-invent the wheels of what DSL/templating has already been doing great.
Despite the efficiency and extensibility in defining abstractions, both DSL tools (e.g. cdk8s , CUE and Helm templating) are mostly used as client side tools and can be barely used as a platform level building block. This leaves platform builders either have to create restricted/inextensible abstractions, or re-invent the wheels of what DSL/templating has already been doing great.
KubeVela allows platform teams to create developer-centric abstractions with DSL/templating but maintain them with the battle tested [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/).
## Abstraction
## Application
First of all, KubeVela introduces an `Application` CRD as its main abstraction that could capture all needed resources to run the application, and exposes configurable parameters for end users. Every application is composed by multiple components, and each of them is defined by workload specification and traits (operational behaviors). For example:
First of all, KubeVela introduces an `Application` CRD as its main abstraction that could capture a full application deployment. To model the modern microservices, every application is composed by multiple components with attached traits (operational behaviors). For example:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -43,21 +43,11 @@ spec:
bucket: "my-bucket"
```
## Encapsulation
-The schema of *component* and *trait* specification in this application is actually enforced by another set of building block objects named *"definitions"*, for example, `ComponentDefinition` and `TraitDefinition`.
+While `Application` provides an abstraction to deploy apps, the schema of each *component* and *trait* specification in this application is actually enforced by another set of building block objects named *"definitions"*, for example, `ComponentDefinition` and `TraitDefinition`.
-`XxxDefinition` resources are designed to leverage encapsulation solutions such as `CUE`, `Helm` and `Terraform modules` to template and parameterize Kubernetes resources as well as cloud services. This enables users to assemble templated capabilities into an `Application` by simply setting parameters. In the `application-sample` above, it models a Kubernetes Deployment (component `foo`) to run container and a Alibaba Cloud OSS bucket (component `bar`) alongside.
+Definitions are designed to leverage encapsulation technologies such as `CUE`, `Helm` and `Terraform modules` to template and parameterize Kubernetes resources as well as cloud services. This enables users to assemble templated capabilities into an `Application` by simply setting parameters. In the `application-sample` above, it models a Kubernetes Deployment (component `foo`) to run a container, with an Alibaba Cloud OSS bucket (component `bar`) alongside.
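To make the idea concrete, here is a minimal, hypothetical `ComponentDefinition` sketch (the `sketch-worker` name and its parameters are illustrative, not taken from this changeset), showing how a CUE template encapsulates a Deployment behind a couple of parameters:

```yaml
# Hypothetical sketch: a CUE-based ComponentDefinition that hides a
# Kubernetes Deployment behind two parameters (selector/labels omitted
# for brevity; this is not a definition shipped by these commits).
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: sketch-worker        # illustrative name
  namespace: vela-system
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
        output: {
          apiVersion: "apps/v1"
          kind:       "Deployment"
          spec: {
            replicas: parameter.replicas
            template: spec: containers: [{
              name:  context.name
              image: parameter.image
            }]
          }
        }
        parameter: {
          // +usage=Container image to run
          image: string
          // +usage=Number of replicas
          replicas: *1 | int
        }
```

End users would then reference `type: sketch-worker` in an `Application` component and supply only `image` and `replicas`.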
-This encapsulation and abstraction mechanism is the key for KubeVela to provide *PaaS-like* experience (*i.e. app-centric, higher level abstractions, self-service operations etc*) to end users.
-### No Configuration Drift
-Many of the existing encapsulation solutions today work at client side, for example, DSL/IaC (Infrastructure as Code) tools and Helm. This approach is easy to be adopted and has less invasion in the user cluster.
-But client side abstractions, though light-weighted, always lead to an issue called infrastructure/configuration drift, i.e. the generated component instances are not in line with the expected configuration. This could be caused by incomplete coverage, less-than-perfect processes or emergency changes.
-Hence, the encapsulation engine of KubeVela is designed to be a [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/) and leverage Kubernetes control plane to eliminate the issue of configuration drifting, and still keeps the flexibly and velocity enabled by existing encapsulation solutions (e.g. DSL/IaC and templating).
+This abstraction mechanism is the key for KubeVela to provide a *PaaS-like* experience (*i.e. app-centric, higher level abstractions, self-service operations, etc.*) to end users, with the benefits highlighted below.
### No "Juggling" Approach to Manage Kubernetes Objects
@@ -67,7 +57,15 @@ The issue above could be even painful if the component instance is not *Deployme
#### Standard Contract Behind The Abstraction
-The encapsulation engine of KubeVela is designed to relieve such burden of managing versionized Kubernetes resources manually. In nutshell, all the needed Kubernetes resources for an app are now encapsulated in a single abstraction, and KubeVela will maintain the instance name, revisions, labels and selector by the battle tested reconcile loop automation, not by human hand. At the meantime, the existence of definition objects allow the platform team to customize the details of all above metadata behind the abstraction, even control the behavior of how to do revision.
+KubeVela is designed to relieve the burden of manually managing versioned Kubernetes resources. In a nutshell, all the Kubernetes resources an app needs are now encapsulated in a single abstraction, and KubeVela maintains the instance names, revisions, labels and selectors through battle tested reconcile loop automation, not by human hand. Meanwhile, the definition objects allow the platform team to customize all of the above metadata behind the abstraction, and even control how revisions are produced.
-Thus, all those metadata now become a standard contract that any day 2 operation controller such as Istio or rollout can rely on. This is the key to ensure our platform could provide user friendly experience but keep "transparent" to the operational behaviors.
+Thus, all this metadata now becomes a standard contract that any "day 2" operation controller, such as Istio or a rollout controller, can rely on. This is the key to ensuring our platform provides a user friendly experience while staying "transparent" to operational behaviors.
+### No Configuration Drift
+Light-weighted and flexible in defining abstractions, existing encapsulation solutions today mostly work at the client side, for example DSL/IaC (Infrastructure as Code) tools and Helm. This approach is easy to adopt and less invasive in the user cluster.
+But client side abstractions always lead to an issue called *Infrastructure/Configuration Drift*, i.e. the generated component instances are not in line with the expected configuration. This can be caused by incomplete coverage, less-than-perfect processes or emergency changes.
+Hence, all abstractions in KubeVela are designed to be maintained with a [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/), leveraging the Kubernetes control plane to eliminate configuration drift while keeping the flexibility and velocity enabled by existing encapsulation solutions (e.g. DSL/IaC and templating).

View File

@@ -1,21 +1,22 @@
---
-title: Multi-Version Multi-Cluster Application Deployment
+title: Multi-Cluster Application Deployment
 ---
 # Introduction
-KubeVela provides Application CRD which templates the low level resources and exposes high level parameters to users. But that's not enough -- it often requires a couple of standard techniques to deploy an application in production:
-- Rolling upgrade: To continuously deploy apps requires to rollout in a safe manner which usually involves step by step rollout batches and analysis.
+Modern application infrastructure involves multiple clusters to ensure high availability and maximize service throughput. In this section, we will introduce how to use KubeVela to achieve application deployment across multiple clusters, with the following features supported:
+- Rolling Upgrade: Continuously deploying apps requires rolling out in a safe manner, which usually involves step by step rollout batches and analysis.
 - Traffic shifting: When rolling upgrade an app, it needs to split the traffic onto both the old and new revisions to verify the new version while preserving service availability.
-- Multi-cluster: Modern application infrastructure involves multiple clusters to ensure high availability and maximize service throughput.
 ## AppDeployment CRD
-The AppDeployment CRD has been provided to satisfy such requirements. Here's an overview of the API:
+The `AppDeployment` CRD has been provided to satisfy such requirements. Here's an overview of the API:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppDeployment
metadata:
name: sample-appdeploy
spec:
traffic:
hosts:
@@ -76,6 +77,7 @@ spec:
The clusters selected in the `placement` section above are defined in the `Cluster` CRD. Here's what it looks like:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Cluster
metadata:
name: prod-cluster-1
@@ -89,7 +91,8 @@ spec:
The secret must contain the kubeconfig credentials in the `config` field:
```yaml
-kind: secret:
+apiVersion: v1
+kind: Secret
 metadata:
 name: kubeconfig-cluster-1
data:
@@ -130,7 +133,7 @@ You must run all commands in that directory.
example-app-v1 116s
```
-With above annotation this won't create any pod instances.
+> Note: with the `app.oam.dev/revision-only: "true"` annotation, the `Application` resource above won't create any pod instances and leaves the real deployment process to `AppDeployment`.
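For reference, the annotation sits in the Application's metadata like this (a sketch; the app and component names are illustrative, not from the example files in this changeset):

```yaml
# Sketch: an Application marked revision-only. Applying it records an
# AppRevision but creates no pods; AppDeployment performs the real rollout.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app            # illustrative name
  annotations:
    app.oam.dev/revision-only: "true"
spec:
  components:
    - name: frontend           # illustrative component
      type: webservice
      properties:
        image: stefanprodan/podinfo:4.0.3
```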
1. Then use the above AppRevision to create an AppDeployment.
@@ -138,7 +141,7 @@ You must run all commands in that directory.
$ kubectl apply -f appdeployment-1.yaml
```
-> Note that in order to AppDeployment to work, your workload object must have a `spec.replicas` field for scaling.
+> Note: in order for `AppDeployment` to work, your workload object must have a `spec.replicas` field for scaling.
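In other words, the workload rendered by the component must expose a scalable replica count, as a plain Deployment does. A sketch (object names are illustrative):

```yaml
# Sketch: AppDeployment scales workloads through `spec.replicas`,
# so the underlying workload object must carry that field.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload       # illustrative name
spec:
  replicas: 2                  # the field AppDeployment relies on for scaling
  selector:
    matchLabels: {app: example}
  template:
    metadata:
      labels: {app: example}
    spec:
      containers:
        - name: main
          image: stefanprodan/podinfo:4.0.3
```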
1. Now you can check that there will be 1 deployment and 2 pod instances deployed

View File

@@ -18,7 +18,7 @@ We design KubeVela rollout solutions with the following principles in mind
related logic. The trait and application related logic can be easily encapsulated into its own
package.
 - Second, the core rollout related logic is easily extensible to support different types of
-workloads, i.e. Deployment, Cloneset, Statefulset, Daemonset or even customized workloads.
+workloads, i.e. Deployment, CloneSet, StatefulSet, DaemonSet or even customized workloads.
- Thirdly, the core rollout related logic has a well documented state machine that
performs state transitions explicitly.
- Finally, the controllers can support all the rollout/upgrade needs of an application running
@@ -49,9 +49,9 @@ spec:
```
## User Experience Workflow
-Here is the end to end user experience
+Here is the end to end user experience based on [CloneSet](https://openkruise.io/en-us/docs/cloneset.html):
-1. Install Open Kurise and CloneSet based workloadDefinition
+1. Install CloneSet and create a `ComponentDefinition` for it.
```shell
helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
```

View File

@@ -80,6 +80,7 @@ spec:
message: "type: "+ context.outputs.service.spec.type +",\t clusterIP:"+ context.outputs.service.spec.clusterIP+",\t ports:"+ "\(context.outputs.service.spec.ports[0].port)"+",\t domain"+context.outputs.ingress.spec.rules[0].host
healthPolicy: |
isHealth: len(context.outputs.service.spec.clusterIP) > 0
podDisruptive: false
schematic:
cue:
template: |

View File

@@ -1,7 +1,7 @@
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
-name: test-rolling-pause
+name: test-rolling
annotations:
"app.oam.dev/rollout-template": "true"
spec:
@@ -12,7 +12,6 @@ spec:
cmd:
- ./podinfo
- stress-cpu=1
-image: stefanprodan/podinfo:4.0.3
+image: stefanprodan/podinfo:4.0.6
port: 8080
-updateStrategyType: InPlaceIfPossible
 replicas: 10
 updateStrategyType: InPlaceIfPossible

View File

@@ -8,8 +8,10 @@ spec:
componentList:
- metrics-provider
 rolloutPlan:
+targetSize: 6
+rolloutStrategy: "IncreaseFirst"
 rolloutBatches:
-- replicas: 10%
-- replicas: 2
-- replicas: 2
-- replicas: 1
+- replicas: 20%
+- replicas: 40%
+- replicas: 40%

View File

@@ -5,6 +5,7 @@ metadata:
spec:
appliesToWorkloads:
- "*"
podDisruptive: false
schematic:
cue:
template: |

View File

@@ -9,6 +9,7 @@ spec:
- webservice
- worker
- deployments.apps
podDisruptive: true
extension:
template: |-
patch: {

View File

@@ -9,6 +9,7 @@ spec:
- webservice
- worker
- deployments.apps
podDisruptive: true
extension:
template: |-
patch: {

View File

@@ -12,6 +12,7 @@ spec:
workloadRefPath: spec.workloadRef
definitionRef:
name: autoscalers.standard.oam.dev
podDisruptive: true
extension:
install:
helm:

View File

@@ -4,6 +4,7 @@ metadata:
name: expose
namespace: vela-system
spec:
podDisruptive: false
schematic:
cue:
template: |-

View File

@@ -8,6 +8,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |

View File

@@ -9,6 +9,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |-

View File

@@ -8,6 +8,7 @@ metadata:
spec:
appliesToWorkloads:
- webservice # this should be some knative like workload
podDisruptive: true
schematic:
cue:
template: |-

View File

@@ -9,6 +9,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |-

View File

@@ -9,6 +9,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |-

View File

@@ -12,6 +12,7 @@ spec:
name: canaries.flagger.app
workloadRefPath: spec.targetRef
revisionEnabled: true
podDisruptive: true
extension:
install:
helm:

View File

@@ -11,6 +11,7 @@ spec:
workloadRefPath: spec.workloadRef
definitionRef:
name: routes.standard.oam.dev
podDisruptive: false
extension:
install:
helm:

View File

@@ -8,6 +8,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: true
schematic:
cue:
template: |-

View File

@@ -9,6 +9,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: false
schematic:
cue:
template: |-

View File

@@ -5,7 +5,6 @@ module.exports = {
label: 'Overview',
items: [
'introduction',
-'concepts',
],
},
{
@@ -14,11 +13,20 @@ module.exports = {
items: [
'install',
'quick-start',
+'concepts',
],
},
{
type: 'category',
-label: 'Rollout Features',
+label: 'Define Application',
+items:[
+'platform-engineers/overview',
+'application',
+]
+},
+{
+type: 'category',
+label: 'Rollout Application',
 items:[
 "rollout/rollout",
 'rollout/appdeploy'
@@ -28,13 +36,7 @@ module.exports = {
type: 'category',
label: 'Platform Builder Guide',
items: [
-{
-'Design Abstraction': [
-'platform-engineers/overview',
-'application',
-'platform-engineers/definition-and-templates',
-]
-},
+'platform-engineers/definition-and-templates',
{
'Visualization': [
'platform-engineers/openapi-v3-json-schema'

View File

@@ -20,6 +20,7 @@ spec:
appliesToWorkloads:
- webservice
- worker
podDisruptive: false
schematic:
cue:
template: |

View File

@@ -12,6 +12,7 @@ spec:
definitionRef:
name: manualscalertraits.core.oam.dev
workloadRefPath: spec.workloadRef
podDisruptive: true
schematic:
cue:
template: |

View File

@@ -34,8 +34,8 @@ cat docs/sidebars.js > git-page/sidebars.js
echo "clear en docs"
rm -r git-page/docs/*
-echo "clear zh docs"
-rm -r git-page/i18n/zh/docusaurus-plugin-content-docs/*
+#echo "clear zh docs"
+#rm -r git-page/i18n/zh/docusaurus-plugin-content-docs/*
echo "clear resources"
rm -r git-page/resources/*
@@ -44,7 +44,7 @@ cp -R docs/resources/* git-page/resources/
echo "update docs"
cp -R docs/en/* git-page/docs/
-cp -R docs/zh-CN/* git-page/i18n/zh/docusaurus-plugin-content-docs/
+#cp -R docs/zh-CN/* git-page/i18n/zh/docusaurus-plugin-content-docs/
echo "git push"
cd git-page

View File

@@ -16,7 +16,6 @@ spec:
plural: applicationrevisions
shortNames:
 - apprev
-- revisions
singular: applicationrevision
scope: Namespaced
validation:
@@ -407,8 +406,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
+rolloutOriginalSize:
+description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
+format: int32
+type: integer
 rolloutTargetSize:
-description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
+description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
 format: int32
 type: integer
services:
@@ -752,6 +755,9 @@ spec:
description: Extension is used for extension needs by OAM platform builders
type: object
+podDisruptive:
+description: PodDisruptive specifies whether using the trait will cause the pod to restart or not.
+type: boolean
revisionEnabled:
description: Revision indicates whether a trait is aware of component revision
type: boolean

View File

@@ -15,7 +15,7 @@ spec:
listKind: ApplicationList
plural: applications
shortNames:
-- apps
+- app
singular: application
scope: Namespaced
validation:
@@ -392,8 +392,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
+rolloutOriginalSize:
+description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
+format: int32
+type: integer
 rolloutTargetSize:
-description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
+description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
 format: int32
 type: integer
services:

View File

@@ -35,7 +35,6 @@ spec:
plural: approllouts
shortNames:
 - approllout
-- rollout
singular: approllout
scope: Namespaced
validation:
@@ -339,8 +338,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
+rolloutOriginalSize:
+description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
+format: int32
+type: integer
 rolloutTargetSize:
-description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
+description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
 format: int32
 type: integer
targetGeneration:

View File

@@ -14,8 +14,12 @@ spec:
kind: ResourceTracker
listKind: ResourceTrackerList
plural: resourcetrackers
+shortNames:
+- tracker
 singular: resourcetracker
 scope: Cluster
+subresources:
+status: {}
validation:
openAPIV3Schema:
description: A ResourceTracker represents a tracker for tracking cross-namespace resources
@@ -28,6 +32,35 @@ spec:
type: string
metadata:
type: object
status:
description: ResourceTrackerStatus define the status of resourceTracker
properties:
trackedResources:
items:
description: A TypedReference refers to an object by Name, Kind, and APIVersion. It is commonly used to reference across-namespace objects
properties:
apiVersion:
description: APIVersion of the referenced object.
type: string
kind:
description: Kind of the referenced object.
type: string
name:
description: Name of the referenced object.
type: string
namespace:
description: Namespace of the objects outside the application namespace.
type: string
uid:
description: UID of the referenced object.
type: string
required:
- apiVersion
- kind
- name
type: object
type: array
type: object
type: object
version: v1beta1
versions:

View File

@@ -18,6 +18,8 @@ spec:
kind: ScopeDefinition
listKind: ScopeDefinitionList
plural: scopedefinitions
+shortNames:
+- scope
singular: scopedefinition
scope: Namespaced
subresources: {}

View File

@@ -71,6 +71,9 @@ spec:
description: Extension is used for extension needs by OAM platform builders
type: object
+podDisruptive:
+description: PodDisruptive specifies whether using the trait will cause the pod to restart or not.
+type: boolean
revisionEnabled:
description: Revision indicates whether a trait is aware of component revision
type: boolean

View File

@@ -9,6 +9,8 @@ metadata:
spec:
group: standard.oam.dev
names:
+categories:
+- oam
kind: RolloutTrait
listKind: RolloutTraitList
plural: rollouttraits
@@ -340,8 +342,12 @@ spec:
rollingState:
description: RollingState is the Rollout State
type: string
+rolloutOriginalSize:
+description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
+format: int32
+type: integer
 rolloutTargetSize:
-description: RolloutTargetTotalSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
+description: RolloutTargetSize is the size of the target resources. This is determined once the initial spec verification and does not change until the rollout is restarted
 format: int32
 type: integer
targetGeneration:

View File

@@ -68,17 +68,8 @@ type Workload struct {
Params map[string]interface{}
Traits []*Trait
Scopes []Scope
Template string
HealthCheckPolicy string
CustomStatusFormat string
Helm *common.Helm
DefinitionReference common.WorkloadGVK
// TODO: remove all the duplicate fields above as workload now contains the whole template
FullTemplate *util.Template
engine definition.AbstractEngine
FullTemplate *util.Template
engine definition.AbstractEngine
// OutputSecretName is the secret name which this workload will generate after it successfully generate a cloud resource
OutputSecretName string
// RequiredSecrets stores secret names which the workload needs from cloud resource component and its context
@@ -103,17 +94,17 @@ func (wl *Workload) GetUserConfigName() string {
// EvalContext eval workload template and set result to context
func (wl *Workload) EvalContext(ctx process.Context) error {
return wl.engine.Complete(ctx, wl.Template, wl.Params)
return wl.engine.Complete(ctx, wl.FullTemplate.TemplateStr, wl.Params)
}
// EvalStatus eval workload status
func (wl *Workload) EvalStatus(ctx process.Context, cli client.Client, ns string) (string, error) {
return wl.engine.Status(ctx, cli, ns, wl.CustomStatusFormat)
return wl.engine.Status(ctx, cli, ns, wl.FullTemplate.CustomStatus)
}
// EvalHealth eval workload health check
func (wl *Workload) EvalHealth(ctx process.Context, client client.Client, namespace string) (bool, error) {
return wl.engine.HealthCheck(ctx, client, namespace, wl.HealthCheckPolicy)
return wl.engine.HealthCheck(ctx, client, namespace, wl.FullTemplate.Health)
}
// Scope defines the scope of workload
@@ -207,18 +198,13 @@ func (p *Parser) parseWorkload(ctx context.Context, comp v1beta1.ApplicationComp
return nil, errors.WithMessagef(err, "fail to parse settings for %s", comp.Name)
}
workload := &Workload{
Traits: []*Trait{},
Name: comp.Name,
Type: comp.Type,
CapabilityCategory: templ.CapabilityCategory,
Template: templ.TemplateStr,
HealthCheckPolicy: templ.Health,
CustomStatusFormat: templ.CustomStatus,
DefinitionReference: templ.Reference,
Helm: templ.Helm,
FullTemplate: templ,
Params: settings,
engine: definition.NewWorkloadAbstractEngine(comp.Name, p.pd),
Traits: []*Trait{},
Name: comp.Name,
Type: comp.Type,
CapabilityCategory: templ.CapabilityCategory,
FullTemplate: templ,
Params: settings,
engine: definition.NewWorkloadAbstractEngine(comp.Name, p.pd),
}
for _, traitValue := range comp.Traits {
properties, err := util.RawExtension2Map(&traitValue.Properties)
@@ -398,7 +384,7 @@ func generateComponentFromKubeModule(c client.Client, wl *Workload, appName, rev
}
// NOTE a hack way to enable using CUE capabilities on KUBE schematic workload
wl.Template = fmt.Sprintf(`
wl.FullTemplate.TemplateStr = fmt.Sprintf(`
output: {
%s
}`, string(cueRaw))
@@ -487,19 +473,19 @@ func setParameterValuesToKubeObj(obj *unstructured.Unstructured, values paramVal
}
func generateComponentFromHelmModule(c client.Client, wl *Workload, appName, revision, ns string) (*v1alpha2.Component, *v1alpha2.ApplicationConfigurationComponent, error) {
gv, err := schema.ParseGroupVersion(wl.DefinitionReference.APIVersion)
gv, err := schema.ParseGroupVersion(wl.FullTemplate.Reference.APIVersion)
if err != nil {
return nil, nil, err
}
targetWokrloadGVK := gv.WithKind(wl.DefinitionReference.Kind)
targetWorkloadGVK := gv.WithKind(wl.FullTemplate.Reference.Kind)
// NOTE this is a hack way to enable using CUE module capabilities on Helm module workload
// construct an empty base workload according to its GVK
wl.Template = fmt.Sprintf(`
wl.FullTemplate.TemplateStr = fmt.Sprintf(`
output: {
apiVersion: "%s"
kind: "%s"
}`, targetWokrloadGVK.GroupVersion().String(), targetWokrloadGVK.Kind)
}`, targetWorkloadGVK.GroupVersion().String(), targetWorkloadGVK.Kind)
// re-use the way CUE module generates comp & acComp
comp, acComp, err := generateComponentFromCUEModule(c, wl, appName, revision, ns)
@@ -507,7 +493,7 @@ output: {
return nil, nil, err
}
release, repo, err := helm.RenderHelmReleaseAndHelmRepo(wl.Helm, wl.Name, appName, ns, wl.Params)
release, repo, err := helm.RenderHelmReleaseAndHelmRepo(wl.FullTemplate.Helm, wl.Name, appName, ns, wl.Params)
if err != nil {
return nil, nil, err
}
@@ -593,7 +579,7 @@ func GetOutputSecretNames(workloads *Workload) (string, error) {
func parseWorkloadInsertSecretTo(ctx context.Context, c client.Client, namespace string, wl *Workload) ([]process.RequiredSecrets, error) {
var requiredSecret []process.RequiredSecrets
api, err := utils.GenerateOpenAPISchemaFromDefinition(wl.Name, wl.Template)
api, err := utils.GenerateOpenAPISchemaFromDefinition(wl.Name, wl.FullTemplate.TemplateStr)
if err != nil {
if !errors.Is(err, errors.Errorf(utils.ErrNoSectionParameterInCue, wl.Name)) {
return nil, nil
@@ -667,7 +653,7 @@ func (wl *Workload) IsCloudResourceProducer() bool {
// IsCloudResourceConsumer checks whether a workload is cloud resource consumer role
func (wl *Workload) IsCloudResourceConsumer() bool {
requiredSecretTag := strings.TrimRight(utils.InsertSecretToTag, "=")
matched, err := regexp.Match(regexp.QuoteMeta(requiredSecretTag), []byte(wl.Template))
matched, err := regexp.Match(regexp.QuoteMeta(requiredSecretTag), []byte(wl.FullTemplate.TemplateStr))
if err != nil || !matched {
return false
}

View File

@@ -57,7 +57,8 @@ var expectedExceptApp = &Appfile{
"image": "busybox",
"cmd": []interface{}{"sleep", "1000"},
},
Template: `
FullTemplate: &util.Template{
TemplateStr: `
output: {
apiVersion: "apps/v1"
kind: "Deployment"
@@ -96,6 +97,7 @@ var expectedExceptApp = &Appfile{
cmd?: [...string]
}`,
},
Traits: []*Trait{
{
Name: "scaler",
@@ -306,7 +308,8 @@ var _ = Describe("Test appFile parser", func() {
}},
},
engine: definition.NewWorkloadAbstractEngine("myweb", pd),
Template: `
FullTemplate: &util.Template{
TemplateStr: `
output: {
apiVersion: "apps/v1"
kind: "Deployment"
@@ -348,6 +351,7 @@ var _ = Describe("Test appFile parser", func() {
cmd?: [...string]
}`,
},
Traits: []*Trait{
{
Name: "scaler",
@@ -545,22 +549,24 @@ var _ = Describe("Test appfile parser to parse helm module", func() {
`,
},
},
Helm: &common.Helm{
Release: util.Object2RawExtension(map[string]interface{}{
"chart": map[string]interface{}{
"spec": map[string]interface{}{
"chart": "podinfo",
"version": "5.1.4",
FullTemplate: &util.Template{
Reference: common.WorkloadGVK{
APIVersion: "apps/v1",
Kind: "Deployment",
},
Helm: &common.Helm{
Release: util.Object2RawExtension(map[string]interface{}{
"chart": map[string]interface{}{
"spec": map[string]interface{}{
"chart": "podinfo",
"version": "5.1.4",
},
},
},
}),
Repository: util.Object2RawExtension(map[string]interface{}{
"url": "http://oam.dev/catalog/",
}),
},
DefinitionReference: common.WorkloadGVK{
APIVersion: "apps/v1",
Kind: "Deployment",
}),
Repository: util.Object2RawExtension(map[string]interface{}{
"url": "http://oam.dev/catalog/",
}),
},
},
},
},
@@ -770,10 +776,10 @@ spec:
},
},
},
},
DefinitionReference: common.WorkloadGVK{
APIVersion: "apps/v1",
Kind: "Deployment",
Reference: common.WorkloadGVK{
APIVersion: "apps/v1",
Kind: "Deployment",
},
},
},
},
@@ -922,8 +928,8 @@ settings: {
)
wl := &Workload{
Name: "abc",
Template: template,
Name: "abc",
FullTemplate: &util.Template{TemplateStr: template},
}
By("call target function")
secrets, err := parseWorkloadInsertSecretTo(ctx, k8sClient, ns, wl)
@@ -963,7 +969,7 @@ parameter: {
Params: map[string]interface{}{
"dbSecret": targetSecretName,
},
Template: template,
FullTemplate: &util.Template{TemplateStr: template},
}
By("create secret")
s := &corev1.Secret{
@@ -1021,7 +1027,7 @@ var _ = Describe("Test IsCloudResourceConsumer", func() {
Context("Workload is a Cloud Resource consumer", func() {
It("", func() {
wl := &Workload{
Template: "// +insertSecretTo=dbConn",
FullTemplate: &util.Template{TemplateStr: "// +insertSecretTo=dbConn"},
}
Expect(wl.IsCloudResourceConsumer()).Should(Equal(true))
})
@@ -1030,7 +1036,7 @@ var _ = Describe("Test IsCloudResourceConsumer", func() {
Context("Workload is a Cloud Resource consumer", func() {
It("", func() {
wl := &Workload{
Template: "// +useage=dbConn",
FullTemplate: &util.Template{TemplateStr: "// +useage=dbConn"},
}
Expect(wl.IsCloudResourceProducer()).Should(Equal(false))
})

View File

@@ -27,7 +27,6 @@ import (
apps "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
@@ -165,13 +164,6 @@ func (r *Controller) reconcileBatchInRolling(ctx context.Context, workloadContro
return
}
// makes sure that the current batch and replica count in the status are validate
err := r.validateRollingBatchStatus(int(r.rolloutStatus.RolloutTargetTotalSize))
if err != nil {
r.rolloutStatus.RolloutFailing(err.Error())
return
}
switch r.rolloutStatus.BatchRollingState {
case v1alpha1.BatchInitializingState:
r.initializeOneBatch(ctx)
@@ -330,53 +322,6 @@ func (r *Controller) finalizeRollout(ctx context.Context) {
r.rolloutStatus.StateTransition(v1alpha1.RollingFinalizedEvent)
}
// verify that the upgradedReplicas and current batch in the status are valid according to the spec
func (r *Controller) validateRollingBatchStatus(totalSize int) error {
status := r.rolloutStatus
spec := r.rolloutSpec
podCount := 0
if spec.BatchPartition != nil && *spec.BatchPartition < status.CurrentBatch {
err := fmt.Errorf("the current batch value in the status is greater than the batch partition")
klog.ErrorS(err, "we have moved past the user defined partition", "user specified batch partition",
*spec.BatchPartition, "current batch we are working on", status.CurrentBatch)
return err
}
upgradedReplicas := int(status.UpgradedReplicas)
currentBatch := int(status.CurrentBatch)
// calculate the lower bound of the possible pod count just before the current batch
for i, r := range spec.RolloutBatches {
if i < currentBatch {
batchSize, _ := intstr.GetValueFromIntOrPercent(&r.Replicas, totalSize, true)
podCount += batchSize
} else {
break
}
}
// the recorded number should be at least as large as all the pods before the current batch
if podCount > upgradedReplicas {
err := fmt.Errorf("the upgraded replica in the status is less than all the pods in the previous batch")
klog.ErrorS(err, "rollout status inconsistent", "upgraded num status", upgradedReplicas, "pods in all the previous batches", podCount)
return err
}
// calculate the upper bound with the current batch
if currentBatch == len(spec.RolloutBatches)-1 {
// avoid round up problems
podCount = totalSize
} else {
batchSize, _ := intstr.GetValueFromIntOrPercent(&spec.RolloutBatches[currentBatch].Replicas,
totalSize, true)
podCount += batchSize
}
// the recorded number should be no more than all the pods up to and including the active batch
if podCount < upgradedReplicas {
err := fmt.Errorf("the upgraded replica in the status is greater than all the pods in the current batch")
klog.ErrorS(err, "rollout status inconsistent", "total target size", totalSize,
"upgraded num status", upgradedReplicas, "pods in the batches including the current batch", podCount)
return err
}
return nil
}
// GetWorkloadController picks the right workload controller to work on the workload
func (r *Controller) GetWorkloadController() (workloads.WorkloadController, error) {
kind := r.targetWorkload.GetObjectKind().GroupVersionKind().Kind
@@ -392,13 +337,19 @@ func (r *Controller) GetWorkloadController() (workloads.WorkloadController, erro
if r.targetWorkload.GroupVersionKind().Group == kruisev1.GroupVersion.Group {
if r.targetWorkload.GetKind() == reflect.TypeOf(kruisev1.CloneSet{}).Name() {
return workloads.NewCloneSetController(r.client, r.recorder, r.parentController,
// check whether current rollout plan is for workload rolling or scaling
if r.sourceWorkload != nil {
return workloads.NewCloneSetRolloutController(r.client, r.recorder, r.parentController,
r.rolloutSpec, r.rolloutStatus, target), nil
}
return workloads.NewCloneSetScaleController(r.client, r.recorder, r.parentController,
r.rolloutSpec, r.rolloutStatus, target), nil
}
}
if r.targetWorkload.GroupVersionKind().Group == apps.GroupName {
if r.targetWorkload.GetKind() == reflect.TypeOf(apps.Deployment{}).Name() {
// TODO: create deployment scale controller when current rollout plan is for scale
return workloads.NewDeploymentController(r.client, r.recorder, r.parentController,
r.rolloutSpec, r.rolloutStatus, source, target), nil
}


@@ -110,9 +110,7 @@ func TestMakeHTTPRequest(t *testing.T) {
},
}
for testName, tt := range tests {
// deep copy it before going to goroutine
tt := tt
t.Run(testName, func(t *testing.T) {
func(testName string) {
// generate a test server so we can capture and inspect the request
testServer := NewMock(tt.httpParameter.method, tt.httpParameter.statusCode, tt.httpParameter.body)
defer testServer.Close()
@@ -141,7 +139,7 @@ func TestMakeHTTPRequest(t *testing.T) {
if string(gotReply) != tt.want.body {
t.Errorf("\n%s\nr.Reconcile(...): want reply `%s`, got reply:`%s`\n", testName, tt.want.body, string(gotReply))
}
})
}(testName)
}
}


@@ -21,8 +21,6 @@ import (
"fmt"
"github.com/crossplane/crossplane-runtime/pkg/event"
kruise "github.com/openkruise/kruise-api/apps/v1alpha1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
@@ -34,35 +32,29 @@ import (
"github.com/oam-dev/kubevela/pkg/oam"
)
// CloneSetController is responsible for handling CloneSet type workloads
type CloneSetController struct {
client client.Client
recorder event.Recorder
parentController oam.Object
rolloutSpec *v1alpha1.RolloutPlan
rolloutStatus *v1alpha1.RolloutStatus
workloadNamespacedName types.NamespacedName
cloneSet *kruise.CloneSet
// CloneSetRolloutController is responsible for handling the rollout of CloneSet type workloads
type CloneSetRolloutController struct {
cloneSetController
}
// NewCloneSetController creates a new Cloneset controller
func NewCloneSetController(client client.Client, recorder event.Recorder, parentController oam.Object,
rolloutSpec *v1alpha1.RolloutPlan, rolloutStatus *v1alpha1.RolloutStatus, workloadName types.NamespacedName) *CloneSetController {
return &CloneSetController{
client: client,
recorder: recorder,
parentController: parentController,
rolloutSpec: rolloutSpec,
rolloutStatus: rolloutStatus,
workloadNamespacedName: workloadName,
// NewCloneSetRolloutController creates a new Cloneset rollout controller
func NewCloneSetRolloutController(client client.Client, recorder event.Recorder, parentController oam.Object,
rolloutSpec *v1alpha1.RolloutPlan, rolloutStatus *v1alpha1.RolloutStatus, workloadName types.NamespacedName) *CloneSetRolloutController {
return &CloneSetRolloutController{
cloneSetController: cloneSetController{
client: client,
recorder: recorder,
parentController: parentController,
rolloutSpec: rolloutSpec,
rolloutStatus: rolloutStatus,
workloadNamespacedName: workloadName,
},
}
}
// VerifySpec verifies that the target rollout resource is consistent with the rollout spec
func (c *CloneSetController) VerifySpec(ctx context.Context) (bool, error) {
func (c *CloneSetRolloutController) VerifySpec(ctx context.Context) (bool, error) {
var verifyErr error
defer func() {
if verifyErr != nil {
klog.Error(verifyErr)
@@ -71,16 +63,22 @@ func (c *CloneSetController) VerifySpec(ctx context.Context) (bool, error) {
}()
// fetch the cloneset and get its current size
totalReplicas, verifyErr := c.size(ctx)
currentReplicas, verifyErr := c.size(ctx)
if verifyErr != nil {
// do not fail the rollout because we can't get the resource
c.rolloutStatus.RolloutRetry(verifyErr.Error())
// nolint: nilerr
return false, nil
}
// record the size
klog.InfoS("record the target size", "total replicas", totalReplicas)
c.rolloutStatus.RolloutTargetTotalSize = totalReplicas
// the declared cloneset size has to match its current size, i.e. it must not be mid-scale
if c.cloneSet.Spec.Replicas != nil && *c.cloneSet.Spec.Replicas != c.cloneSet.Status.Replicas {
verifyErr = fmt.Errorf("the cloneset is still scaling, target = %d, cloneset size = %d",
*c.cloneSet.Spec.Replicas, c.cloneSet.Status.Replicas)
// we can wait for the cloneset scale operation to finish
c.rolloutStatus.RolloutRetry(verifyErr.Error())
return false, nil
}
// make sure that the updateRevision is different from what we have already done
targetHash := c.cloneSet.Status.UpdateRevision
@@ -91,10 +89,15 @@ func (c *CloneSetController) VerifySpec(ctx context.Context) (bool, error) {
c.rolloutStatus.NewPodTemplateIdentifier = targetHash
// check if the rollout batch replicas added up to the Cloneset replicas
if verifyErr = c.verifyRolloutBatchReplicaValue(totalReplicas); verifyErr != nil {
if verifyErr = c.verifyRolloutBatchReplicaValue(currentReplicas); verifyErr != nil {
return false, verifyErr
}
// record the size
klog.InfoS("record the target size", "total replicas", currentReplicas)
c.rolloutStatus.RolloutTargetSize = currentReplicas
c.rolloutStatus.RolloutOriginalSize = currentReplicas
// check if the cloneset is disabled
if !c.cloneSet.Spec.UpdateStrategy.Paused {
return false, fmt.Errorf("the cloneset %s is in the middle of updating, needs to be paused first",
@@ -114,7 +117,7 @@ func (c *CloneSetController) VerifySpec(ctx context.Context) (bool, error) {
}
// Initialize makes sure that the cloneset is under our control
func (c *CloneSetController) Initialize(ctx context.Context) (bool, error) {
func (c *CloneSetRolloutController) Initialize(ctx context.Context) (bool, error) {
totalReplicas, err := c.size(ctx)
if err != nil {
c.rolloutStatus.RolloutRetry(err.Error())
@@ -148,10 +151,15 @@ func (c *CloneSetController) Initialize(ctx context.Context) (bool, error) {
// RolloutOneBatchPods calculates the number of pods we can upgrade once according to the rollout spec
// and then set the partition accordingly, return if we are done
func (c *CloneSetController) RolloutOneBatchPods(ctx context.Context) (bool, error) {
func (c *CloneSetRolloutController) RolloutOneBatchPods(ctx context.Context) (bool, error) {
// calculate what's the total pods that should be upgraded given the currentBatch in the status
cloneSetSize, _ := c.size(ctx)
newPodTarget := c.calculateNewPodTarget(int(cloneSetSize))
cloneSetSize, err := c.size(ctx)
if err != nil {
c.rolloutStatus.RolloutRetry(err.Error())
return false, nil
}
newPodTarget := calculateNewBatchTarget(c.rolloutSpec, 0, int(cloneSetSize), int(c.rolloutStatus.CurrentBatch))
// set the Partition as the desired number of pods in old revisions.
clonePatch := client.MergeFrom(c.cloneSet.DeepCopyObject())
c.cloneSet.Spec.UpdateStrategy.Partition = &intstr.IntOrString{Type: intstr.Int,
@@ -170,10 +178,14 @@ func (c *CloneSetController) RolloutOneBatchPods(ctx context.Context) (bool, err
return true, nil
}
// CheckOneBatchPods checks to see if the pods are all available according to the rollout plan
func (c *CloneSetController) CheckOneBatchPods(ctx context.Context) (bool, error) {
cloneSetSize, _ := c.size(ctx)
newPodTarget := c.calculateNewPodTarget(int(cloneSetSize))
// CheckOneBatchPods checks to see if enough pods are upgraded according to the rollout plan
func (c *CloneSetRolloutController) CheckOneBatchPods(ctx context.Context) (bool, error) {
cloneSetSize, err := c.size(ctx)
if err != nil {
c.rolloutStatus.RolloutRetry(err.Error())
return false, nil
}
newPodTarget := calculateNewBatchTarget(c.rolloutSpec, 0, int(cloneSetSize), int(c.rolloutStatus.CurrentBatch))
// get the number of ready pod from cloneset
readyPodCount := int(c.cloneSet.Status.UpdatedReadyReplicas)
currentBatch := c.rolloutSpec.RolloutBatches[c.rolloutStatus.CurrentBatch]
@@ -200,14 +212,41 @@ func (c *CloneSetController) CheckOneBatchPods(ctx context.Context) (bool, error
return false, nil
}
// FinalizeOneBatch makes sure that the rollout status are updated correctly
func (c *CloneSetController) FinalizeOneBatch(ctx context.Context) (bool, error) {
// nothing to do for cloneset for now
// FinalizeOneBatch makes sure that the upgradedReplicas and current batch in the status are valid according to the spec
func (c *CloneSetRolloutController) FinalizeOneBatch(ctx context.Context) (bool, error) {
status := c.rolloutStatus
spec := c.rolloutSpec
if spec.BatchPartition != nil && *spec.BatchPartition < status.CurrentBatch {
err := fmt.Errorf("the current batch value in the status is greater than the batch partition")
klog.ErrorS(err, "we have moved past the user defined partition", "user specified batch partition",
*spec.BatchPartition, "current batch we are working on", status.CurrentBatch)
return false, err
}
upgradedReplicas := int(status.UpgradedReplicas)
currentBatch := int(status.CurrentBatch)
// calculate the lower bound of the possible pod count just before the current batch
podCount := calculateNewBatchTarget(c.rolloutSpec, 0, int(c.rolloutStatus.RolloutTargetSize), currentBatch-1)
// the recorded number should be at least as large as all the pods before the current batch
if podCount > upgradedReplicas {
err := fmt.Errorf("the upgraded replica in the status is less than all the pods in the previous batch")
klog.ErrorS(err, "rollout status inconsistent", "upgraded num status", upgradedReplicas,
"pods in all the previous batches", podCount)
return false, err
}
// calculate the upper bound with the current batch
podCount = calculateNewBatchTarget(c.rolloutSpec, 0, int(c.rolloutStatus.RolloutTargetSize), currentBatch)
// the recorded number should be no more than all the pods up to and including the active batch
if podCount < upgradedReplicas {
err := fmt.Errorf("the upgraded replica in the status is greater than all the pods in the current batch")
klog.ErrorS(err, "rollout status inconsistent", "total target size", c.rolloutStatus.RolloutTargetSize,
"upgraded num status", upgradedReplicas, "pods in the batches including the current batch", podCount)
return false, err
}
return true, nil
}
// Finalize makes sure the Cloneset is all upgraded
func (c *CloneSetController) Finalize(ctx context.Context, succeed bool) bool {
func (c *CloneSetRolloutController) Finalize(ctx context.Context, succeed bool) bool {
if err := c.fetchCloneSet(ctx); err != nil {
c.rolloutStatus.RolloutRetry(err.Error())
return false
@@ -241,69 +280,18 @@ func (c *CloneSetController) Finalize(ctx context.Context, succeed bool) bool {
// ---------------------------------------------
// The functions below are helper functions
// ---------------------------------------------
// size fetches the Cloneset and returns the replicas (not the actual number of pods)
func (c *CloneSetController) size(ctx context.Context) (int32, error) {
if c.cloneSet == nil {
err := c.fetchCloneSet(ctx)
if err != nil {
return 0, err
}
}
// default is 1
if c.cloneSet.Spec.Replicas == nil {
return 1, nil
}
return *c.cloneSet.Spec.Replicas, nil
}
// check if the replicas in all the rollout batches add up to the right number
func (c *CloneSetController) verifyRolloutBatchReplicaValue(totalReplicas int32) error {
func (c *CloneSetRolloutController) verifyRolloutBatchReplicaValue(currentReplicas int32) error {
// the target size has to be the same as the cloneset size
if c.rolloutSpec.TargetSize != nil && *c.rolloutSpec.TargetSize != totalReplicas {
if c.rolloutSpec.TargetSize != nil && *c.rolloutSpec.TargetSize != currentReplicas {
return fmt.Errorf("the rollout plan is attempting to scale the cloneset, target = %d, cloneset size = %d",
*c.rolloutSpec.TargetSize, totalReplicas)
*c.rolloutSpec.TargetSize, currentReplicas)
}
// use a common function to check if the sum of all the batches can match the cloneset size
err := VerifySumOfBatchSizes(c.rolloutSpec, totalReplicas)
err := verifyBatchesWithRollout(c.rolloutSpec, currentReplicas)
if err != nil {
return err
}
return nil
}
func (c *CloneSetController) fetchCloneSet(ctx context.Context) error {
// get the cloneSet
workload := kruise.CloneSet{}
err := c.client.Get(ctx, c.workloadNamespacedName, &workload)
if err != nil {
if !apierrors.IsNotFound(err) {
c.recorder.Event(c.parentController, event.Warning("Failed to get the Cloneset", err))
}
return err
}
c.cloneSet = &workload
return nil
}
func (c *CloneSetController) calculateNewPodTarget(cloneSetSize int) int {
currentBatch := int(c.rolloutStatus.CurrentBatch)
newPodTarget := 0
if currentBatch == len(c.rolloutSpec.RolloutBatches)-1 {
newPodTarget = cloneSetSize
// handle the last batch specially: ignore the rest of the batch in case of rounding errors
klog.InfoS("use the cloneset size as the total pod target for the last rolling batch",
"current batch", currentBatch, "new version pod target", newPodTarget)
} else {
for i, r := range c.rolloutSpec.RolloutBatches {
batchSize, _ := intstr.GetValueFromIntOrPercent(&r.Replicas, cloneSetSize, true)
if i <= currentBatch {
newPodTarget += batchSize
} else {
break
}
}
klog.InfoS("Calculated the number of new version pod", "current batch", currentBatch,
"new version pod target", newPodTarget)
}
return newPodTarget
}


@@ -0,0 +1,292 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package workloads
import (
"context"
"fmt"
"github.com/crossplane/crossplane-runtime/pkg/event"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/klog/v2"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/oam/util"
)
// CloneSetScaleController is responsible for handling the scaling of CloneSet type workloads
type CloneSetScaleController struct {
cloneSetController
}
// NewCloneSetScaleController creates CloneSet scale controller
func NewCloneSetScaleController(client client.Client, recorder event.Recorder, parentController oam.Object, rolloutSpec *v1alpha1.RolloutPlan, rolloutStatus *v1alpha1.RolloutStatus, workloadName types.NamespacedName) *CloneSetScaleController {
return &CloneSetScaleController{
cloneSetController: cloneSetController{
client: client,
recorder: recorder,
parentController: parentController,
rolloutSpec: rolloutSpec,
rolloutStatus: rolloutStatus,
workloadNamespacedName: workloadName,
},
}
}
// VerifySpec verifies that the cloneset is stable and can be scaled
func (s *CloneSetScaleController) VerifySpec(ctx context.Context) (bool, error) {
var verifyErr error
defer func() {
if verifyErr != nil {
klog.Error(verifyErr)
s.recorder.Event(s.parentController, event.Warning("VerifyFailed", verifyErr))
}
}()
// the rollout has to have a target size in the scale case
if s.rolloutSpec.TargetSize == nil {
return false, fmt.Errorf("the rollout plan is attempting to scale the cloneset %s without a target",
s.workloadNamespacedName.Name)
}
// record the target size
s.rolloutStatus.RolloutTargetSize = *s.rolloutSpec.TargetSize
klog.InfoS("record the target size", "target size", *s.rolloutSpec.TargetSize)
// fetch the cloneset and get its current size
originalSize, verifyErr := s.size(ctx)
if verifyErr != nil {
// do not fail the rollout because we can't get the resource
s.rolloutStatus.RolloutRetry(verifyErr.Error())
// nolint: nilerr
return false, nil
}
s.rolloutStatus.RolloutOriginalSize = originalSize
klog.InfoS("record the original size", "original size", originalSize)
// check if the rollout batch replicas scale up/down to the replicas target
if verifyErr = verifyBatchesWithScale(s.rolloutSpec, int(originalSize),
int(s.rolloutStatus.RolloutTargetSize)); verifyErr != nil {
return false, verifyErr
}
// check if the cloneset is scaling
if originalSize != s.cloneSet.Status.Replicas {
verifyErr = fmt.Errorf("the cloneset %s is in the middle of scaling, target size = %d, real size = %d",
s.cloneSet.GetName(), originalSize, s.cloneSet.Status.Replicas)
// do not fail the rollout, we can wait
s.rolloutStatus.RolloutRetry(verifyErr.Error())
return false, nil
}
// check if the cloneset is upgrading
if !s.cloneSet.Spec.UpdateStrategy.Paused && s.cloneSet.Status.UpdatedReplicas != originalSize {
verifyErr = fmt.Errorf("the cloneset %s is in the middle of updating, target size = %d, updated pod = %d",
s.cloneSet.GetName(), originalSize, s.cloneSet.Status.UpdatedReplicas)
// do not fail the rollout, we can wait
s.rolloutStatus.RolloutRetry(verifyErr.Error())
return false, nil
}
// check if the cloneset has any controller
if controller := metav1.GetControllerOf(s.cloneSet); controller != nil {
return false, fmt.Errorf("the cloneset %s has a controller owner %s",
s.cloneSet.GetName(), controller.String())
}
// mark the scale verified
s.recorder.Event(s.parentController, event.Normal("Scale Verified",
"Rollout spec and the CloneSet resource are verified"))
return true, nil
}
// Initialize makes sure that the cloneset is under our control
func (s *CloneSetScaleController) Initialize(ctx context.Context) (bool, error) {
err := s.fetchCloneSet(ctx)
if err != nil {
s.rolloutStatus.RolloutRetry(err.Error())
// nolint: nilerr
return false, nil
}
if controller := metav1.GetControllerOf(s.cloneSet); controller != nil {
if controller.Kind == v1beta1.AppRolloutKind && controller.APIVersion == v1beta1.SchemeGroupVersion.String() {
// it's already there
return true, nil
}
}
// add the parent controller to the owner of the cloneset
clonePatch := client.MergeFrom(s.cloneSet.DeepCopyObject())
ref := metav1.NewControllerRef(s.parentController, v1beta1.AppRolloutKindVersionKind)
s.cloneSet.SetOwnerReferences(append(s.cloneSet.GetOwnerReferences(), *ref))
s.cloneSet.Spec.UpdateStrategy.Paused = false
// patch the CloneSet
if err := s.client.Patch(ctx, s.cloneSet, clonePatch, client.FieldOwner(s.parentController.GetUID())); err != nil {
s.recorder.Event(s.parentController, event.Warning("Failed to start the cloneset update", err))
s.rolloutStatus.RolloutRetry(err.Error())
return false, nil
}
// mark the rollout initialized
s.recorder.Event(s.parentController, event.Normal("Scale Initialized", "Cloneset is initialized"))
return true, nil
}
// RolloutOneBatchPods calculates the number of pods we can scale to according to the rollout spec
func (s *CloneSetScaleController) RolloutOneBatchPods(ctx context.Context) (bool, error) {
err := s.fetchCloneSet(ctx)
if err != nil {
s.rolloutStatus.RolloutRetry(err.Error())
// nolint: nilerr
return false, nil
}
clonePatch := client.MergeFrom(s.cloneSet.DeepCopyObject())
// set the replica according to the batch
newPodTarget := calculateNewBatchTarget(s.rolloutSpec, int(s.rolloutStatus.RolloutOriginalSize),
int(s.rolloutStatus.RolloutTargetSize), int(s.rolloutStatus.CurrentBatch))
s.cloneSet.Spec.Replicas = pointer.Int32Ptr(int32(newPodTarget))
// patch the Cloneset
if err := s.client.Patch(ctx, s.cloneSet, clonePatch, client.FieldOwner(s.parentController.GetUID())); err != nil {
s.recorder.Event(s.parentController, event.Warning("Failed to update the cloneset to upgrade", err))
s.rolloutStatus.RolloutRetry(err.Error())
return false, nil
}
// record the scale
klog.InfoS("scale one batch", "current batch", s.rolloutStatus.CurrentBatch)
s.recorder.Event(s.parentController, event.Normal("Batch Rollout",
fmt.Sprintf("Submitted scale request for batch %d", s.rolloutStatus.CurrentBatch)))
s.rolloutStatus.UpgradedReplicas = int32(newPodTarget)
return true, nil
}
// CheckOneBatchPods checks to see if the pods are scaled according to the rollout plan
func (s *CloneSetScaleController) CheckOneBatchPods(ctx context.Context) (bool, error) {
err := s.fetchCloneSet(ctx)
if err != nil {
s.rolloutStatus.RolloutRetry(err.Error())
// nolint:nilerr
return false, nil
}
newPodTarget := calculateNewBatchTarget(s.rolloutSpec, int(s.rolloutStatus.RolloutOriginalSize),
int(s.rolloutStatus.RolloutTargetSize), int(s.rolloutStatus.CurrentBatch))
// get the number of ready pod from cloneset
readyPodCount := int(s.cloneSet.Status.ReadyReplicas)
currentBatch := s.rolloutSpec.RolloutBatches[s.rolloutStatus.CurrentBatch]
unavail := 0
if currentBatch.MaxUnavailable != nil {
unavail, _ = intstr.GetValueFromIntOrPercent(currentBatch.MaxUnavailable,
int(s.rolloutStatus.RolloutOriginalSize), true)
}
klog.InfoS("checking the scaling progress", "current batch", s.rolloutStatus.CurrentBatch,
"new pod count target", newPodTarget, "new ready pod count", readyPodCount,
"max unavailable pod allowed", unavail)
s.rolloutStatus.UpgradedReadyReplicas = int32(readyPodCount)
targetReached := false
// nolint
if s.rolloutStatus.RolloutOriginalSize <= s.rolloutStatus.RolloutTargetSize && unavail+readyPodCount >= newPodTarget {
targetReached = true
} else if s.rolloutStatus.RolloutOriginalSize > s.rolloutStatus.RolloutTargetSize && readyPodCount <= newPodTarget {
targetReached = true
}
if targetReached {
// record the successful upgrade
klog.InfoS("the current batch is ready", "current batch", s.rolloutStatus.CurrentBatch,
"target", newPodTarget, "readyPodCount", readyPodCount, "max unavailable allowed", unavail)
s.recorder.Event(s.parentController, event.Normal("Batch Available",
fmt.Sprintf("Batch %d is available", s.rolloutStatus.CurrentBatch)))
return true, nil
}
// continue to verify
klog.InfoS("the batch is not ready yet", "current batch", s.rolloutStatus.CurrentBatch,
"target", newPodTarget, "readyPodCount", readyPodCount, "max unavailable allowed", unavail)
s.rolloutStatus.RolloutRetry("the batch is not ready yet")
return false, nil
}
// FinalizeOneBatch makes sure that the current batch and replica count in the status are valid
func (s *CloneSetScaleController) FinalizeOneBatch(ctx context.Context) (bool, error) {
status := s.rolloutStatus
spec := s.rolloutSpec
if spec.BatchPartition != nil && *spec.BatchPartition < status.CurrentBatch {
err := fmt.Errorf("the current batch value in the status is greater than the batch partition")
klog.ErrorS(err, "we have moved past the user defined partition", "user specified batch partition",
*spec.BatchPartition, "current batch we are working on", status.CurrentBatch)
return false, err
}
// special-case when the original size equals the target size
if s.rolloutStatus.RolloutOriginalSize == s.rolloutStatus.RolloutTargetSize {
return true, nil
}
// we just make sure the target is right
finishedPodCount := int(status.UpgradedReplicas)
currentBatch := int(status.CurrentBatch)
// calculate the pod target just before the current batch
preBatchTarget := calculateNewBatchTarget(s.rolloutSpec, int(s.rolloutStatus.RolloutOriginalSize),
int(s.rolloutStatus.RolloutTargetSize), currentBatch-1)
// calculate the pod target with the current batch
curBatchTarget := calculateNewBatchTarget(s.rolloutSpec, int(s.rolloutStatus.RolloutOriginalSize),
int(s.rolloutStatus.RolloutTargetSize), currentBatch)
// the recorded number should be at least as large as all the pods before the current batch
if finishedPodCount < util.Min(preBatchTarget, curBatchTarget) {
err := fmt.Errorf("the upgraded replica in the status is less than the lower bound")
klog.ErrorS(err, "rollout status inconsistent", "existing pod target", finishedPodCount,
"the lower bound", util.Min(preBatchTarget, curBatchTarget))
return false, err
}
// the recorded number should be no more than all the pods up to and including the active batch
if finishedPodCount > util.Max(preBatchTarget, curBatchTarget) {
err := fmt.Errorf("the upgraded replica in the status is greater than the upper bound")
klog.ErrorS(err, "rollout status inconsistent", "existing pod target", finishedPodCount,
"the upper bound", util.Max(preBatchTarget, curBatchTarget))
return false, err
}
return true, nil
}
// Finalize makes sure the Cloneset is scaled and ready to use
func (s *CloneSetScaleController) Finalize(ctx context.Context, succeed bool) bool {
if err := s.fetchCloneSet(ctx); err != nil {
s.rolloutStatus.RolloutRetry(err.Error())
return false
}
clonePatch := client.MergeFrom(s.cloneSet.DeepCopyObject())
// remove the parent controller from the resources' owner list
var newOwnerList []metav1.OwnerReference
for _, owner := range s.cloneSet.GetOwnerReferences() {
if owner.Kind == v1beta1.AppRolloutKind && owner.APIVersion == v1beta1.SchemeGroupVersion.String() {
continue
}
newOwnerList = append(newOwnerList, owner)
}
s.cloneSet.SetOwnerReferences(newOwnerList)
// patch the CloneSet
if err := s.client.Patch(ctx, s.cloneSet, clonePatch, client.FieldOwner(s.parentController.GetUID())); err != nil {
s.recorder.Event(s.parentController, event.Warning("Failed to finalize the cloneset", err))
s.rolloutStatus.RolloutRetry(err.Error())
return false
}
// mark the resource finalized
s.recorder.Event(s.parentController, event.Normal("Scale Finalized",
fmt.Sprintf("Scale resource are finalized, succeed := %t", succeed)))
return true
}


@@ -20,13 +20,14 @@ import (
"fmt"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/klog/v2"
"github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
)
// VerifySumOfBatchSizes verifies that the sum of all the batch replicas is valid given the total replicas
// verifyBatchesWithRollout verifies that the sum of all the batch replicas is valid given the total replicas
// each batch replica can be absolute or a percentage
func VerifySumOfBatchSizes(rolloutSpec *v1alpha1.RolloutPlan, totalReplicas int32) error {
func verifyBatchesWithRollout(rolloutSpec *v1alpha1.RolloutPlan, totalReplicas int32) error {
// if not set, the sum of all the batch sizes minus the last batch cannot be more than the totalReplicas
totalRollout := 0
for i := 0; i < len(rolloutSpec.RolloutBatches)-1; i++ {
@@ -52,3 +53,70 @@ func VerifySumOfBatchSizes(rolloutSpec *v1alpha1.RolloutPlan, totalReplicas int3
}
return nil
}
// verifyBatchesWithScale verifies that executing batches finally reach the target size starting from original size
func verifyBatchesWithScale(rolloutSpec *v1alpha1.RolloutPlan, originalSize, targetSize int) error {
totalRollout := originalSize
for i := 0; i < len(rolloutSpec.RolloutBatches)-1; i++ {
rb := rolloutSpec.RolloutBatches[i]
if targetSize > originalSize {
batchSize, _ := intstr.GetValueFromIntOrPercent(&rb.Replicas, targetSize-originalSize, true)
totalRollout += batchSize
} else {
batchSize, _ := intstr.GetValueFromIntOrPercent(&rb.Replicas, originalSize-targetSize, true)
totalRollout -= batchSize
}
}
if targetSize > originalSize {
if totalRollout >= targetSize {
return fmt.Errorf("the rollout plan increased too much, total batch size = %d, target size = %d",
totalRollout, targetSize)
}
} else if targetSize < originalSize {
if totalRollout <= targetSize {
return fmt.Errorf("the rollout plan reduced too much, total batch size = %d, target size = %d",
totalRollout, targetSize)
}
}
// include the last batch if it has an int value
// we ignore the last batch percentage since it is very likely to cause rounding errors
lastBatch := rolloutSpec.RolloutBatches[len(rolloutSpec.RolloutBatches)-1]
if lastBatch.Replicas.Type == intstr.Int {
if targetSize > originalSize {
totalRollout += int(lastBatch.Replicas.IntVal)
} else {
totalRollout -= int(lastBatch.Replicas.IntVal)
}
// the totals should now match exactly
if totalRollout != targetSize {
return fmt.Errorf("the rollout plan batch size mismatch, total batch size = %d, targetSize size = %d",
totalRollout, targetSize)
}
}
return nil
}
func calculateNewBatchTarget(rolloutSpec *v1alpha1.RolloutPlan, originalSize, targetSize, currentBatch int) int {
newPodTarget := originalSize
if currentBatch == len(rolloutSpec.RolloutBatches)-1 {
newPodTarget = targetSize
// handle the last batch specially: ignore the rest of the batch in case of rounding errors
klog.InfoS("use the target size as the total pod target for the last rolling batch",
"current batch", currentBatch, "new pod target", newPodTarget)
return newPodTarget
}
for i := 0; i <= currentBatch && i < len(rolloutSpec.RolloutBatches); i++ {
if targetSize > originalSize {
batchSize, _ := intstr.GetValueFromIntOrPercent(&rolloutSpec.RolloutBatches[i].Replicas, targetSize-originalSize,
true)
newPodTarget += batchSize
} else {
batchSize, _ := intstr.GetValueFromIntOrPercent(&rolloutSpec.RolloutBatches[i].Replicas, originalSize-targetSize,
true)
newPodTarget -= batchSize
}
}
klog.InfoS("calculated the new pod target", "current batch", currentBatch,
"new pod target", newPodTarget)
return newPodTarget
}


@@ -0,0 +1,226 @@
/*
Copyright 2021 The KubeVela Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package workloads
import (
"testing"
"k8s.io/apimachinery/pkg/util/intstr"
"github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
)
var (
rolloutPercentSpec = &v1alpha1.RolloutPlan{
RolloutBatches: []v1alpha1.RolloutBatch{
{
Replicas: intstr.FromString("20%"),
},
{
Replicas: intstr.FromString("40%"),
},
{
Replicas: intstr.FromString("30%"),
},
{
Replicas: intstr.FromString("10%"),
},
},
}
rolloutNumericSpec = &v1alpha1.RolloutPlan{
RolloutBatches: []v1alpha1.RolloutBatch{
{
Replicas: intstr.FromInt(1),
},
{
Replicas: intstr.FromInt(2),
},
{
Replicas: intstr.FromInt(4),
},
{
Replicas: intstr.FromInt(3),
},
},
}
rolloutMixedSpec = &v1alpha1.RolloutPlan{
RolloutBatches: []v1alpha1.RolloutBatch{
{
Replicas: intstr.FromInt(1),
},
{
Replicas: intstr.FromString("20%"),
},
{
Replicas: intstr.FromString("50%"),
},
{
Replicas: intstr.FromInt(2),
},
},
}
rolloutRelaxSpec = &v1alpha1.RolloutPlan{
RolloutBatches: []v1alpha1.RolloutBatch{
{
Replicas: intstr.FromString("20%"),
},
{
Replicas: intstr.FromInt(2),
},
{
Replicas: intstr.FromInt(2),
},
{
Replicas: intstr.FromString("50%"),
},
},
}
)
func TestCalculateNewBatchTarget(t *testing.T) {
// test common rollout
if got := calculateNewBatchTarget(rolloutMixedSpec, 0, 10, 0); got != 1 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 1)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 0, 10, 1); got != 3 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 3)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 0, 10, 2); got != 8 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 8)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 0, 10, 3); got != 10 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 10)
}
// test scale up
if got := calculateNewBatchTarget(rolloutMixedSpec, 2, 12, 0); got != 3 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 3)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 3, 13, 1); got != 6 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 6)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 4, 14, 2); got != 12 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 12)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 5, 15, 3); got != 15 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 15)
}
// test scale down
if got := calculateNewBatchTarget(rolloutMixedSpec, 10, 0, 0); got != 9 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 9)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 20, 5, 1); got != 16 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 16)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 30, 15, 2); got != 18 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 18)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 40, 10, 3); got != 10 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 10)
}
}
func TestCalculateNewBatchTargetCornerCases(t *testing.T) {
// test batch size overflow
if got := calculateNewBatchTarget(rolloutMixedSpec, 2, 12, 4); got != 12 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 12)
}
if got := calculateNewBatchTarget(rolloutMixedSpec, 13, 3, 5); got != 3 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 3)
}
// numeric value doesn't match the range
if got := calculateNewBatchTarget(rolloutNumericSpec, 16, 10, 0); got != 15 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 15)
}
if got := calculateNewBatchTarget(rolloutPercentSpec, 10, 10, 2); got != 10 {
t.Errorf("calculateNewBatchTarget() = %v, want %v", got, 10)
}
}
func TestVerifyBatchesWithRollout(t *testing.T) {
if err := verifyBatchesWithRollout(rolloutMixedSpec, 10); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithRollout(rolloutMixedSpec, 12); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithRollout(rolloutMixedSpec, 13); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithRollout(rolloutMixedSpec, 20); err == nil {
t.Errorf("verifyBatchesWithRollout() = %v, want error", nil)
}
if err := verifyBatchesWithRollout(rolloutMixedSpec, 6); err == nil {
t.Errorf("verifyBatchesWithRollout() = %v, want error", nil)
}
// last batch as a percentage always succeeds
if err := verifyBatchesWithRollout(rolloutRelaxSpec, 10); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithRollout(rolloutRelaxSpec, 100); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithRollout(rolloutRelaxSpec, 31); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
// last can't be zero
if err := verifyBatchesWithRollout(rolloutRelaxSpec, 6); err == nil {
t.Errorf("verifyBatchesWithRollout() = %v, want error", nil)
}
// test hard number
if err := verifyBatchesWithRollout(rolloutNumericSpec, 6); err == nil {
t.Errorf("verifyBatchesWithRollout() = %v, want error", nil)
}
if err := verifyBatchesWithRollout(rolloutNumericSpec, 11); err == nil {
t.Errorf("verifyBatchesWithRollout() = %v, want error", nil)
}
if err := verifyBatchesWithRollout(rolloutNumericSpec, 10); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
}
func Test_verifyBatchesWithScale(t *testing.T) {
if err := verifyBatchesWithScale(rolloutMixedSpec, 10, 0); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithScale(rolloutMixedSpec, 30, 42); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithScale(rolloutMixedSpec, 13, 26); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithScale(rolloutMixedSpec, 13, 20); err == nil {
t.Errorf("verifyBatchesWithRollout() = %v, want error", nil)
}
if err := verifyBatchesWithScale(rolloutMixedSpec, 42, 10); err == nil {
t.Errorf("verifyBatchesWithRollout() = %v, want error", nil)
}
// test hard batch numbers
if err := verifyBatchesWithScale(rolloutNumericSpec, 22, 32); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithScale(rolloutNumericSpec, 22, 12); err != nil {
t.Errorf("verifyBatchesWithRollout() = %v, want nil", err)
}
if err := verifyBatchesWithScale(rolloutNumericSpec, 42, 30); err == nil {
t.Errorf("verifyBatchesWithRollout() = %v, want error", nil)
}
}


@@ -18,6 +18,15 @@ package workloads
import (
"context"
"github.com/crossplane/crossplane-runtime/pkg/event"
kruise "github.com/openkruise/kruise-api/apps/v1alpha1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/standard.oam.dev/v1alpha1"
"github.com/oam-dev/kubevela/pkg/oam"
)
// WorkloadController is the interface that all types of workload controllers implement
@@ -50,3 +59,44 @@ type WorkloadController interface {
// and the finalize rollout web hooks will be called after this call succeeds
Finalize(ctx context.Context, succeed bool) bool
}
// cloneSetController holds the fields needed to handle CloneSet type workloads
type cloneSetController struct {
client client.Client
recorder event.Recorder
parentController oam.Object
rolloutSpec *v1alpha1.RolloutPlan
rolloutStatus *v1alpha1.RolloutStatus
workloadNamespacedName types.NamespacedName
cloneSet *kruise.CloneSet
}
// size fetches the CloneSet and returns its desired replica count (not the actual number of pods)
func (c *cloneSetController) size(ctx context.Context) (int32, error) {
if c.cloneSet == nil {
err := c.fetchCloneSet(ctx)
if err != nil {
return 0, err
}
}
// default is 1
if c.cloneSet.Spec.Replicas == nil {
return 1, nil
}
return *c.cloneSet.Spec.Replicas, nil
}
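The nil check in size() is the standard pattern for Kubernetes `*int32` spec fields, where an unset replicas value means 1. A minimal sketch of just that defaulting (`replicasOrDefault` is a hypothetical helper name):

```go
package main

import "fmt"

// replicasOrDefault mirrors the nil check in size() above: a CloneSet's
// spec.replicas is a *int32, and an unset (nil) value defaults to 1.
func replicasOrDefault(replicas *int32) int32 {
	if replicas == nil {
		return 1
	}
	return *replicas
}

func main() {
	var unset *int32
	three := int32(3)
	fmt.Println(replicasOrDefault(unset), replicasOrDefault(&three)) // 1 3
}
```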
func (c *cloneSetController) fetchCloneSet(ctx context.Context) error {
// get the cloneSet
workload := kruise.CloneSet{}
err := c.client.Get(ctx, c.workloadNamespacedName, &workload)
if err != nil {
if !apierrors.IsNotFound(err) {
c.recorder.Event(c.parentController, event.Warning("Failed to get the Cloneset", err))
}
return err
}
c.cloneSet = &workload
return nil
}


@@ -87,7 +87,7 @@ func (c *DeploymentController) VerifySpec(ctx context.Context) (bool, error) {
}
// record the size and we will use this value to drive the rest of the batches
// we do not handle scale case in this controller
c.rolloutStatus.RolloutTargetTotalSize = targetTotalReplicas
c.rolloutStatus.RolloutTargetSize = targetTotalReplicas
// make sure that the updateRevision is different from what we have already done
targetHash, verifyErr := utils.ComputeSpecHash(c.targetDeploy.Spec)
@@ -175,13 +175,13 @@ func (c *DeploymentController) RolloutOneBatchPods(ctx context.Context) (bool, e
rolloutStrategy = c.rolloutSpec.RolloutStrategy
}
// Determine if we are the first or the second part of the current batch rollout
if currentSizeSetting == c.rolloutStatus.RolloutTargetTotalSize {
if currentSizeSetting == c.rolloutStatus.RolloutTargetSize {
// we need to finish the first part of the rollout,
// will always return false to not move to the next phase
return false, c.rolloutBatchFirstHalf(ctx, rolloutStrategy)
}
// we are at the second half
targetSize := c.calculateCurrentTarget(c.rolloutStatus.RolloutTargetTotalSize)
targetSize := c.calculateCurrentTarget(c.rolloutStatus.RolloutTargetSize)
if !c.rolloutBatchSecondHalf(ctx, rolloutStrategy, targetSize) {
return false, nil
}
@@ -206,8 +206,8 @@ func (c *DeploymentController) CheckOneBatchPods(ctx context.Context) (bool, err
readyTargetPodCount := c.targetDeploy.Status.ReadyReplicas
sourcePodCount := c.sourceDeploy.Status.Replicas
currentBatch := c.rolloutSpec.RolloutBatches[c.rolloutStatus.CurrentBatch]
targetGoal := c.calculateCurrentTarget(c.rolloutStatus.RolloutTargetTotalSize)
sourceGoal := c.calculateCurrentSource(c.rolloutStatus.RolloutTargetTotalSize)
targetGoal := c.calculateCurrentTarget(c.rolloutStatus.RolloutTargetSize)
sourceGoal := c.calculateCurrentSource(c.rolloutStatus.RolloutTargetSize)
maxUnavail := 0
if currentBatch.MaxUnavailable != nil {
maxUnavail, _ = intstr.GetValueFromIntOrPercent(currentBatch.MaxUnavailable, int(targetGoal), true)
@@ -219,7 +219,7 @@ func (c *DeploymentController) CheckOneBatchPods(ctx context.Context) (bool, err
// make sure that the source deployment has the correct pods before moving the target
// and the total we could overshoot in revert cases
if sourcePodCount != sourceGoal ||
int32(maxUnavail)+readyTargetPodCount+sourcePodCount < c.rolloutStatus.RolloutTargetTotalSize {
int32(maxUnavail)+readyTargetPodCount+sourcePodCount < c.rolloutStatus.RolloutTargetSize {
// continue to verify
klog.InfoS("the batch is not ready yet", "current batch", c.rolloutStatus.CurrentBatch)
c.rolloutStatus.RolloutRetry(fmt.Sprintf(
@@ -300,7 +300,7 @@ func (c *DeploymentController) calculateInitialTargetSize(ctx context.Context) b
// check if the replicas in all the rollout batches add up to the right number
func (c *DeploymentController) verifyRolloutBatchReplicaValue(totalReplicas int32) error {
// use a common function to check if the sum of all the batches can match the Deployment size
err := VerifySumOfBatchSizes(c.rolloutSpec, totalReplicas)
err := verifyBatchesWithRollout(c.rolloutSpec, totalReplicas)
if err != nil {
return err
}
@@ -352,7 +352,7 @@ func (c *DeploymentController) claimDeployment(ctx context.Context, deploy *apps
func (c *DeploymentController) rolloutBatchFirstHalf(ctx context.Context, rolloutStrategy v1alpha1.RolloutStrategyType) error {
if rolloutStrategy == v1alpha1.IncreaseFirstRolloutStrategyType {
// set the target replica first which should increase its size
if err := c.patchDeployment(ctx, c.calculateCurrentTarget(c.rolloutStatus.RolloutTargetTotalSize),
if err := c.patchDeployment(ctx, c.calculateCurrentTarget(c.rolloutStatus.RolloutTargetSize),
&c.targetDeploy); err != nil {
c.rolloutStatus.RolloutRetry(err.Error())
}
@@ -362,7 +362,7 @@ func (c *DeploymentController) rolloutBatchFirstHalf(ctx context.Context, rollou
}
if rolloutStrategy == v1alpha1.DecreaseFirstRolloutStrategyType {
// set the source replicas first which should shrink its size
if err := c.patchDeployment(ctx, c.calculateCurrentSource(c.rolloutStatus.RolloutTargetTotalSize),
if err := c.patchDeployment(ctx, c.calculateCurrentSource(c.rolloutStatus.RolloutTargetSize),
&c.sourceDeploy); err != nil {
c.rolloutStatus.RolloutRetry(err.Error())
}
@@ -386,7 +386,7 @@ func (c *DeploymentController) rolloutBatchSecondHalf(ctx context.Context,
// make sure that the target deployment has enough ready pods before reducing the source
if c.targetDeploy.Status.ReadyReplicas+int32(maxUnavail) >= targetSize {
// set the source replicas now which should shrink its size
if err = c.patchDeployment(ctx, c.calculateCurrentSource(c.rolloutStatus.RolloutTargetTotalSize),
if err = c.patchDeployment(ctx, c.calculateCurrentSource(c.rolloutStatus.RolloutTargetSize),
&c.sourceDeploy); err != nil {
c.rolloutStatus.RolloutRetry(err.Error())
return false
@@ -401,7 +401,7 @@ func (c *DeploymentController) rolloutBatchSecondHalf(ctx context.Context,
}
} else if rolloutStrategy == v1alpha1.DecreaseFirstRolloutStrategyType {
// make sure that the source deployment has the correct pods before moving the target
sourceSize := c.calculateCurrentSource(c.rolloutStatus.RolloutTargetTotalSize)
sourceSize := c.calculateCurrentSource(c.rolloutStatus.RolloutTargetSize)
if c.sourceDeploy.Status.Replicas == sourceSize {
// we can increase the target deployment as soon as the source deployment's replica is correct
// no need to wait for them to be ready


@@ -19,7 +19,6 @@ package appdeployment
import (
"context"
"encoding/json"
"fmt"
"time"
"github.com/crossplane/crossplane-runtime/pkg/logging"
@@ -509,9 +508,6 @@ func (r *Reconciler) applyTraffic(ctx context.Context, appd *oamcore.AppDeployme
}
}
for _, svc := range svcs {
fmt.Println("haha", svc.Name, svc.Namespace)
}
for clusterName := range affectedClusters {
var kubecli client.Client
if isHostCluster(clusterName) {


@@ -145,6 +145,7 @@ func (r *Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
err = handler.handleResourceTracker(ctx, comps, ac)
if err != nil {
applog.Error(err, "[Handle resourceTracker]")
app.Status.SetConditions(errorCondition("Handle resourceTracker", err))
return handler.handleErr(err)
}
@@ -180,6 +181,12 @@ func (r *Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
app.Status.Services = appCompStatus
app.Status.SetConditions(readyCondition("HealthCheck"))
app.Status.Phase = common.ApplicationRunning
err = handler.garbageCollection(ctx)
if err != nil {
applog.Error(err, "[Garbage collection]")
app.Status.SetConditions(errorCondition("GarbageCollection", err))
return handler.handleErr(err)
}
// Gather status of components
var refComps []v1alpha1.TypedReference
for _, comp := range comps {


@@ -166,6 +166,88 @@ var _ = Describe("Test application controller finalizer logic", func() {
checkRt := new(v1beta1.ResourceTracker)
Expect(k8sClient.Get(ctx, getTrackerKey(checkApp.Namespace, checkApp.Name), checkRt)).Should(util.NotFoundMatcher{})
})
It("Test cross namespace workload, then update the app to change the namespace", func() {
appName := "app-3"
appKey := types.NamespacedName{Namespace: namespace, Name: appName}
app := getApp(appName, namespace, "cross-worker")
Expect(k8sClient.Create(ctx, app)).Should(BeNil())
By("Create a cross workload app")
_, err := reconciler.Reconcile(ctrl.Request{NamespacedName: appKey})
Expect(err).Should(BeNil())
checkApp := &v1beta1.Application{}
Expect(k8sClient.Get(ctx, appKey, checkApp)).Should(BeNil())
Expect(checkApp.Status.Phase).Should(Equal(common.ApplicationRunning))
Expect(len(checkApp.Finalizers)).Should(BeEquivalentTo(0))
rt := &v1beta1.ResourceTracker{}
Expect(k8sClient.Get(ctx, getTrackerKey(checkApp.Namespace, checkApp.Name), rt)).Should(BeNil())
_, err = reconciler.Reconcile(ctrl.Request{NamespacedName: appKey})
Expect(err).Should(BeNil())
checkApp = new(v1beta1.Application)
Expect(k8sClient.Get(ctx, appKey, checkApp)).Should(BeNil())
Expect(len(checkApp.Finalizers)).Should(BeEquivalentTo(1))
Expect(checkApp.Finalizers[0]).Should(BeEquivalentTo(resourceTrackerFinalizer))
Expect(checkApp.Status.ResourceTracker.UID).Should(BeEquivalentTo(rt.UID))
Expect(len(rt.Status.TrackedResources)).Should(BeEquivalentTo(1))
By("Update the app, set type to normal-worker")
checkApp.Spec.Components[0].Type = "normal-worker"
Expect(k8sClient.Update(ctx, checkApp)).Should(BeNil())
_, err = reconciler.Reconcile(ctrl.Request{NamespacedName: appKey})
Expect(err).Should(BeNil())
checkApp = new(v1beta1.Application)
Expect(k8sClient.Get(ctx, appKey, checkApp)).Should(BeNil())
Expect(checkApp.Status.ResourceTracker).Should(BeNil())
Expect(k8sClient.Get(ctx, getTrackerKey(checkApp.Namespace, checkApp.Name), rt)).Should(util.NotFoundMatcher{})
Expect(k8sClient.Delete(ctx, checkApp)).Should(BeNil())
_, err = reconciler.Reconcile(ctrl.Request{NamespacedName: appKey})
Expect(err).Should(BeNil())
})
It("Test cross namespace workload and trait, then update the app to delete the trait", func() {
appName := "app-4"
appKey := types.NamespacedName{Namespace: namespace, Name: appName}
app := getApp(appName, namespace, "cross-worker")
app.Spec.Components[0].Traits = []v1beta1.ApplicationTrait{
{
Type: "cross-scaler",
Properties: runtime.RawExtension{Raw: []byte(`{"replicas": 1}`)},
},
}
Expect(k8sClient.Create(ctx, app)).Should(BeNil())
By("Create a cross workload trait app")
_, err := reconciler.Reconcile(ctrl.Request{NamespacedName: appKey})
Expect(err).Should(BeNil())
checkApp := &v1beta1.Application{}
Expect(k8sClient.Get(ctx, appKey, checkApp)).Should(BeNil())
Expect(checkApp.Status.Phase).Should(Equal(common.ApplicationRunning))
Expect(len(checkApp.Finalizers)).Should(BeEquivalentTo(0))
rt := &v1beta1.ResourceTracker{}
Expect(k8sClient.Get(ctx, getTrackerKey(checkApp.Namespace, checkApp.Name), rt)).Should(BeNil())
_, err = reconciler.Reconcile(ctrl.Request{NamespacedName: appKey})
Expect(err).Should(BeNil())
checkApp = new(v1beta1.Application)
Expect(k8sClient.Get(ctx, appKey, checkApp)).Should(BeNil())
Expect(len(checkApp.Finalizers)).Should(BeEquivalentTo(1))
Expect(checkApp.Finalizers[0]).Should(BeEquivalentTo(resourceTrackerFinalizer))
Expect(checkApp.Status.ResourceTracker.UID).Should(BeEquivalentTo(rt.UID))
Expect(len(rt.Status.TrackedResources)).Should(BeEquivalentTo(2))
By("Update the app, remove the trait")
checkApp.Spec.Components[0].Traits = nil
Expect(k8sClient.Update(ctx, checkApp)).Should(BeNil())
_, err = reconciler.Reconcile(ctrl.Request{NamespacedName: appKey})
Expect(err).Should(BeNil())
rt = &v1beta1.ResourceTracker{}
checkApp = new(v1beta1.Application)
Expect(k8sClient.Get(ctx, appKey, checkApp)).Should(BeNil())
Expect(k8sClient.Get(ctx, getTrackerKey(checkApp.Namespace, checkApp.Name), rt)).Should(BeNil())
Expect(checkApp.Status.ResourceTracker.UID).Should(BeEquivalentTo(rt.UID))
Expect(len(rt.Status.TrackedResources)).Should(BeEquivalentTo(1))
Expect(k8sClient.Delete(ctx, checkApp)).Should(BeNil())
_, err = reconciler.Reconcile(ctrl.Request{NamespacedName: appKey})
Expect(err).Should(BeNil())
Expect(k8sClient.Get(ctx, getTrackerKey(checkApp.Namespace, checkApp.Name), rt)).Should(util.NotFoundMatcher{})
})
})
var _ = Describe("Test finalizer related func", func() {
@@ -186,46 +268,6 @@ var _ = Describe("Test finalizer related func", func() {
By("[TEST] Clean up resources after an integration test")
})
It("Test getResourceTrackerAndOwnReference func", func() {
app := getApp("app-1", namespace, "worker")
handler = appHandler{
r: reconciler,
app: app,
logger: reconciler.Log.WithValues("application", "finalizer-func-test"),
}
checkRt := new(v1beta1.ResourceTracker)
Expect(k8sClient.Get(ctx, getTrackerKey(namespace, app.Name), checkRt)).Should(util.NotFoundMatcher{})
rt, owner, err := handler.getResourceTrackerAndOwnReference(ctx)
Expect(err).Should(BeNil())
Expect(rt.UID).Should(BeEquivalentTo(owner.UID))
Expect(owner.Kind).Should(BeEquivalentTo(v1beta1.ResourceTrackerKind))
checkRt = new(v1beta1.ResourceTracker)
Expect(k8sClient.Get(ctx, getTrackerKey(namespace, app.Name), checkRt)).Should(BeNil())
Expect(checkRt.UID).Should(BeEquivalentTo(rt.UID))
Expect(k8sClient.Delete(ctx, checkRt)).Should(BeNil())
})
It("Test getResourceTrackerAndOwnReference func with an already existing resourceTracker", func() {
app := getApp("app-2", namespace, "worker")
handler = appHandler{
r: reconciler,
app: app,
logger: reconciler.Log.WithValues("application", "finalizer-func-test"),
}
rt := &v1beta1.ResourceTracker{
ObjectMeta: metav1.ObjectMeta{
Name: namespace + "-" + app.GetName(),
},
}
Expect(k8sClient.Create(ctx, rt)).Should(BeNil())
checkRt, owner, err := handler.getResourceTrackerAndOwnReference(ctx)
Expect(err).Should(BeNil())
Expect(rt.UID).Should(BeEquivalentTo(checkRt.UID))
Expect(owner.Kind).Should(BeEquivalentTo(v1beta1.ResourceTrackerKind))
Expect(checkRt.UID).Should(BeEquivalentTo(owner.UID))
Expect(k8sClient.Delete(ctx, checkRt)).Should(BeNil())
})
It("Test finalizeResourceTracker func with need update ", func() {
app := getApp("app-3", namespace, "worker")
rt := &v1beta1.ResourceTracker{


@@ -20,6 +20,7 @@ import (
"context"
"fmt"
"strconv"
"strings"
"time"
runtimev1alpha1 "github.com/crossplane/crossplane-runtime/apis/core/v1alpha1"
@@ -31,6 +32,7 @@ import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
ctypes "k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/klog/v2"
@@ -41,6 +43,7 @@ import (
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/pkg/appfile"
"github.com/oam-dev/kubevela/pkg/controller/core.oam.dev/v1alpha2/applicationconfiguration"
"github.com/oam-dev/kubevela/pkg/controller/utils"
"github.com/oam-dev/kubevela/pkg/dsl/process"
"github.com/oam-dev/kubevela/pkg/oam"
@@ -67,13 +70,15 @@ func readyCondition(tpy string) runtimev1alpha1.Condition {
}
type appHandler struct {
r *Reconciler
app *v1beta1.Application
appfile *appfile.Appfile
logger logr.Logger
inplace bool
isNewRevision bool
revisionHash string
r *Reconciler
app *v1beta1.Application
appfile *appfile.Appfile
logger logr.Logger
inplace bool
isNewRevision bool
revisionHash string
acrossNamespaceResources []v1beta1.TypedReference
resourceTracker *v1beta1.ResourceTracker
}
// setInplace will mark if the application should upgrade the workload within the same instance(name never changed)
@@ -108,6 +113,12 @@ func (h *appHandler) apply(ctx context.Context, appRev *v1beta1.ApplicationRevis
Controller: pointer.BoolPtr(true),
}}
if _, exist := h.app.GetAnnotations()[oam.AnnotationAppRollout]; !exist && h.app.Spec.RolloutPlan == nil {
h.setInplace(true)
} else {
h.setInplace(false)
}
// don't create components and AC if revision-only annotation is set
if ac.Annotations[oam.AnnotationAppRevisionOnly] == "true" {
h.FinalizeAppRevision(appRev, ac, comps)
@@ -116,12 +127,21 @@ func (h *appHandler) apply(ctx context.Context, appRev *v1beta1.ApplicationRevis
for _, comp := range comps {
comp.SetOwnerReferences(owners)
needTracker, err := h.checkAndSetResourceTracker(&comp.Spec.Workload)
if err != nil {
return err
}
newComp := comp.DeepCopy()
// newComp will be updated in place; createOrUpdateComponent returns the revision name instead of the component name
revisionName, err := h.createOrUpdateComponent(ctx, newComp)
if err != nil {
return err
}
if needTracker {
if err := h.recodeTrackedWorkload(comp, revisionName); err != nil {
return err
}
}
// find the ACC that contains this component
for i := 0; i < len(ac.Spec.Components); i++ {
// update the AC using the component revision instead of component name
@@ -129,6 +149,9 @@ func (h *appHandler) apply(ctx context.Context, appRev *v1beta1.ApplicationRevis
if ac.Spec.Components[i].ComponentName == newComp.Name {
ac.Spec.Components[i].RevisionName = revisionName
ac.Spec.Components[i].ComponentName = ""
if err := h.checkResourceTrackerForTrait(ctx, ac.Spec.Components[i], newComp.Name); err != nil {
return err
}
}
}
if comp.Spec.Helm != nil {
@@ -148,11 +171,9 @@ func (h *appHandler) apply(ctx context.Context, appRev *v1beta1.ApplicationRevis
// the rollout will create AppContext which will launch the real K8s resources.
// Otherwise, we should create/update the appContext here when there is no rollout controller to take care of new versions
// In this case, the workload should update with the annotation `app.oam.dev/inplace-upgrade=true`
if _, exist := h.app.GetAnnotations()[oam.AnnotationAppRollout]; !exist && h.app.Spec.RolloutPlan == nil {
h.setInplace(true)
if h.inplace {
return h.createOrUpdateAppContext(ctx, owners)
}
h.setInplace(false)
return nil
}
@@ -399,90 +420,30 @@ func (h *appHandler) applyHelmModuleResources(ctx context.Context, comp *v1alpha
return nil
}
// handleResourceTracker checks the namespace of all components and traits; if a namespace differs from the application's, the tracker will own them
func (h *appHandler) handleResourceTracker(ctx context.Context, components []*v1alpha2.Component, ac *v1alpha2.ApplicationConfiguration) error {
ref := new(metav1.OwnerReference)
// resourceTracker caches the resourceTracker to avoid fetching it from k8s every time
resourceTracker := new(v1beta1.ResourceTracker)
// checkAndSetResourceTracker checks whether the resource's namespace differs from the application's; if so, it sets
// the resourceTracker as the resource's ownerReference
func (h *appHandler) checkAndSetResourceTracker(resource *runtime.RawExtension) (bool, error) {
needTracker := false
for i, c := range components {
u, err := oamutil.RawExtension2Unstructured(&c.Spec.Workload)
if err != nil {
return err
}
if checkResourceDiffWithApp(u, h.app.Namespace) {
needTracker = true
if len(resourceTracker.Name) == 0 {
resourceTracker, ref, err = h.getResourceTrackerAndOwnReference(ctx)
if err != nil {
return err
}
}
u.SetOwnerReferences([]metav1.OwnerReference{*ref})
raw := oamutil.Object2RawExtension(u)
components[i].Spec.Workload = raw
}
u, err := oamutil.RawExtension2Unstructured(resource)
if err != nil {
return false, err
}
for _, acComponent := range ac.Spec.Components {
for i, t := range acComponent.Traits {
u, err := oamutil.RawExtension2Unstructured(&t.Trait)
if err != nil {
return err
}
if checkResourceDiffWithApp(u, h.app.Namespace) {
needTracker = true
if len(resourceTracker.Name) == 0 {
resourceTracker, ref, err = h.getResourceTrackerAndOwnReference(ctx)
if err != nil {
return err
}
}
u.SetOwnerReferences([]metav1.OwnerReference{*ref})
raw := oamutil.Object2RawExtension(u)
acComponent.Traits[i].Trait = raw
}
}
if checkResourceDiffWithApp(u, h.app.Namespace) {
needTracker = true
ref := h.genResourceTrackerOwnerReference()
// set resourceTracker as the ownerReference of workload/trait
u.SetOwnerReferences([]metav1.OwnerReference{*ref})
raw := oamutil.Object2RawExtension(u)
*resource = raw
return needTracker, nil
}
if !needTracker {
h.app.Status.ResourceTracker = nil
// check whether the related resourceTracker exists; if so, delete it
err := h.r.Get(ctx, ctypes.NamespacedName{Name: h.generateResourceTrackerName()}, resourceTracker)
if err == nil {
return h.r.Delete(ctx, resourceTracker)
}
if !apierrors.IsNotFound(err) {
return err
}
return nil
}
h.app.Status.ResourceTracker = &runtimev1alpha1.TypedReference{
Name: resourceTracker.Name,
Kind: v1beta1.ResourceTrackerGroupKind,
APIVersion: v1beta1.ResourceTrackerKindAPIVersion,
UID: resourceTracker.UID}
return nil
return needTracker, nil
}
func (h *appHandler) getResourceTrackerAndOwnReference(ctx context.Context) (*v1beta1.ResourceTracker, *metav1.OwnerReference, error) {
resourceTracker := new(v1beta1.ResourceTracker)
key := ctypes.NamespacedName{Name: h.generateResourceTrackerName()}
err := h.r.Get(ctx, key, resourceTracker)
if err != nil {
if apierrors.IsNotFound(err) {
resourceTracker = &v1beta1.ResourceTracker{
ObjectMeta: metav1.ObjectMeta{
Name: h.generateResourceTrackerName(),
},
}
if err = h.r.Client.Create(ctx, resourceTracker); err != nil {
return nil, nil, err
}
return resourceTracker, metav1.NewControllerRef(resourceTracker, v1beta1.ResourceTrackerKindVersionKind), nil
}
return nil, nil, err
}
return resourceTracker, metav1.NewControllerRef(resourceTracker, v1beta1.ResourceTrackerKindVersionKind), nil
// genResourceTrackerOwnerReference returns the ownerReference pointing at the handler's resourceTracker,
// which is expected to have been created already.
func (h *appHandler) genResourceTrackerOwnerReference() *metav1.OwnerReference {
return metav1.NewControllerRef(h.resourceTracker, v1beta1.ResourceTrackerKindVersionKind)
}
func (h *appHandler) generateResourceTrackerName() string {
@@ -529,3 +490,196 @@ func (h *appHandler) removeResourceTracker(ctx context.Context) (bool, error) {
h.app.Status.ResourceTracker = nil
return true, nil
}
func (h *appHandler) recodeTrackedWorkload(comp *v1alpha2.Component, compRevisionName string) error {
workloadName, err := h.getWorkloadName(comp.Spec.Workload, comp.Name, compRevisionName)
if err != nil {
return err
}
if err = h.recodeTrackedResource(workloadName, comp.Spec.Workload); err != nil {
return err
}
return nil
}
// checkResourceTrackerForTrait checks each trait's namespace; if it differs from the application's, it sets the resourceTracker as the trait's ownerReference
// and records the trait in the handler's acrossNamespaceResources field
func (h *appHandler) checkResourceTrackerForTrait(ctx context.Context, comp v1alpha2.ApplicationConfigurationComponent, compName string) error {
for i, ct := range comp.Traits {
needTracker, err := h.checkAndSetResourceTracker(&comp.Traits[i].Trait)
if err != nil {
return err
}
if needTracker {
traitName, err := h.getTraitName(ctx, compName, comp.Traits[i].DeepCopy(), &ct.Trait)
if err != nil {
return err
}
if err = h.recodeTrackedResource(traitName, ct.Trait); err != nil {
return err
}
}
}
return nil
}
// getWorkloadName generates the workload name. Normally the workload's name is generated by the applicationContext; this func lets the application controller
// get the name of a cross-namespace workload. It follows the same logic the appConfig uses to generate workload names
func (h *appHandler) getWorkloadName(w runtime.RawExtension, componentName string, revisionName string) (string, error) {
workload, err := oamutil.RawExtension2Unstructured(&w)
if err != nil {
return "", err
}
var revision int = 0
if len(revisionName) != 0 {
r, err := utils.ExtractRevision(revisionName)
if err != nil {
return "", err
}
revision = r
}
applicationconfiguration.SetAppWorkloadInstanceName(componentName, workload, revision, strconv.FormatBool(h.inplace))
return workload.GetName(), nil
}
// getTraitName generates the trait name. Normally the trait name is generated by the applicationContext; this func lets the application controller
// get the name of a cross-namespace trait. It follows the same logic the appConfig uses to generate trait names
func (h *appHandler) getTraitName(ctx context.Context, componentName string, ct *v1alpha2.ComponentTrait, t *runtime.RawExtension) (string, error) {
trait, err := oamutil.RawExtension2Unstructured(t)
if err != nil {
return "", err
}
traitDef, err := oamutil.FetchTraitDefinition(ctx, h.r, h.r.dm, trait)
if err != nil {
if !apierrors.IsNotFound(err) {
return "", errors.Wrapf(err, "cannot find trait definition %q %q %q", trait.GetAPIVersion(), trait.GetKind(), trait.GetName())
}
traitDef = oamutil.GetDummyTraitDefinition(trait)
}
traitType := traitDef.Name
if strings.Contains(traitType, ".") {
traitType = strings.Split(traitType, ".")[0]
}
traitName := oamutil.GenTraitName(componentName, ct, traitType)
return traitName, nil
}
// recodeTrackedResource appends a cross-namespace resource to the appHandler's acrossNamespaceResources field
func (h *appHandler) recodeTrackedResource(resourceName string, resource runtime.RawExtension) error {
u, err := oamutil.RawExtension2Unstructured(&resource)
if err != nil {
return err
}
tr := new(v1beta1.TypedReference)
tr.Name = resourceName
tr.Namespace = u.GetNamespace()
tr.APIVersion = u.GetAPIVersion()
tr.Kind = u.GetKind()
h.acrossNamespaceResources = append(h.acrossNamespaceResources, *tr)
return nil
}
// Currently, if workloads or traits are in the same namespace as the application, the applicationContext takes over their garbage collection.
// Here we cover the case in which a cross-namespace component, or one of its cross-namespace traits, is removed from an application.
func (h *appHandler) garbageCollection(ctx context.Context) error {
rt := new(v1beta1.ResourceTracker)
err := h.r.Get(ctx, ctypes.NamespacedName{Name: h.generateResourceTrackerName()}, rt)
if err != nil {
if apierrors.IsNotFound(err) {
// ensure the app status is correct
h.app.Status.ResourceTracker = nil
return nil
}
return err
}
applied := map[v1beta1.TypedReference]bool{}
if len(h.acrossNamespaceResources) == 0 {
h.app.Status.ResourceTracker = nil
if err := h.r.Delete(ctx, rt); err != nil {
return err
}
return nil
}
for _, resource := range h.acrossNamespaceResources {
applied[resource] = true
}
for _, ref := range rt.Status.TrackedResources {
if !applied[ref] {
resource := new(unstructured.Unstructured)
resource.SetAPIVersion(ref.APIVersion)
resource.SetKind(ref.Kind)
resource.SetNamespace(ref.Namespace)
resource.SetName(ref.Name)
err := h.r.Delete(ctx, resource)
if err != nil {
if apierrors.IsNotFound(err) {
continue
}
return err
}
}
}
// update the resourceTracker status to record the applied cross-namespace resources
rt.Status.TrackedResources = h.acrossNamespaceResources
if err := h.r.Status().Update(ctx, rt); err != nil {
return err
}
h.app.Status.ResourceTracker = &runtimev1alpha1.TypedReference{
Name: rt.Name,
Kind: v1beta1.ResourceTrackerGroupKind,
APIVersion: v1beta1.ResourceTrackerKindAPIVersion,
UID: rt.UID}
return nil
}
// handleResourceTracker checks the namespace of all workloads and traits;
// if any resource is cross-namespace, it creates a resourceTracker and stores it in the appHandler field
func (h *appHandler) handleResourceTracker(ctx context.Context, components []*v1alpha2.Component, ac *v1alpha2.ApplicationConfiguration) error {
resourceTracker := new(v1beta1.ResourceTracker)
needTracker := false
for _, c := range components {
u, err := oamutil.RawExtension2Unstructured(&c.Spec.Workload)
if err != nil {
return err
}
if checkResourceDiffWithApp(u, h.app.Namespace) {
needTracker = true
break
}
}
outLoop:
for _, acComponent := range ac.Spec.Components {
for _, t := range acComponent.Traits {
u, err := oamutil.RawExtension2Unstructured(&t.Trait)
if err != nil {
return err
}
if checkResourceDiffWithApp(u, h.app.Namespace) {
needTracker = true
break outLoop
}
}
}
if needTracker {
// check whether the related resourceTracker exists; if not, create it
err := h.r.Get(ctx, ctypes.NamespacedName{Name: h.generateResourceTrackerName()}, resourceTracker)
if err == nil {
h.resourceTracker = resourceTracker
return nil
}
if apierrors.IsNotFound(err) {
resourceTracker = &v1beta1.ResourceTracker{
ObjectMeta: metav1.ObjectMeta{
Name: h.generateResourceTrackerName(),
},
}
if err = h.r.Client.Create(ctx, resourceTracker); err != nil {
return err
}
h.resourceTracker = resourceTracker
return nil
}
return err
}
return nil
}
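handleResourceTracker only needs a tracker when at least one workload or trait targets a namespace other than the application's. A dependency-free sketch of that decision, assuming checkResourceDiffWithApp simply compares namespaces and that an empty namespace defaults to the application's (the `obj` type is illustrative; the real code inspects unstructured.Unstructured):

```go
package main

import "fmt"

// obj stands in for an unstructured workload or trait; illustrative only.
type obj struct{ Namespace string }

// crossNamespace reports whether any object targets a namespace other
// than the application's. An empty namespace is treated as same-as-app,
// matching how the API server would default it.
func crossNamespace(appNS string, objs []obj) bool {
	for _, o := range objs {
		if o.Namespace != "" && o.Namespace != appNS {
			return true
		}
	}
	return false
}

func main() {
	objs := []obj{{""}, {"default"}, {"monitoring"}}
	fmt.Println(crossNamespace("default", objs))
}
```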

View File

@@ -36,12 +36,13 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/retry"
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
types2 "github.com/oam-dev/kubevela/apis/types"
oamtype "github.com/oam-dev/kubevela/apis/types"
core "github.com/oam-dev/kubevela/pkg/controller/core.oam.dev"
"github.com/oam-dev/kubevela/pkg/oam"
"github.com/oam-dev/kubevela/pkg/oam/discoverymapper"
@@ -258,13 +259,15 @@ func (r *OAMApplicationReconciler) ACReconcile(ctx context.Context, ac *v1alpha2
for name, hook := range r.postHooks {
exeResult, err := hook.Exec(ctx, ac, log)
if err != nil {
log.Debug("Failed to execute post-hooks", "hook name", name, "error", err, "requeue-after", result.RequeueAfter)
log.Debug("Failed to execute post-hooks", "hook name", name, "error", err,
"requeue-after", result.RequeueAfter)
r.record.Event(ac, event.Warning(reasonCannotExecutePosthooks, err))
ac.SetConditions(v1alpha1.ReconcileError(errors.Wrap(err, errExecutePosthooks)))
result = exeResult
return
}
r.record.Event(ac, event.Normal(reasonExecutePosthook, "Successfully executed a posthook", "posthook name", name))
r.record.Event(ac, event.Normal(reasonExecutePosthook, "Successfully executed a posthook",
"posthook name", name))
}
}()
@@ -289,7 +292,7 @@ func (r *OAMApplicationReconciler) ACReconcile(ctx context.Context, ac *v1alpha2
log.Info(msg)
r.record.Event(ac, event.Normal(reasonRevision, msg))
ac.SetConditions(v1alpha1.Unavailable())
ac.Status.RollingStatus = types2.InactiveAfterRollingCompleted
ac.Status.RollingStatus = oamtype.InactiveAfterRollingCompleted
// TODO: GC the traits/workloads
return reconcile.Result{}
}
@@ -303,7 +306,8 @@ func (r *OAMApplicationReconciler) ACReconcile(ctx context.Context, ac *v1alpha2
return reconcile.Result{}
}
log.Debug("Successfully rendered components", "workloads", len(workloads))
r.record.Event(ac, event.Normal(reasonRenderComponents, "Successfully rendered components", "workloads", strconv.Itoa(len(workloads))))
r.record.Event(ac, event.Normal(reasonRenderComponents, "Successfully rendered components",
"workloads", strconv.Itoa(len(workloads))))
applyOpts := []apply.ApplyOption{apply.MustBeControllableBy(ac.GetUID()), applyOnceOnly(ac, r.applyOnceOnlyMode, log)}
if err := r.workloads.Apply(ctx, ac.Status.Workloads, workloads, applyOpts...); err != nil {
@@ -312,8 +316,15 @@ func (r *OAMApplicationReconciler) ACReconcile(ctx context.Context, ac *v1alpha2
ac.SetConditions(v1alpha1.ReconcileError(errors.Wrap(err, errApplyComponents)))
return reconcile.Result{}
}
// only change the status after the apply succeeds
// TODO: take into account the templating object may not be applied if there are dependencies
if ac.Status.RollingStatus == oamtype.RollingTemplating {
klog.InfoS("mark the ac rolling status as templated", "appConfig", klog.KRef(ac.Namespace, ac.Name))
ac.Status.RollingStatus = oamtype.RollingTemplated
}
log.Debug("Successfully applied components", "workloads", len(workloads))
r.record.Event(ac, event.Normal(reasonApplyComponents, "Successfully applied components", "workloads", strconv.Itoa(len(workloads))))
r.record.Event(ac, event.Normal(reasonApplyComponents, "Successfully applied components",
"workloads", strconv.Itoa(len(workloads))))
// Kubernetes garbage collection will (by default) reap workloads and traits
// when the appconfig that controls them (in the controller reference sense)
@@ -415,7 +426,8 @@ func (r *OAMApplicationReconciler) updateStatus(ctx context.Context, ac, acPatch
var ul unstructured.UnstructuredList
ul.SetKind(w.Workload.GetKind())
ul.SetAPIVersion(w.Workload.GetAPIVersion())
if err := r.client.List(ctx, &ul, client.MatchingLabels{oam.LabelAppName: ac.Name, oam.LabelAppComponent: w.ComponentName, oam.LabelOAMResourceType: oam.ResourceTypeWorkload}); err != nil {
if err := r.client.List(ctx, &ul, client.MatchingLabels{oam.LabelAppName: ac.Name,
oam.LabelAppComponent: w.ComponentName, oam.LabelOAMResourceType: oam.ResourceTypeWorkload}); err != nil {
continue
}
for _, v := range ul.Items {
@@ -693,7 +705,8 @@ func applyOnceOnly(ac *v1alpha2.ApplicationConfiguration, mode core.ApplyOnceOnl
dLabels[oam.LabelOAMResourceType] != oam.ResourceTypeTrait {
// this ApplyOption only works for workload and trait
// skip if the resource is not workload nor trait, e.g., scope
log.Info("ignore apply only once check, because resourceType is not workload or trait", oam.LabelOAMResourceType, dLabels[oam.LabelOAMResourceType])
log.Info("ignore apply only once check, because resourceType is not workload or trait",
oam.LabelOAMResourceType, dLabels[oam.LabelOAMResourceType])
return nil
}
@@ -714,7 +727,8 @@ func applyOnceOnly(ac *v1alpha2.ApplicationConfiguration, mode core.ApplyOnceOnl
// the workload matches applied resource
createdBefore = true
// for workload, when revision enabled, only when revision changed that can trigger to create a new one
if dLabels[oam.LabelOAMResourceType] == oam.ResourceTypeWorkload && w.AppliedComponentRevision == dLabels[oam.LabelAppComponentRevision] {
if dLabels[oam.LabelOAMResourceType] == oam.ResourceTypeWorkload &&
w.AppliedComponentRevision == dLabels[oam.LabelAppComponentRevision] {
// the revision is not changed, so return an error to abort creating it
return &GenerationUnchanged{}
}
@@ -746,7 +760,8 @@ func applyOnceOnly(ac *v1alpha2.ApplicationConfiguration, mode core.ApplyOnceOnl
message = "apply only once with mode: force, but resource updated, will create new"
}
log.Info(message, "appConfig", ac.Name, "gvk", desired.GetObjectKind().GroupVersionKind(), "name", d.GetName(),
"resourceType", dLabels[oam.LabelOAMResourceType], "appliedCompRevision", appliedRevision, "labeledCompRevision", dLabels[oam.LabelAppComponentRevision],
"resourceType", dLabels[oam.LabelOAMResourceType], "appliedCompRevision", appliedRevision,
"labeledCompRevision", dLabels[oam.LabelAppComponentRevision],
"appliedGeneration", appliedGeneration, "labeledGeneration", dAnnots[oam.AnnotationAppGeneration])
// no recorded workloads nor traits match the applied resource
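The apply-once-only branch above boils down to a predicate: for a revision-enabled workload, skip re-creation when the applied component revision has not changed. A standalone sketch of that check (field values and the resource-type label are illustrative, not the exact constants):

```go
package main

import "fmt"

// skipApply reports whether an apply-once-only controller should skip
// re-creating a resource: it is a workload and its recorded component
// revision matches the revision label on the desired object.
func skipApply(resourceType, appliedRev, labeledRev string) bool {
	return resourceType == "WORKLOAD" && appliedRev == labeledRev
}

func main() {
	fmt.Println(skipApply("WORKLOAD", "comp-v1", "comp-v1")) // unchanged revision: skip
	fmt.Println(skipApply("WORKLOAD", "comp-v1", "comp-v2")) // new revision: re-create
}
```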

View File

@@ -273,7 +273,7 @@ spec:
return 0
}
return traitObj.GetGeneration()
}, 20*time.Second, time.Second).Should(Equal(int64(2)))
}, 60*time.Second, time.Second).Should(Equal(int64(2)))
By("Check labels are removed")
_, found, _ := unstructured.NestedString(traitObj.UnstructuredContent(), "metadata", "labels", "test.label")

View File

@@ -135,11 +135,11 @@ func (r *components) Render(ctx context.Context, ac *v1alpha2.ApplicationConfigu
return nil, nil, err
}
workloads = append(workloads, w)
// TODO: handle rolling status better when there are multiple components
if isComponentRolling && needRolloutTemplate {
ac.Status.RollingStatus = oamtype.RollingTemplating
}
}
workloadsAllClear := true
ds := &v1alpha2.DependencyStatus{}
res := make([]Workload, 0, len(ac.Spec.Components))
for i, acc := range ac.Spec.Components {
@@ -148,16 +148,8 @@ func (r *components) Render(ctx context.Context, ac *v1alpha2.ApplicationConfigu
return nil, nil, err
}
ds.Unsatisfied = append(ds.Unsatisfied, unsatisfied...)
if workloads[i].HasDep {
workloadsAllClear = false
}
res = append(res, *workloads[i])
}
// set the ac rollingStatus to be RollingTemplated if all workloads are going to be applied
if workloadsAllClear && ac.Status.RollingStatus == oamtype.RollingTemplating {
klog.InfoS("mark the ac rolling status as templated", "appConfig", klog.KRef(ac.Namespace, ac.Name))
ac.Status.RollingStatus = oamtype.RollingTemplated
}
return res, ds, nil
}

View File

@@ -916,9 +916,9 @@ func TestRender(t *testing.T) {
if got[0].SkipApply {
t.Errorf("\n%s\nr.Render(...): template workload should not be skipped\n", tc.reason)
}
if tc.args.ac.Status.RollingStatus != oamtype.RollingTemplated {
if tc.args.ac.Status.RollingStatus != oamtype.RollingTemplating {
t.Errorf("\n%s\nr.Render(...): ac status should be templated but got %s\n", tc.reason,
ac.Status.RollingStatus)
tc.args.ac.Status.RollingStatus)
}
}

View File

@@ -136,7 +136,7 @@ func (r *Reconciler) Reconcile(req ctrl.Request) (res reconcile.Result, retErr e
appRollout.Status.StateTransition(v1alpha1.RollingModifiedEvent)
}
//TODO: handle deleting/abandoning state differently
//TODO: handle deleting/abandoning state differently in a big fork
// Get the source application first
var sourceApRev *oamv1alpha2.ApplicationRevision

View File

@@ -31,7 +31,9 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
"github.com/oam-dev/kubevela/apis/types"
controller "github.com/oam-dev/kubevela/pkg/controller/core.oam.dev"
"github.com/oam-dev/kubevela/pkg/controller/utils"
"github.com/oam-dev/kubevela/pkg/dsl/definition"
@@ -74,6 +76,17 @@ func (r *Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
cd: &componentDefinition,
}
if handler.cd.Spec.Workload.Type == "" {
err := utils.RefreshPackageDiscover(r.dm, r.pd, handler.cd.Spec.Workload.Definition,
common.DefinitionReference{}, types.TypeComponentDefinition)
if err != nil {
klog.ErrorS(err, "cannot discover the open api of the CRD")
r.record.Event(&componentDefinition, event.Warning("cannot discover the open api of the CRD", err))
return ctrl.Result{}, util.PatchCondition(ctx, r, &componentDefinition,
cpv1alpha1.ReconcileError(fmt.Errorf(util.ErrRefreshPackageDiscover, err)))
}
}
workloadType, err := handler.CreateWorkloadDefinition(ctx)
if err != nil {
klog.ErrorS(err, "cannot create converted WorkloadDefinition")
@@ -96,7 +109,7 @@ func (r *Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
def.Kube = componentDefinition.Spec.Schematic.KUBE
default:
}
err = def.StoreOpenAPISchema(ctx, r, req.Namespace, req.Name)
err = def.StoreOpenAPISchema(ctx, r.Client, r.pd, req.Namespace, req.Name)
if err != nil {
klog.ErrorS(err, "cannot store capability in ConfigMap")
r.record.Event(&(def.ComponentDefinition), event.Warning("cannot store capability in ConfigMap", err))

View File

@@ -24,14 +24,18 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
corev1 "k8s.io/api/core/v1"
crdv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/yaml"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/oam/util"
)
@@ -164,9 +168,9 @@ spec:
// API server will convert blank namespace to `default`
def.Namespace = namespace
Expect(k8sClient.Create(ctx, &def)).Should(Succeed())
reconcileRetry(&r, req)
By("Check whether ConfigMap is created")
reconcileRetry(&r, req)
var cm corev1.ConfigMap
name := fmt.Sprintf("%s%s", types.CapabilityConfigMapNamePrefix, componentDefinitionName)
Eventually(func() bool {
@@ -255,9 +259,9 @@ spec:
var def v1alpha2.ComponentDefinition
Expect(yaml.Unmarshal([]byte(validComponentDefinition), &def)).Should(BeNil())
Expect(k8sClient.Create(ctx, &def)).Should(Succeed())
reconcileRetry(&r, req)
By("Check whether ConfigMap is created")
reconcileRetry(&r, req)
var cm corev1.ConfigMap
name := fmt.Sprintf("%s%s", types.CapabilityConfigMapNamePrefix, componentDefinitionName)
Eventually(func() bool {
@@ -464,9 +468,9 @@ spec:
Expect(yaml.Unmarshal([]byte(validComponentDefinition), &def)).Should(BeNil())
def.Namespace = namespace
Expect(k8sClient.Create(ctx, &def)).Should(Succeed())
By("Check whether WorkloadDefinition is created")
reconcileRetry(&r, req)
By("Check whether WorkloadDefinition is created")
var wd v1alpha2.WorkloadDefinition
var wdName = componentDefinitionName
Eventually(func() bool {
@@ -507,9 +511,9 @@ spec:
}
By("Create ComponentDefinition")
Expect(k8sClient.Create(ctx, &cd)).Should(Succeed())
reconcileRetry(&r, req)
By("Check whether WorkloadDefinition is created")
reconcileRetry(&r, req)
var wd v1alpha2.WorkloadDefinition
var wdName = componentDefinitionName
Eventually(func() bool {
@@ -610,7 +614,7 @@ spec:
Eventually(func() bool {
err := k8sClient.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, &cm)
return err == nil
}, 10*time.Second, time.Second).Should(BeTrue())
}, 15*time.Second, time.Second).Should(BeTrue())
Expect(cm.Data[types.OpenapiV3JSONSchema]).Should(Not(Equal("")))
By("Check whether ConfigMapRef refers to the right ConfigMap")
@@ -621,4 +625,101 @@ spec:
})
})
Context("When the CUE Template in ComponentDefinition import new added CRD", func() {
var componentDefinationName = "test-refresh"
var namespace = "default"
It("Applying ComponentDefinition import new crd in CUE Template, should create a ConfigMap", func() {
By("create new crd")
newCrd := crdv1.CustomResourceDefinition{
ObjectMeta: metav1.ObjectMeta{
Name: "foo.example.com",
},
Spec: crdv1.CustomResourceDefinitionSpec{
Group: "example.com",
Names: crdv1.CustomResourceDefinitionNames{
Kind: "Foo",
ListKind: "FooList",
Plural: "foo",
Singular: "foo",
},
Versions: []crdv1.CustomResourceDefinitionVersion{{
Name: "v1",
Served: true,
Storage: true,
Subresources: &crdv1.CustomResourceSubresources{Status: &crdv1.CustomResourceSubresourceStatus{}},
Schema: &crdv1.CustomResourceValidation{
OpenAPIV3Schema: &crdv1.JSONSchemaProps{
Type: "object",
Properties: map[string]crdv1.JSONSchemaProps{
"spec": {
Type: "object",
XPreserveUnknownFields: pointer.BoolPtr(true),
Properties: map[string]crdv1.JSONSchemaProps{
"key": {Type: "string"},
}},
"status": {
Type: "object",
XPreserveUnknownFields: pointer.BoolPtr(true),
Properties: map[string]crdv1.JSONSchemaProps{
"key": {Type: "string"},
"app-hash": {Type: "string"},
}}}}}},
},
Scope: crdv1.NamespaceScoped,
},
}
Expect(k8sClient.Create(context.Background(), &newCrd)).Should(SatisfyAny(BeNil(), &util.AlreadyExistMatcher{}))
componentDef := `
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: test-refresh
namespace: default
spec:
workload:
definition:
apiVersion: example.com/v1
kind: Foo
schematic:
cue:
template: |
import (
ev1 "example.com/v1"
)
output: ev1.#Foo
output: {
spec: key: parameter.key1
status: key: parameter.key2
}
parameter: {
key1: string
key2: string
}
`
var cd v1beta1.ComponentDefinition
Expect(yaml.Unmarshal([]byte(componentDef), &cd)).Should(BeNil())
Expect(k8sClient.Create(ctx, &cd)).Should(Succeed())
req := reconcile.Request{NamespacedName: client.ObjectKey{Name: componentDefinationName, Namespace: namespace}}
reconcileRetry(&r, req)
By("check workload")
var wd v1beta1.WorkloadDefinition
var wdName = componentDefinationName
Eventually(func() bool {
err := k8sClient.Get(ctx, client.ObjectKey{Namespace: namespace, Name: wdName}, &wd)
return err == nil
}, 20*time.Second, time.Second).Should(BeTrue())
By("Check whether ConfigMap is created")
var cm corev1.ConfigMap
name := fmt.Sprintf("%s%s", types.CapabilityConfigMapNamePrefix, componentDefinationName)
Eventually(func() bool {
err := k8sClient.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, &cm)
return err == nil
}, 15*time.Second, time.Second).Should(BeTrue())
Expect(cm.Data[types.OpenapiV3JSONSchema]).Should(Not(Equal("")))
})
})
})

View File

@@ -25,6 +25,8 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
crdv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
ctrl "sigs.k8s.io/controller-runtime"
@@ -33,6 +35,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
oamCore "github.com/oam-dev/kubevela/apis/core.oam.dev"
"github.com/oam-dev/kubevela/pkg/dsl/definition"
"github.com/oam-dev/kubevela/pkg/oam/discoverymapper"
)
@@ -63,6 +66,7 @@ var _ = BeforeSuite(func(done Done) {
err = oamCore.AddToScheme(scheme.Scheme)
Expect(err).NotTo(HaveOccurred())
Expect(crdv1.AddToScheme(scheme.Scheme)).Should(BeNil())
By("Create the k8s client")
k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
@@ -80,11 +84,14 @@ var _ = BeforeSuite(func(done Done) {
Expect(err).ToNot(HaveOccurred())
_, err = dm.Refresh()
Expect(err).ToNot(HaveOccurred())
pd, err := definition.NewPackageDiscover(cfg)
Expect(err).ToNot(HaveOccurred())
r = Reconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
dm: dm,
pd: pd,
}
Expect(r.SetupWithManager(mgr)).ToNot(HaveOccurred())
controllerDone = make(chan struct{}, 1)
@@ -109,5 +116,5 @@ func reconcileRetry(r reconcile.Reconciler, req reconcile.Request) {
Eventually(func() error {
_, err := r.Reconcile(req)
return err
}, 3*time.Second, time.Second).Should(BeNil())
}, 20*time.Second, time.Second).Should(BeNil())
}

View File

@@ -31,7 +31,9 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/oam-dev/kubevela/apis/core.oam.dev/common"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
"github.com/oam-dev/kubevela/apis/types"
controller "github.com/oam-dev/kubevela/pkg/controller/core.oam.dev"
"github.com/oam-dev/kubevela/pkg/controller/utils"
"github.com/oam-dev/kubevela/pkg/dsl/definition"
@@ -68,10 +70,21 @@ func (r *Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, nil
}
if traitdefinition.Spec.Reference.Name != "" {
err := utils.RefreshPackageDiscover(r.dm, r.pd, common.WorkloadGVK{},
traitdefinition.Spec.Reference, types.TypeTrait)
if err != nil {
klog.ErrorS(err, "cannot refresh packageDiscover")
r.record.Event(&traitdefinition, event.Warning("cannot refresh packageDiscover", err))
return ctrl.Result{}, util.PatchCondition(ctx, r, &traitdefinition,
cpv1alpha1.ReconcileError(fmt.Errorf(util.ErrRefreshPackageDiscover, err)))
}
}
var def utils.CapabilityTraitDefinition
def.Name = req.NamespacedName.Name
err := def.StoreOpenAPISchema(ctx, r, req.Namespace, req.Name)
err := def.StoreOpenAPISchema(ctx, r.Client, r.pd, req.Namespace, req.Name)
if err != nil {
klog.ErrorS(err, "cannot store capability in ConfigMap")
r.record.Event(&(def.TraitDefinition), event.Warning("cannot store capability in ConfigMap", err))

View File

@@ -25,13 +25,17 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
corev1 "k8s.io/api/core/v1"
crdv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/yaml"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1alpha2"
"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/oam/util"
)
@@ -133,15 +137,15 @@ spec:
Expect(yaml.Unmarshal([]byte(validTraitDefinition), &def)).Should(BeNil())
def.Namespace = namespace
Expect(k8sClient.Create(ctx, &def)).Should(Succeed())
reconcileRetry(&r, req)
By("Check whether ConfigMap is created")
reconcileRetry(&r, req)
var cm corev1.ConfigMap
name := fmt.Sprintf("%s%s", types.CapabilityConfigMapNamePrefix, traitDefinitionName)
Eventually(func() bool {
err := k8sClient.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, &cm)
return err == nil
}, 10*time.Second, time.Second).Should(BeTrue())
}, 15*time.Second, time.Second).Should(BeTrue())
Expect(cm.Data[types.OpenapiV3JSONSchema]).Should(Not(Equal("")))
By("Check whether ConfigMapRef refers to the right ConfigMap")
@@ -204,9 +208,9 @@ spec:
var def v1alpha2.TraitDefinition
Expect(yaml.Unmarshal([]byte(validTraitDefinition), &def)).Should(BeNil())
Expect(k8sClient.Create(ctx, &def)).Should(Succeed())
By("Check whether ConfigMap is created")
reconcileRetry(&r, req)
By("Check whether ConfigMap is created")
var cm corev1.ConfigMap
name := fmt.Sprintf("%s%s", types.CapabilityConfigMapNamePrefix, traitDefinitionName)
Eventually(func() bool {
@@ -272,4 +276,98 @@ spec:
Expect(k8sClient.Get(ctx, client.ObjectKey{Name: invalidTraitDefinitionName, Namespace: namespace}, gotTraitDefinition)).Should(BeNil())
})
})
Context("When the CUE Template in TraitDefinition import new added CRD", func() {
var traitDefinitionName = "test-refresh"
var namespace = "default"
It("Applying TraitDefinition", func() {
By("Create new CRD")
newCrd := crdv1.CustomResourceDefinition{
ObjectMeta: metav1.ObjectMeta{
Name: "foo.example.com",
},
Spec: crdv1.CustomResourceDefinitionSpec{
Group: "example.com",
Names: crdv1.CustomResourceDefinitionNames{
Kind: "Foo",
ListKind: "FooList",
Plural: "foo",
Singular: "foo",
},
Versions: []crdv1.CustomResourceDefinitionVersion{{
Name: "v1",
Served: true,
Storage: true,
Subresources: &crdv1.CustomResourceSubresources{Status: &crdv1.CustomResourceSubresourceStatus{}},
Schema: &crdv1.CustomResourceValidation{
OpenAPIV3Schema: &crdv1.JSONSchemaProps{
Type: "object",
Properties: map[string]crdv1.JSONSchemaProps{
"spec": {
Type: "object",
XPreserveUnknownFields: pointer.BoolPtr(true),
Properties: map[string]crdv1.JSONSchemaProps{
"key": {Type: "string"},
}},
"status": {
Type: "object",
XPreserveUnknownFields: pointer.BoolPtr(true),
Properties: map[string]crdv1.JSONSchemaProps{
"key": {Type: "string"},
"app-hash": {Type: "string"},
}}}}}},
},
Scope: crdv1.NamespaceScoped,
},
}
Expect(k8sClient.Create(context.Background(), &newCrd)).Should(SatisfyAny(BeNil(), &util.AlreadyExistMatcher{}))
traitDef := `
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Configures replicas for your service."
name: test-refresh
namespace: default
spec:
appliesToWorkloads:
- webservice
- worker
definitionRef:
name: foo.example.com
schematic:
cue:
template: |
import (
ev1 "example.com/v1"
)
output: ev1.#Foo
output: {
spec: key: parameter.key1
status: key: parameter.key2
}
parameter: {
key1: string
key2: string
}
`
var td v1beta1.TraitDefinition
Expect(yaml.Unmarshal([]byte(traitDef), &td)).Should(BeNil())
Expect(k8sClient.Create(ctx, &td)).Should(Succeed())
req := reconcile.Request{NamespacedName: client.ObjectKey{Name: traitDefinitionName, Namespace: namespace}}
By("Check whether ConfigMap is created")
var cm corev1.ConfigMap
name := fmt.Sprintf("%s%s", types.CapabilityConfigMapNamePrefix, traitDefinitionName)
Eventually(func() bool {
reconcileRetry(&r, req)
err := k8sClient.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, &cm)
return err == nil
}, 20*time.Second, time.Second).Should(BeTrue())
Expect(cm.Data[types.OpenapiV3JSONSchema]).Should(Not(Equal("")))
})
})
})

View File

@@ -24,6 +24,8 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
crdv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
ctrl "sigs.k8s.io/controller-runtime"
@@ -32,6 +34,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
oamCore "github.com/oam-dev/kubevela/apis/core.oam.dev"
"github.com/oam-dev/kubevela/pkg/dsl/definition"
"github.com/oam-dev/kubevela/pkg/oam/discoverymapper"
)
@@ -60,8 +63,8 @@ var _ = BeforeSuite(func(done Done) {
Expect(err).ToNot(HaveOccurred())
Expect(cfg).ToNot(BeNil())
err = oamCore.AddToScheme(scheme.Scheme)
Expect(err).NotTo(HaveOccurred())
Expect(oamCore.AddToScheme(scheme.Scheme)).Should(BeNil())
Expect(crdv1.AddToScheme(scheme.Scheme)).Should(BeNil())
By("Create the k8s client")
k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
@@ -75,13 +78,19 @@ var _ = BeforeSuite(func(done Done) {
Port: 48081,
})
Expect(err).ToNot(HaveOccurred())
pd, err := definition.NewPackageDiscover(cfg)
Expect(err).ToNot(HaveOccurred())
dm, err := discoverymapper.New(mgr.GetConfig())
Expect(err).ToNot(HaveOccurred())
_, err = dm.Refresh()
Expect(err).ToNot(HaveOccurred())
r = Reconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
dm: dm,
pd: pd,
}
Expect(r.SetupWithManager(mgr)).ToNot(HaveOccurred())
controllerDone = make(chan struct{}, 1)

View File

@@ -24,6 +24,7 @@ import (
"strings"
"cuelang.org/go/cue"
"cuelang.org/go/cue/build"
"github.com/getkin/kin-openapi/openapi3"
"github.com/pkg/errors"
v1 "k8s.io/api/core/v1"
@@ -37,6 +38,7 @@ import (
"github.com/oam-dev/kubevela/apis/types"
"github.com/oam-dev/kubevela/pkg/appfile/helm"
mycue "github.com/oam-dev/kubevela/pkg/cue"
"github.com/oam-dev/kubevela/pkg/dsl/definition"
"github.com/oam-dev/kubevela/pkg/oam/util"
"github.com/oam-dev/kubevela/pkg/utils/common"
)
@@ -104,12 +106,12 @@ func (def *CapabilityComponentDefinition) GetCapabilityObject(ctx context.Contex
}
// GetOpenAPISchema gets OpenAPI v3 schema by WorkloadDefinition name
func (def *CapabilityComponentDefinition) GetOpenAPISchema(ctx context.Context, k8sClient client.Client, namespace, name string) ([]byte, error) {
func (def *CapabilityComponentDefinition) GetOpenAPISchema(ctx context.Context, k8sClient client.Client, pd *definition.PackageDiscover, namespace, name string) ([]byte, error) {
capability, err := def.GetCapabilityObject(ctx, k8sClient, namespace, name)
if err != nil {
return nil, err
}
return getOpenAPISchema(*capability)
return getOpenAPISchema(*capability, pd)
}
// GetKubeSchematicOpenAPISchema gets OpenAPI v3 schema based on kube schematic parameters
@@ -150,7 +152,7 @@ func (def *CapabilityComponentDefinition) GetKubeSchematicOpenAPISchema(params [
}
// StoreOpenAPISchema stores OpenAPI v3 schema in ConfigMap from WorkloadDefinition
func (def *CapabilityComponentDefinition) StoreOpenAPISchema(ctx context.Context, k8sClient client.Client, namespace, name string) error {
func (def *CapabilityComponentDefinition) StoreOpenAPISchema(ctx context.Context, k8sClient client.Client, pd *definition.PackageDiscover, namespace, name string) error {
var jsonSchema []byte
var err error
switch def.WorkloadType {
@@ -159,7 +161,7 @@ func (def *CapabilityComponentDefinition) StoreOpenAPISchema(ctx context.Context
case util.KubeDef:
jsonSchema, err = def.GetKubeSchematicOpenAPISchema(def.Kube.Parameters)
default:
jsonSchema, err = def.GetOpenAPISchema(ctx, k8sClient, namespace, name)
jsonSchema, err = def.GetOpenAPISchema(ctx, k8sClient, pd, namespace, name)
}
if err != nil {
return fmt.Errorf("failed to generate OpenAPI v3 JSON schema for capability %s: %w", def.Name, err)
@@ -210,17 +212,17 @@ func (def *CapabilityTraitDefinition) GetCapabilityObject(ctx context.Context, k
}
// GetOpenAPISchema gets OpenAPI v3 schema by TraitDefinition name
func (def *CapabilityTraitDefinition) GetOpenAPISchema(ctx context.Context, k8sClient client.Client, namespace, name string) ([]byte, error) {
func (def *CapabilityTraitDefinition) GetOpenAPISchema(ctx context.Context, k8sClient client.Client, pd *definition.PackageDiscover, namespace, name string) ([]byte, error) {
capability, err := def.GetCapabilityObject(ctx, k8sClient, namespace, name)
if err != nil {
return nil, err
}
return getOpenAPISchema(*capability)
return getOpenAPISchema(*capability, pd)
}
// StoreOpenAPISchema stores OpenAPI v3 schema from TraitDefinition in ConfigMap
func (def *CapabilityTraitDefinition) StoreOpenAPISchema(ctx context.Context, k8sClient client.Client, namespace, name string) error {
jsonSchema, err := def.GetOpenAPISchema(ctx, k8sClient, namespace, name)
func (def *CapabilityTraitDefinition) StoreOpenAPISchema(ctx context.Context, k8sClient client.Client, pd *definition.PackageDiscover, namespace, name string) error {
jsonSchema, err := def.GetOpenAPISchema(ctx, k8sClient, pd, namespace, name)
if err != nil {
return fmt.Errorf(util.ErrGenerateOpenAPIV2JSONSchemaForCapability, def.Name, err)
}
@@ -287,8 +289,8 @@ func (def *CapabilityBaseDefinition) CreateOrUpdateConfigMap(ctx context.Context
}
// getDefinition is the main function for GetDefinition API
func getOpenAPISchema(capability types.Capability) ([]byte, error) {
openAPISchema, err := generateOpenAPISchemaFromCapabilityParameter(capability)
func getOpenAPISchema(capability types.Capability, pd *definition.PackageDiscover) ([]byte, error) {
openAPISchema, err := generateOpenAPISchemaFromCapabilityParameter(capability, pd)
if err != nil {
return nil, err
}
@@ -306,26 +308,43 @@ func getOpenAPISchema(capability types.Capability) ([]byte, error) {
 }

 // generateOpenAPISchemaFromCapabilityParameter returns the parameter of a definition in cue.Value format
-func generateOpenAPISchemaFromCapabilityParameter(capability types.Capability) ([]byte, error) {
-	return GenerateOpenAPISchemaFromDefinition(capability.Name, capability.CueTemplate)
+func generateOpenAPISchemaFromCapabilityParameter(capability types.Capability, pd *definition.PackageDiscover) ([]byte, error) {
+	template, err := prepareParameterCue(capability.Name, capability.CueTemplate)
+	if err != nil {
+		return nil, err
+	}
+	var cueInst *cue.Instance
+	template += mycue.BaseTemplate
+	if pd == nil {
+		var r cue.Runtime
+		cueInst, err = r.Compile("-", template)
+		if err != nil {
+			return nil, err
+		}
+	} else {
+		bi := build.NewContext().NewInstance("", nil)
+		err = bi.AddFile("-", template)
+		if err != nil {
+			return nil, err
+		}
+		pd.ImportBuiltinPackagesFor(bi)
+		var r cue.Runtime
+		cueInst, err = r.Build(bi)
+		if err != nil {
+			return nil, err
+		}
+	}
+	return common.GenOpenAPI(cueInst)
 }

 // GenerateOpenAPISchemaFromDefinition returns the parameter of a definition
 func GenerateOpenAPISchemaFromDefinition(definitionName, cueTemplate string) ([]byte, error) {
-	template, err := prepareParameterCue(definitionName, cueTemplate)
-	if err != nil {
-		return nil, err
+	capability := types.Capability{
+		Name:        definitionName,
+		CueTemplate: cueTemplate,
 	}
-	// append context section in CUE string
-	template += mycue.BaseTemplate
-	var r cue.Runtime
-	cueInst, err := r.Compile("-", template)
-	if err != nil {
-		return nil, err
-	}
-	return common.GenOpenAPI(cueInst)
+	return generateOpenAPISchemaFromCapabilityParameter(capability, nil)
 }

 // prepareParameterCue cuts `parameter` section from definition .cue file
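The `if pd == nil` branch above keeps the old behavior (plain `r.Compile`) so that existing callers such as `GenerateOpenAPISchemaFromDefinition` still work, while a non-nil `PackageDiscover` makes the built-in CUE packages visible before building. A minimal stdlib-only sketch of that dispatch, with hypothetical names (`packageDiscover`, `compileTemplate`) standing in for the real `definition.PackageDiscover` and `cue.Runtime`/`build.Instance` machinery:

```go
package main

import (
	"fmt"
	"strings"
)

// packageDiscover is a hypothetical stand-in for definition.PackageDiscover;
// in the real code it injects built-in CUE packages into a build instance.
type packageDiscover struct {
	builtins []string
}

// compileTemplate mirrors the control flow of the patched
// generateOpenAPISchemaFromCapabilityParameter: a nil discover falls back to
// compiling the raw template, while a non-nil one makes the built-in packages
// visible first. String concatenation stands in for real CUE compilation.
func compileTemplate(template string, pd *packageDiscover) string {
	if pd == nil {
		// plain r.Compile path: the template is used as-is
		return template
	}
	// ImportBuiltinPackagesFor path: built-ins are made visible to the template
	return strings.Join(pd.builtins, "\n") + "\n" + template
}

func main() {
	fmt.Println(compileTemplate("parameter: {}", nil))
	fmt.Println(compileTemplate("parameter: {}",
		&packageDiscover{builtins: []string{`import "kube"`}}))
}
```

Passing `nil` is exactly what the backward-compatible wrapper does, so old call sites need no `PackageDiscover` at all.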
@@ -113,7 +113,7 @@ spec:
 			Expect(capability).Should(Not(BeNil()))

 			By("Test GetOpenAPISchema")
-			schema, err := def.GetOpenAPISchema(ctx, k8sClient, namespace, componentDefinitionName)
+			schema, err := def.GetOpenAPISchema(ctx, k8sClient, pd, namespace, componentDefinitionName)
 			Expect(err).Should(BeNil())
 			Expect(schema).Should(Not(BeNil()))
@@ -179,7 +179,7 @@ spec:
 			By("Test GetOpenAPISchema")
 			var expectedSchema = "{\"properties\":{\"replicas\":{\"default\":1,\"description\":\"Replicas of the workload\",\"title\":\"replicas\",\"type\":\"integer\"}},\"required\":[\"replicas\"],\"type\":\"object\"}"
-			schema, err := def.GetOpenAPISchema(ctx, k8sClient, namespace, traitDefinitionName)
+			schema, err := def.GetOpenAPISchema(ctx, k8sClient, pd, namespace, traitDefinitionName)
 			Expect(err).Should(BeNil())
 			Expect(string(schema)).Should(Equal(expectedSchema))
 		})
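The escaped `expectedSchema` string literal above is hard to read inline; pretty-printed, the same OpenAPI v3 schema fragment is:

```json
{
  "properties": {
    "replicas": {
      "default": 1,
      "description": "Replicas of the workload",
      "title": "replicas",
      "type": "integer"
    }
  },
  "required": ["replicas"],
  "type": "object"
}
```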
@@ -29,11 +29,13 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/envtest"

 	oamCore "github.com/oam-dev/kubevela/apis/core.oam.dev"
+	"github.com/oam-dev/kubevela/pkg/dsl/definition"
 )

 var cfg *rest.Config
 var k8sClient client.Client
 var testEnv *envtest.Environment
+var pd *definition.PackageDiscover

 func TestCapability(t *testing.T) {
 	RegisterFailHandler(Fail)
@@ -62,6 +64,9 @@ var _ = BeforeSuite(func(done Done) {
 	Expect(err).ToNot(HaveOccurred())
 	Expect(k8sClient).ToNot(BeNil())

+	pd, err = definition.NewPackageDiscover(cfg)
+	Expect(err).ToNot(HaveOccurred())
+
 	close(done)
 }, 60)
@@ -75,7 +75,7 @@ func TestGetOpenAPISchema(t *testing.T) {
 				},
 			}
 			capability, _ := util.ConvertTemplateJSON2Object(tc.name, nil, schematic)
-			schema, err := getOpenAPISchema(capability)
+			schema, err := getOpenAPISchema(capability, pd)
 			if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
 				t.Errorf("\n%s\ngetOpenAPISchema(...): -want error, +got error:\n%s", tc.reason, diff)
 			}
@@ -138,7 +138,7 @@ func TestGenerateOpenAPISchemaFromCapabilityParameter(t *testing.T) {
 	}
 	for name, tc := range cases {
 		t.Run(name, func(t *testing.T) {
-			got, err := generateOpenAPISchemaFromCapabilityParameter(tc.capability)
+			got, err := generateOpenAPISchemaFromCapabilityParameter(tc.capability, pd)
 			if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
 				t.Errorf("\n%s\ngetDefinition(...): -want error, +got error:\n%s", tc.reason, diff)
 			}
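The hunks above follow Go's table-driven test idiom: a map of named cases, each pairing an input with a wanted error, exercised by one loop. A stdlib-only sketch of that pattern, where `generate` is a hypothetical stand-in for the real schema generator:

```go
package main

import (
	"errors"
	"fmt"
)

// generate is a hypothetical stand-in for the schema generator under test;
// it errors on an empty template and otherwise echoes its input.
func generate(template string) ([]byte, error) {
	if template == "" {
		return nil, errors.New("empty CUE template")
	}
	return []byte(template), nil
}

// runCases exercises every named case and records whether the observed
// error state matched the expectation, mirroring the cmp.Diff check above.
func runCases() map[string]bool {
	cases := map[string]struct {
		in      string
		wantErr bool
	}{
		"ValidTemplate": {in: "parameter: {}", wantErr: false},
		"EmptyTemplate": {in: "", wantErr: true},
	}
	results := make(map[string]bool)
	for name, tc := range cases {
		_, err := generate(tc.in)
		// a case passes when the error state matches the expectation
		results[name] = (err != nil) == tc.wantErr
	}
	return results
}

func main() {
	for name, ok := range runCases() {
		fmt.Println(name, ok)
	}
}
```

Naming each case makes `go test -run` filtering and failure messages precise, which is why the real suite uses `t.Run(name, ...)` inside the loop.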