Rewrite the doc (#1322)

Lei Zhang (Harry)
2021-03-27 00:02:23 -07:00
committed by GitHub
parent 68a0e40db4
commit cbc866ccae
24 changed files with 1759 additions and 1816 deletions


@@ -40,7 +40,7 @@ KubeVela allows platform teams to create developer-centric abstractions with IaC
## Features
- **Robust, repeatable and extensible approach** to create and maintain abstractions - design your abstractions with [CUE](https://cuelang.org/) or [Helm](https://helm.sh), ship them to your end users by `kubectl apply -f`, upgrade your abstractions at runtime, no restart, no recompiling, and let Kubernetes controller guarantee determinism of the abstractions, no configuration drift.
- **Robust, repeatable and extensible approach to create and maintain abstractions** - design your abstractions with [CUE](https://cuelang.org/) or [Helm](https://helm.sh), ship them to end users by `kubectl apply -f` with automatically generated GUI forms, upgrade your abstractions at runtime, and let the Kubernetes controller guarantee determinism of the abstractions with no configuration drift.
- **Generic progressive rollout framework** - built-in rollout framework and strategies to upgrade your microservice regardless of its workload type (e.g. stateless, stateful, or even custom operators), with seamless integration with observability systems.
- **Multi-environment app delivery model (WIP)** - built-in model to deliver or roll out your apps across multiple environments and/or clusters, with seamless integration with Service Mesh for traffic management.
- **Simple and Kubernetes native** - KubeVela is just a simple custom controller, all its app delivery abstractions and features are defined as [Kubernetes Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) so they naturally work with any CI/CD or GitOps tools.


@@ -11,26 +11,36 @@
- [Overview](/en/platform-engineers/overview.md)
- [Application CRD](/en/application.md)
- [Definition CRD](/en/platform-engineers/definition-and-templates.md)
- [Auto-generated Schema](/en/platform-engineers/openapi-v3-json-schema.md)
- [Defining Components](/en/platform-engineers/component.md)
- [Defining Traits](/en/platform-engineers/trait.md)
<!-- - [Defining Cloud Service](/en/platform-engineers/cloud-services.md) -->
- Using CUE
- Visualization
- [Generate Forms from Definitions](/en/platform-engineers/openapi-v3-json-schema.md)
- Defining Components
- CUE
- [How-to](/en/cue/component.md)
- [Learning CUE](/en/cue/basic.md)
- [Define Components in CUE](/en/cue/component.md)
- [Define Traits](/en/cue/trait.md)
- [Advanced Features](/en/cue/status.md)
- Using Helm
- [Define Components in Chart](/en/helm/component.md)
- Helm
- [How-to](/en/helm/component.md)
- [Attach Traits](/en/helm/trait.md)
- [Known Limitations](/en/helm/known-issues.md)
- Using Raw Kube
- [Define Components With Raw K8s](/en/kube/component.md)
- Raw Template
- [How-to](/en/kube/component.md)
- [Attach Traits](/en/kube/trait.md)
- Defining Traits
- [How-to](/en/cue/trait.md)
- [Patch Traits](/en/cue/patch-trait.md)
- [Status Write Back](/en/cue/status.md)
- [Advanced Features](/en/cue/advanced.md)
- Hands-on Lab
- [Debug, Test and Dry-run](/en/platform-engineers/debug-test-cue.md)
- [Defining KEDA as Autoscaling Trait](/en/platform-engineers/keda.md)
<!-- - [Defining Cloud Database as Component](/en/platform-engineers/cloud-services.md) -->
- Developer Experience Guide
- Appfile
- [Overview](/en/quick-start-appfile.md)


@@ -1,4 +1,4 @@
# Using `Application` to Describe Your App
# Introduction to the Application CRD
This documentation will walk through how to use the `Application` object to define your apps with corresponding operational behaviors in a declarative approach.
@@ -38,7 +38,7 @@ spec:
image: "fluentd"
```
The `type: worker` means the specification of this workload (claimed in following `properties` section) will be enforced by a `ComponentDefinition` object named `worker` as below:
The `type: worker` means the specification of this component (claimed in the following `properties` section) will be enforced by a `ComponentDefinition` object named `worker` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -86,7 +86,7 @@ spec:
```
Hence, the `settings` section of `backend` only supports two parameters: `image` and `cmd`, this is enforced by the `parameter` list of the `.spec.template` field of the definition.
Hence, the `properties` section of `backend` only supports two parameters: `image` and `cmd`; this is enforced by the `parameter` list of the `.spec.template` field of the definition.
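For intuition, the enforcing `parameter` list in the definition's CUE template looks roughly like this (a minimal sketch; the full `worker` definition is shown above):

```cue
parameter: {
	// only these two fields are exposed to end users
	image: string
	cmd?: [...string]
}
```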
The similar extensible abstraction mechanism also applies to traits. For example, `name: autoscaler` in `frontend` means its trait specification (i.e. `properties` section) will be enforced by a `TraitDefinition` object named `autoscaler` as below:
@@ -135,7 +135,7 @@ spec:
}
```
All the definition objects are expected to be defined and installed by platform team. The end users will only focus on `Application` resource (either render it by tools or author it manually).
All the definition objects are expected to be defined and installed by the platform team. The end users will only focus on the `Application` resource.
## Conventions and "Standard Contract"


@@ -22,10 +22,14 @@ This template based workflow make it possible for platform team enforce best pra
Below are the core building blocks in KubeVela that make this happen.
## Application
## `Application`
The *Application* is the core API of KubeVela. It allows developers to work with a single artifact to capture the complete application definition with simplified primitives.
Having an "application" concept is important to for any app-centric platform to simplify administrative tasks and can serve as an anchor to avoid configuration drifts during operation. Also, as an abstraction object, `Application` provides a much simpler path for on-boarding Kubernetes capabilities without relying on low level details. For example, a developer will be able to model a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim the auto-scaling requirements without referring to the underlying KEDA ScaleObject.
### Why Choose `Application` as the Main Abstraction
Having an "application" concept is important to any developer-centric platform to simplify administrative tasks and can serve as an anchor to avoid configuration drifts during operation. Also, as an abstraction object, `Application` provides a much simpler path for on-boarding Kubernetes capabilities without relying on low level details. For example, a developer will be able to model a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim the auto-scaling requirements without referring to the underlying KEDA ScaleObject.
### Example
An example of a `website` application with two components (i.e. `frontend` and `backend`) could be modeled as below:
@@ -58,25 +62,27 @@ spec:
image: "fluentd"
```
### Components
## Building the Abstraction
For each of the components in `Application`, its `.type` field references the detailed definition of this component (such as its workload type, template, parameters, etc.), and `.settings` are the user input values to instantiate it. Some typical component types are *Long Running Web Service*, *One-time Off Task* or *Redis Database*.
Unlike most higher level platforms, the `Application` abstraction in KubeVela is fully extensible and does not even have a fixed schema. Instead, it is composed of building blocks (app components, traits, etc.) that allow you to onboard platform capabilities to this application definition with your own abstractions.
All supported component types expected to be pre-installed in the platform, or provided by component providers such as 3rd-party software vendors.
The building blocks used to abstract and model platform capabilities are named `ComponentDefinition` and `TraitDefinition`.
### Traits
### ComponentDefinition
Optionally, each component has a `.traits` section that augments its component instance with operational behaviors such as load balancing policy, network ingress routing, auto-scaling policies, or upgrade strategies, etc.
You can think of `ComponentDefinition` as a *template* for a workload type. It carries the template, parameter definitions, and workload characteristics as a declarative API resource.
Essentially, traits are operational features provided by the platform, note that KubeVela allows users bring their own traits as well. To attach a trait, use `.name` field to reference the specific trait definition, and `.properties` field to set detailed configuration values of the given trait.
Hence, the `Application` abstraction essentially declares how users want to **instantiate** given component definitions. Specifically, the `.type` field references the name of an installed `ComponentDefinition` and `.properties` are the user-set values to instantiate it.
We also reference component types and traits as *"capabilities"* in KubeVela.
Some typical component definitions are *Long Running Web Service*, *One-off Task* or *Redis Database*. All component definitions are expected to be pre-installed in the platform, or provided by component providers such as 3rd-party software vendors.
## Definitions
### TraitDefinition
Both the schemas of workload settings and trait properties in `Application` are enforced by a set of definition objects. The platform teams or component providers are responsible for registering and managing definition objects in target cluster following [workload definition](https://github.com/oam-dev/spec/blob/master/4.workload_types.md) and [trait definition](https://github.com/oam-dev/spec/blob/master/6.traits.md) specifications in Open Application Model (OAM).
Optionally, each component has a `.traits` section that augments the component instance with operational behaviors such as load balancing policy, network ingress routing, auto-scaling policies, or upgrade strategies, etc.
Specifically, definition object carries the templating information of this capability. Currently, KubeVela supports [Helm](http://helm.sh/) charts and [CUE](https://github.com/cuelang/cue) modules as definitions which means you could use KubeVela to deploy Helm charts and CUE modules as application components, or claim them as traits. More capability types support such as [Terraform](https://www.terraform.io/) is also work in progress.
You can think of traits as operational features provided by the platform. To attach a trait to a component instance, the user will use the `.type` field to reference the specific `TraitDefinition`, and the `.properties` field to set property values of the given trait. Similarly, `TraitDefinition` also allows you to define a *template* for operational features.
We also reference component definitions and trait definitions as *"capability definitions"* in KubeVela.
## Environment
Before releasing an application to production, it's important to test the code in testing/staging workspaces. In KubeVela, we describe these workspaces as "deployment environments" or "environments" for short. Each environment has its own configuration (e.g., domain, Kubernetes cluster and namespace, configuration data, access control policy, etc.) to allow users to create different deployment environments such as "test" and "production".
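As a sketch of how such an environment is created from the CLI (illustrative only — the exact `vela env` flags may differ across KubeVela versions):

```shell
# create a "test" environment with its own namespace and domain
$ vela env init test --namespace test --domain test.example.com

# switch to it before deploying
$ vela env sw test
```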

docs/en/cue/advanced.md (new file, 243 lines)

@@ -0,0 +1,243 @@
# Advanced Features
As a Data Configuration Language, CUE allows you to do some advanced templating magic in definition objects.
## Render Multiple Resources With a Loop
You can define the for-loop inside the `outputs`.
> Note that in this case the type of `parameter` field used in the for-loop must be a map.
Below is an example that will render multiple Kubernetes Services in one trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: expose
spec:
  schematic:
    cue:
      template: |
        parameter: {
          http: [string]: int
        }
        outputs: {
          for k, v in parameter.http {
            "\(k)": {
              apiVersion: "v1"
              kind:       "Service"
              spec: {
                selector: app: context.name
                ports: [{
                  port:       v
                  targetPort: v
                }]
              }
            }
          }
        }
```
The usage of this trait could be:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        ...
      traits:
        - type: expose
          properties:
            http:
              myservice1: 8080
              myservice2: 8081
```
## Execute HTTP Request in Trait Definition
A trait definition can send an HTTP request and capture the response to help you render the resource, using the keyword `processing`.
You can define HTTP request `method`, `url`, `body`, `header` and `trailer` in the `processing.http` section, and the returned data will be stored in `processing.output`.
> Please ensure the target HTTP server returns **JSON data**.
Then you can reference the returned data from `processing.output` in `patch` or `output/outputs`.
Below is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: auth-service
spec:
  schematic:
    cue:
      template: |
        parameter: {
          serviceURL: string
        }
        processing: {
          output: {
            token?: string
          }
          // The target server will return JSON data with `token` as the key.
          http: {
            method: *"GET" | string
            url:    parameter.serviceURL
            request: {
              body?: bytes
              header: {}
              trailer: {}
            }
          }
        }
        patch: {
          data: token: processing.output.token
        }
```
In the above example, the trait definition sends a request to get the `token` data, and then patches the data to the given component instance.
## Data Passing
A trait definition can read the generated API resources (rendered from `output` and `outputs`) of a given component definition.
> KubeVela will ensure the component definitions are always rendered before trait definitions.
Specifically, `context.output` contains the rendered workload API resource (whose GVK is indicated by `spec.workload` in the component definition), and `context.outputs.<xx>` contains all the other rendered API resources.
Below is an example for data passing:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: worker
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
        output: {
          apiVersion: "apps/v1"
          kind:       "Deployment"
          spec: {
            selector: matchLabels: {
              "app.oam.dev/component": context.name
            }
            template: {
              metadata: labels: {
                "app.oam.dev/component": context.name
              }
              spec: {
                containers: [{
                  name:  context.name
                  image: parameter.image
                  ports: [{containerPort: parameter.port}]
                  envFrom: [{
                    configMapRef: name: context.name + "game-config"
                  }]
                  if parameter["cmd"] != _|_ {
                    command: parameter.cmd
                  }
                }]
              }
            }
          }
        }
        outputs: gameconfig: {
          apiVersion: "v1"
          kind:       "ConfigMap"
          metadata: {
            name: context.name + "game-config"
          }
          data: {
            enemies: parameter.enemies
            lives:   parameter.lives
          }
        }
        parameter: {
          // +usage=Which image would you like to use for your service
          // +short=i
          image: string
          // +usage=Commands to run in the container
          cmd?: [...string]
          lives:   string
          enemies: string
          port:    int
        }
---
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: ingress
spec:
  schematic:
    cue:
      template: |
        parameter: {
          domain:     string
          path:       string
          exposePort: int
        }
        // trait template can have multiple outputs in one trait
        outputs: service: {
          apiVersion: "v1"
          kind:       "Service"
          spec: {
            selector: app: context.name
            ports: [{
              port:       parameter.exposePort
              targetPort: context.output.spec.template.spec.containers[0].ports[0].containerPort
            }]
          }
        }
        outputs: ingress: {
          apiVersion: "networking.k8s.io/v1beta1"
          kind:       "Ingress"
          metadata: {
            name: context.name
            labels: config: context.outputs.gameconfig.data.enemies
          }
          spec: {
            rules: [{
              host: parameter.domain
              http: {
                paths: [{
                  path: parameter.path
                  backend: {
                    serviceName: context.name
                    servicePort: parameter.exposePort
                  }
                }]
              }
            }]
          }
        }
```
In detail, during the rendering of the `worker` `ComponentDefinition`:
1. the rendered Kubernetes Deployment resource will be stored in `context.output`,
2. all other rendered resources will be stored in `context.outputs.<xx>`, with `<xx>` being the unique name in every `template.outputs`.
Thus, the `TraitDefinition` can read the rendered API resources (e.g. `context.outputs.gameconfig.data.enemies`) from the `context`.


@@ -1,12 +1,12 @@
# Learning CUE
This document will explain how to use [CUE](https://cuelang.org/) to encapsulate and abstract a given capability in Kubernetes and
make it available to end users to consume in `Application` CRD. Please make sure you have already learned about
`Application` custom resource before reading the following guide.
This document explains in detail how to use CUE to encapsulate and abstract a given capability in Kubernetes.
> Please make sure you have already learned about `Application` custom resource before reading the following guide.
## Overview
The reasons for KubeVela supports CUE as first class templating solution can be concluded as below:
The reasons why KubeVela supports CUE as a first-class solution to design abstractions can be concluded as below:
- **CUE is designed for large scale configuration.** CUE has the ability to understand a
configuration worked on by engineers across a whole company and to safely change a value that modifies thousands of objects in a configuration. This aligns very well with KubeVela's original goal to define and ship production level applications at web scale.
@@ -18,10 +18,11 @@ The reasons for KubeVela supports CUE as first class templating solution can be
## Prerequisites
Please make sure below CLIs are present in your environment:
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
* [`vela` (>v1.0.0)](https://kubevela.io/#/en/install?id=_3-optional-get-kubevela-cli)
## CUE CLI basic
## CUE CLI Basic
Below is basic CUE data; you can define both schema and value in the same file with almost the same format:
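For instance, a sketch of such a file (the concrete sample that followed here is elided by the diff; this one just illustrates the idea):

```cue
a: float // schema
a: 1.5   // value, unified with the schema above
b: int
b: 1
s: string // schema only: an incomplete value
d: [1, 2, 3]
```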
@@ -131,7 +132,7 @@ CUE has powerful CLI commands. Let's keep the data in a file named `first.cue` a
For now, you have learned all the useful CUE CLI operations.
## CUE language basic
## CUE Language Basic
* Data structure: Below is the basic data structure of CUE.
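An illustrative sketch of those structures (the original snippet is elided by the diff):

```cue
a: "hello"    // string
b: 42         // int
c: {          // struct
	d: true   // bool
	e: [1, 2] // list
}
```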
@@ -193,7 +194,7 @@ You can also define a more complex custom struct, such as:
It's widely used in KubeVela to define templates and do validation.
## CUE templating and reference
## CUE Templating and References
Let's try to define a CUE template with the knowledge just learned.
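For example, a minimal template in the spirit of this section (a sketch, not the elided original):

```cue
parameter: {
	name:  string
	image: string
}

output: {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: name: parameter.name
	spec: template: spec: containers: [{
		name:  parameter.name
		image: parameter.image
	}]
}
```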


@@ -1,19 +1,18 @@
# Use CUE to extend Component type
# Defining Components with CUE
In this section, it will introduce how to use CUE to extend your custom component types.
This section introduces how to use [CUE](https://cuelang.org/) to declare app components via `ComponentDefinition`.
Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates.md)
and the [basic CUE](./basic.md) knowledge related with KubeVela.
> Before reading this part, please make sure you've learned the [Definition CRD](../platform-engineers/definition-and-templates.md) in KubeVela.
## Write ComponentDefinition
## Declare `ComponentDefinition`
Here is a basic `ComponentDefinition` example:
Here is a CUE-based `ComponentDefinition` example which provides an abstraction for the stateless workload type:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: mydeploy
name: stateless
spec:
workload:
definition:
@@ -47,56 +46,13 @@ spec:
}
}
```
In detail:
- `.spec.workload` is required to indicate the workload type of this component.
- `.spec.schematic.cue.template` is a CUE template, specifically:
* The `output` filed defines the template for the abstraction.
* The `parameter` filed defines the template parameters, i.e. the configurable properties exposed in the `Application`abstraction (and JSON schema will be automatically generated based on them).
- `.spec.workload` is required to indicate the workload (apiVersion/kind) defined in the CUE.
- `.spec.schematic.cue.template` is a CUE template, it defines two keywords for KubeVela to build the application abstraction:
* The `parameter` defines the input parameters from end user, i.e. the configurable fields in the abstraction.
* The `output` defines the template for the abstraction.
## Create an Application using the CUE based ComponentDefinition
As long as you installed the ComponentDefinition object (e.g. `kubectl apply -f mydeploy.yaml`) with above template into
the K8s system, it can be used like below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
  name: website
spec:
  components:
    - name: backend
      type: mydeploy
      properties:
        image: crccheck/hello-world
        name: mysvc
```
It will finally render out the following object into the K8s system.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  selector:
    matchLabels:
      app.oam.dev/component: mysvc
  template:
    metadata:
      labels:
        app.oam.dev/component: mysvc
    spec:
      containers:
        - name: mysvc
          image: crccheck/hello-world
```
All information was rendered from the `output` keyword in CUE template.
And so on, a K8s Job as component type could be:
Let's declare another component named `task`, i.e. an abstraction for a run-to-completion workload.
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -114,35 +70,110 @@ spec:
    cue:
      template: |
        output: {
          apiVersion: "batch/v1"
          kind:       "Job"
          spec: {
            parallelism: parameter.count
            completions: parameter.count
            template: spec: {
              restartPolicy: parameter.restart
              containers: [{
                image: parameter.image
                if parameter["cmd"] != _|_ {
                  command: parameter.cmd
                }
              }]
            }
          }
        }
        parameter: {
          count:   *1 | int
          image:   string
          restart: *"Never" | string
          cmd?: [...string]
        }
```
## Context in CUE
Save the above `ComponentDefinition` objects to files and install them to your Kubernetes cluster by `$ kubectl apply -f stateless-def.yaml -f task-def.yaml`.
When you want to reference the runtime instance name for an app, you can use the `conext` keyword to define `parameter`.
## Declare an `Application`
KubeVela runtime provides a `context` struct including app name(`context.appName`) and component name(`context.name`).
The `ComponentDefinition` can be instantiated in `Application` abstraction as below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
  name: website
spec:
  components:
    - name: hello
      type: stateless
      properties:
        image: crccheck/hello-world
        name: mysvc
    - name: countdown
      type: task
      properties:
        image: centos:7
        cmd:
          - "bin/bash"
          - "-c"
          - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
```
### Under The Hood
<details>
The above application resource will generate and manage the following Kubernetes resources in your target cluster, based on the `output` in the CUE template and user input in the `Application` properties.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  ... # skip tons of metadata info
spec:
  selector:
    matchLabels:
      app.oam.dev/component: mysvc
  template:
    metadata:
      labels:
        app.oam.dev/component: mysvc
    spec:
      containers:
        - name: mysvc
          image: crccheck/hello-world
---
apiVersion: batch/v1
kind: Job
metadata:
  name: countdown
  ... # skip tons of metadata info
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: countdown
    spec:
      containers:
        - name: countdown
          image: 'centos:7'
          command:
            - bin/bash
            - '-c'
            - for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done
      restartPolicy: Never
```
</details>
## CUE `Context`
KubeVela allows you to reference the runtime information of your application via the `context` keyword.
The most widely used context values are the application name (`context.appName`) and the component name (`context.name`).
```cue
context: {
@@ -151,10 +182,9 @@ context: {
}
```
Values of the context will be automatically generated before the underlying resources are applied.
This is why you can reference the context variable as a value in the template.
For example, let's say you want to use the component name filled in by users as the container name in the workload instance:
```yaml
```cue
parameter: {
image: string
}
@@ -170,15 +200,28 @@ output: {
}
```
> Note that `context` information is auto-injected before resources are applied to the target cluster.
> TBD: full available information in CUE `context`.
## Composition
A workload type can contain multiple Kubernetes resources, for example, we can define a `webserver` workload type that is composed by Deployment and Service.
It's common that a component definition is composed of multiple API resources, for example, a `webserver` component that is composed of a Deployment and a Service. CUE is a great solution to achieve this in simplified primitives.
Note that in this case, you MUST define the template of component instance in `output` section, and leave all the other templates in `outputs` with resource name claimed. The format MUST be `outputs:<unique-name>:<full template>`.
> Another approach to do composition in KubeVela of course is [using Helm](/en/helm/component.md).
> This is how KubeVela know which resource is the running instance of the application component.
## How-to
Below is the example:
KubeVela requires you to define the template of the workload type in the `output` section, and leave all the other resource templates in the `outputs` section, with the format as below:
```cue
outputs: <unique-name>:
  <full template data>
```
> The reason for this requirement is that KubeVela needs to know when it is currently rendering a workload, so it can do some "magic" such as patching annotations/labels or other data during rendering.
Below is the example for `webserver` definition:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -274,7 +317,7 @@ spec:
}
```
Register the new workload to kubevela. And create an Application to use it:
The user could now declare an `Application` with it:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -290,12 +333,12 @@ spec:
image: crccheck/hello-world
port: 8000
env:
- name: "PORT"
value: "8000"
- name: "foo"
value: "bar"
cpu: "100m"
```
You will finally got the following resources:
It will generate and manage below API resources in target cluster:
```shell
$ kubectl get deployment
@@ -307,643 +350,6 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
hello-world-trait-7bdcff98f7 ClusterIP <your ip> <none> 8000/TCP 32s
```
## What's Next
## Extend CRD Operator as Component Type
Let's use [OpenKruise](https://github.com/openkruise/kruise) as an example of extending a CRD as a KubeVela component.
**The mechanism works for all CRD Operators**.
### Step 1: Install the CRD controller
You need to [install the CRD controller](https://github.com/openkruise/kruise#quick-start) into your K8s system.
### Step 2: Create Component Definition
To register CloneSet (one of the OpenKruise workloads) as a new workload type in KubeVela, the only thing needed is to create a `ComponentDefinition` object for it.
A full example can be found in this [cloneset.yaml](https://github.com/oam-dev/catalog/blob/master/registry/cloneset.yaml).
Several highlights are listed below.
#### 1. Describe The Workload Type
```yaml
...
  annotations:
    definition.oam.dev/description: "OpenKruise cloneset"
...
```
A one line description of this component type. It will be shown in helper commands such as `$ vela components`.
#### 2. Register its underlying CRD
```yaml
...
  workload:
    definition:
      apiVersion: apps.kruise.io/v1alpha1
      kind: CloneSet
...
```
This is how you register OpenKruise CloneSet's API resource (`apps.kruise.io/v1alpha1.CloneSet`) as the workload type.
KubeVela uses Kubernetes API resource discovery mechanism to manage all registered capabilities.
#### 3. Define Template
```yaml
...
  schematic:
    cue:
      template: |
        output: {
          apiVersion: "apps.kruise.io/v1alpha1"
          kind:       "CloneSet"
          metadata: labels: {
            "app.oam.dev/component": context.name
          }
          spec: {
            replicas: parameter.replicas
            selector: matchLabels: {
              "app.oam.dev/component": context.name
            }
            template: {
              metadata: labels: {
                "app.oam.dev/component": context.name
              }
              spec: {
                containers: [{
                  name:  context.name
                  image: parameter.image
                }]
              }
            }
          }
        }
        parameter: {
          // +usage=Which image would you like to use for your service
          // +short=i
          image: string
          // +usage=Number of pods in the cloneset
          replicas: *5 | int
        }
```
### Step 3: Register New Component Type to KubeVela
As long as the definition file is ready, you just need to apply it to Kubernetes.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/cloneset.yaml
```
And the new component type will immediately become available for developers to use in KubeVela.
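For instance, an `Application` using it could look like this (a sketch assuming the definition above is installed with the name `cloneset`; the `image` and `replicas` fields come from its `parameter` section):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: kruise-app
spec:
  components:
    - name: mysvc
      type: cloneset      # references the ComponentDefinition registered above
      properties:
        image: nginx:1.9.4
        replicas: 3
```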
## A Full Workflow to Debug and Test CUE Definitions
This section will explain how to test and debug CUE templates using the CUE CLI, as well as
dry-run your capability definitions via the KubeVela CLI.
### Combine Definition File
Usually we define the Definition file in two parts: one is the YAML part and the other is the CUE part.
Let's name the YAML part `def.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: microservice
  annotations:
    definition.oam.dev/description: "Describes a microservice combo Deployment with Service."
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
```
And the CUE template part `def.cue`; then we can use `cue fmt` / `cue vet` to format and validate the CUE file.
```cue
output: {
	// Deployment
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: {
		name:      context.name
		namespace: "default"
	}
	spec: {
		selector: matchLabels: {
			"app": context.name
		}
		template: {
			metadata: {
				labels: {
					"app":     context.name
					"version": parameter.version
				}
			}
			spec: {
				serviceAccountName:            "default"
				terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
				containers: [{
					name:  context.name
					image: parameter.image
					ports: [{
						if parameter.containerPort != _|_ {
							containerPort: parameter.containerPort
						}
						if parameter.containerPort == _|_ {
							containerPort: parameter.servicePort
						}
					}]
					if parameter.env != _|_ {
						env: [
							for k, v in parameter.env {
								name:  k
								value: v
							},
						]
					}
					resources: {
						requests: {
							if parameter.cpu != _|_ {
								cpu: parameter.cpu
							}
							if parameter.memory != _|_ {
								memory: parameter.memory
							}
						}
					}
				}]
			}
		}
	}
}

// Service
outputs: service: {
	apiVersion: "v1"
	kind:       "Service"
	metadata: {
		name: context.name
		labels: {
			"app": context.name
		}
	}
	spec: {
		type: "ClusterIP"
		selector: {
			"app": context.name
		}
		ports: [{
			port: parameter.servicePort
			if parameter.containerPort != _|_ {
				targetPort: parameter.containerPort
			}
			if parameter.containerPort == _|_ {
				targetPort: parameter.servicePort
			}
		}]
	}
}

parameter: {
	version:     *"v1" | string
	image:       string
	servicePort: int
	containerPort?: int
	// +usage=Optional duration in seconds the pod needs to terminate gracefully
	podShutdownGraceSeconds: *30 | int
	env: [string]: string
	cpu?:    string
	memory?: string
}
```
Finally, there's a script [`hack/vela-templates/mergedef.sh`](https://github.com/oam-dev/kubevela/blob/master/hack/vela-templates/mergedef.sh)
that can merge `def.yaml` and `def.cue` into a complete Definition.
```shell
$ ./hack/vela-templates/mergedef.sh def.yaml def.cue > componentdef.yaml
```
### Debug CUE template
#### use `cue vet` to validate
The `cue vet` validates CUE files well.
```shell
$ cue vet def.cue
output.metadata.name: reference "context" not found:
    ./def.cue:6:14
output.spec.selector.matchLabels.app: reference "context" not found:
    ./def.cue:11:11
output.spec.template.metadata.labels.app: reference "context" not found:
    ./def.cue:16:17
output.spec.template.spec.containers.name: reference "context" not found:
    ./def.cue:24:13
outputs.service.metadata.name: reference "context" not found:
    ./def.cue:62:9
outputs.service.metadata.labels.app: reference "context" not found:
    ./def.cue:64:11
outputs.service.spec.selector.app: reference "context" not found:
    ./def.cue:70:11
```
The `reference "context" not found` is a very common error, this is because the [`context`](workload-type.md#context) is
a KubeVela inner variable that will be existed in runtime.
But in order to check the correctness of the CUE Template more conveniently. We can add a fake `context` in `def.cue` for test.
Note that you need to remove it when you have finished the development and test.
```CUE
output: {
	// Deployment
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: {
		name:      context.name
		namespace: "default"
	}
	spec: {
		selector: matchLabels: {
			"app": context.name
		}
		template: {
			metadata: {
				labels: {
					"app":     context.name
					"version": parameter.version
				}
			}
			spec: {
				serviceAccountName:            "default"
				terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
				containers: [{
					name:  context.name
					image: parameter.image
					...
				}]
			}
		}
	}
}

// Service
outputs: service: {
	apiVersion: "v1"
	kind:       "Service"
	metadata: {
		name: context.name
		labels: {
			"app": context.name
		}
	}
	spec: {
		type: "ClusterIP"
		selector: {
			"app": context.name
		}
		...
	}
}

parameter: {
	version:     *"v1" | string
	image:       string
	servicePort: int
	containerPort?: int
	// +usage=Optional duration in seconds the pod needs to terminate gracefully
	podShutdownGraceSeconds: *30 | int
	env: [string]: string
	cpu?:    string
	memory?: string
}

context: {
	name: string
}
```
Then execute the command:
```shell
$ cue vet def.cue
some instances are incomplete; use the -c flag to show errors or suppress this message
```
`cue vet` only validates the data types. The `-c` flag validates that all regular fields are concrete.
We can fill in concrete data to verify the correctness of the template.
```shell
$ cue vet def.cue -c
context.name: incomplete value string
output.metadata.name: incomplete value string
output.spec.selector.matchLabels.app: incomplete value string
output.spec.template.metadata.labels.app: incomplete value string
output.spec.template.spec.containers.0.image: incomplete value string
output.spec.template.spec.containers.0.name: incomplete value string
output.spec.template.spec.containers.0.ports.0.containerPort: incomplete value int
outputs.service.metadata.labels.app: incomplete value string
outputs.service.metadata.name: incomplete value string
outputs.service.spec.ports.0.port: incomplete value int
outputs.service.spec.ports.0.targetPort: incomplete value int
outputs.service.spec.selector.app: incomplete value string
parameter.image: incomplete value string
parameter.servicePort: incomplete value int
```
Again, use mock data for the `context` and `parameter`: append the following data to your `def.cue` file.
```CUE
context: {
	name: "test-app"
}
parameter: {
	version:       "v2"
	image:         "image-address"
	servicePort:   80
	containerPort: 8000
	env: {"PORT": "8000"}
	cpu:    "500m"
	memory: "128Mi"
}
```
```
`cue` will verify the field types against the mock parameters.
You can try any data you want until the following command executes without complaints.
```shell
cue vet def.cue -c
```
#### use `cue export` to check the result
`cue export` can export the result in YAML. It helps you check the correctness of the template against the specified output result.
```shell
$ cue export -e output def.cue --out yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  namespace: default
spec:
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
        version: v2
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      containers:
        - name: test-app
          image: image-address
```
```shell
$ cue export -e outputs.service def.cue --out yaml
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  selector:
    app: test-app
  type: ClusterIP
```
## Dry-Run Application
After we have tested the CUE template well, we can use `vela system dry-run` to dry-run an application and test it in a real K8s environment.
This command will show you the real K8s resources that will be created.
First, we need to use `mergedef.sh` to merge the definition and CUE files.
```shell
$ mergedef.sh def.yaml def.cue > componentdef.yaml
```
Then, let's create an Application named `test-app.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: boutique
  namespace: default
spec:
  components:
    - name: frontend
      type: microservice
      properties:
        image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
        servicePort: 80
        containerPort: 8080
        env:
          PORT: "8080"
        cpu: "100m"
        memory: "64Mi"
```
Dry run the application by using `vela system dry-run`.
```shell
$ vela system dry-run -f test-app.yaml -d componentdef.yaml
---
# Application(boutique) -- Component(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.oam.dev/component: frontend
    app.oam.dev/name: boutique
    workload.oam.dev/type: microservice
  name: frontend
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        version: v1
    spec:
      containers:
        - env:
            - name: PORT
              value: "8080"
          image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
          name: frontend
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
    app.oam.dev/component: frontend
    app.oam.dev/name: boutique
    trait.oam.dev/resource: service
    trait.oam.dev/type: AuxiliaryWorkload
  name: frontend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: frontend
  type: ClusterIP
---
```
> Note: `vela system dry-run` will execute the same logic of `Application` controller in KubeVela.
> Hence it's helpful for you to test or debug.
### Import Kube Package
KubeVela automatically generates internal packages for all built-in K8s API resources based on K8s OpenAPI.
With the help of `vela system dry-run`, you can use the `import kube package` feature and test it locally.
So some default values in our `def.cue` can be simplified, and the imported package will help you validate the template:
```cue
import (
	apps "kube/apps/v1"
	corev1 "kube/v1"
)

// output is validated by Deployment.
output: apps.#Deployment
output: {
	metadata: {
		name:      context.name
		namespace: "default"
	}
	spec: {
		selector: matchLabels: {
			"app": context.name
		}
		template: {
			metadata: {
				labels: {
					"app":     context.name
					"version": parameter.version
				}
			}
			spec: {
				terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
				containers: [{
					name:  context.name
					image: parameter.image
					ports: [{
						if parameter.containerPort != _|_ {
							containerPort: parameter.containerPort
						}
						if parameter.containerPort == _|_ {
							containerPort: parameter.servicePort
						}
					}]
					if parameter.env != _|_ {
						env: [
							for k, v in parameter.env {
								name:  k
								value: v
							},
						]
					}
					resources: {
						requests: {
							if parameter.cpu != _|_ {
								cpu: parameter.cpu
							}
							if parameter.memory != _|_ {
								memory: parameter.memory
							}
						}
					}
				}]
			}
		}
	}
}

outputs: {
	service: corev1.#Service
}

// Service
outputs: service: {
	metadata: {
		name: context.name
		labels: {
			"app": context.name
		}
	}
	spec: {
		//type: "ClusterIP"
		selector: {
			"app": context.name
		}
		ports: [{
			port: parameter.servicePort
			if parameter.containerPort != _|_ {
				targetPort: parameter.containerPort
			}
			if parameter.containerPort == _|_ {
				targetPort: parameter.servicePort
			}
		}]
	}
}

parameter: {
	version:     *"v1" | string
	image:       string
	servicePort: int
	containerPort?: int
	// +usage=Optional duration in seconds the pod needs to terminate gracefully
	podShutdownGraceSeconds: *30 | int
	env: [string]: string
	cpu?:    string
	memory?: string
}
```
Then merge them.
```shell
mergedef.sh def.yaml def.cue > componentdef.yaml
```
And dry run.
```shell
vela system dry-run -f test-app.yaml -d componentdef.yaml
```
Please check the [Learning CUE](./basic.md) documentation for why we support CUE as a first-class templating solution and more details about using CUE efficiently.

docs/en/cue/patch-trait.md (new file, 430 lines)

@@ -0,0 +1,430 @@
# Patch Trait
**Patch** is a very common pattern of trait definitions, i.e. the app operators can amend/patch attributes of the component instance (normally the workload) to enable certain operational features such as sidecars or node affinity rules (and this should be done **before** the resources are applied to the target cluster).
This pattern is extremely useful when the component definition is provided by a third-party component provider (e.g. a software distributor), so app operators do not have the privilege to change its template.
> Note that even though a patch trait itself is defined with CUE, it can patch any component regardless of how its schematic is defined (i.e. CUE, Helm, or any other supported schematic approach).
Below is an example for `node-affinity` trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "affinity specify node affinity and toleration"
  name: node-affinity
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        patch: {
          spec: template: spec: {
            if parameter.affinity != _|_ {
              affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{
                matchExpressions: [
                  for k, v in parameter.affinity {
                    key:      k
                    operator: "In"
                    values:   v
                  },
                ]}]
            }
            if parameter.tolerations != _|_ {
              tolerations: [
                for k, v in parameter.tolerations {
                  effect:   "NoSchedule"
                  key:      k
                  operator: "Equal"
                  value:    v
                }]
            }
          }
        }
        parameter: {
          affinity?: [string]: [...string]
          tolerations?: [string]: string
        }
```
The patch trait above assumes the target component instance has the `spec.template.spec.affinity` field. Hence we need to use `appliesToWorkloads` to enforce that the trait only applies to workload types that have this field.
Now the users could declare they want to add node affinity rules to the component instance as below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "node-affinity"
          properties:
            affinity:
              server-owner: ["owner1","owner2"]
              resource-pool: ["pool1","pool2","pool3"]
            tolerations:
              resource-pool: "broken-pool1"
              server-owner: "old-owner"
```
### Known Limitations
By default, a patch trait in KubeVela leverages the CUE `merge` operation. It has the following known constraints though:
- Cannot handle conflicts.
  - For example, if a component instance has already been set with the value `replicas=5`, then any patch trait that patches the `replicas` field will fail, i.e. you should not expose the `replicas` field in its component definition schematic.
- Array lists in the patch will be merged following the order of index. Duplication of array list members cannot be handled. This can be fixed by the feature below.
### Strategy Patch
The `strategy patch` is useful for patching array lists.
> Note that this is not a standard CUE feature; KubeVela enhanced CUE in this case.
With the `//+patchKey=<key_name>` annotation, the merging logic of two array lists will not follow the CUE behavior. Instead, it will treat the list as an object and use a strategy-merge approach:
- if a duplicated key is found, the patch data will be merged with the existing values;
- if no duplication is found, the patch will be appended to the array list.
An example of a strategy patch trait looks like below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "add sidecar to the app"
  name: sidecar
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        patch: {
          // +patchKey=name
          spec: template: spec: containers: [parameter]
        }
        parameter: {
          name:  string
          image: string
          command?: [...string]
        }
```
In the above example we defined `patchKey` as `name`, which is the parameter key of the container name. If the workload doesn't have a container with the same name, the patch will be appended as a sidecar container in the `spec.template.spec.containers` array list. If the workload already has a container with the same name as this `sidecar` trait, a merge operation will happen instead of an append (which would lead to duplicated containers).
If `patch` and `outputs` both exist in one trait definition, the `patch` operation will be handled first and then the `outputs` will be rendered.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "expose the app"
  name: expose
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        patch: {spec: template: metadata: labels: app: context.name}
        outputs: service: {
          apiVersion: "v1"
          kind:       "Service"
          metadata: name: context.name
          spec: {
            selector: app: context.name
            ports: [
              for k, v in parameter.http {
                port:       v
                targetPort: v
              },
            ]
          }
        }
        parameter: {
          http: [string]: int
        }
```
So the above trait, which attaches a Service to the given component instance, will patch a corresponding label to the workload first and then render the Service resource based on the template in `outputs`.
## More Use Cases of Patch Trait
Patch traits are in general pretty useful for separating operational concerns from the component definition; here are some more examples.
### Add Labels
For example, patch a common label (virtual group) onto the component instance.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "Add virtual group labels"
  name: virtualgroup
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        patch: {
          spec: template: {
            metadata: labels: {
              if parameter.scope == "namespace" {
                "app.namespace.virtual.group": parameter.group
              }
              if parameter.scope == "cluster" {
                "app.cluster.virtual.group": parameter.group
              }
            }
          }
        }
        parameter: {
          group: *"default" | string
          scope: *"namespace" | string
        }
```
Then it could be used like:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
  ...
      traits:
        - type: virtualgroup
          properties:
            group: "my-group1"
            scope: "cluster"
```
### Add Annotations
Similar to common labels, you could also patch the component instance with annotations. The annotation value should be a JSON string.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "Specify auto scale by annotation"
  name: kautoscale
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        import "encoding/json"

        patch: {
          metadata: annotations: {
            "my.custom.autoscale.annotation": json.Marshal({
              "minReplicas": parameter.min
              "maxReplicas": parameter.max
            })
          }
        }
        parameter: {
          min: *1 | int
          max: *3 | int
        }
```
### Add Pod Environments
Injecting system environments into a Pod is also a very common use case.
> This case relies on the strategy merge patch, so don't forget to add `+patchKey=name` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "add env into your pods"
  name: env
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        patch: {
          spec: template: spec: {
            // +patchKey=name
            containers: [{
              name: context.name
              // +patchKey=name
              env: [
                for k, v in parameter.env {
                  name:  k
                  value: v
                },
              ]
            }]
          }
        }
        parameter: {
          env: [string]: string
        }
```
### Inject `ServiceAccount` Based on External Auth Service
In this example, the service account is dynamically requested from an authentication service and patched into the service.
This example puts the UID token in the HTTP header, but you can also use the request body if you prefer.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "dynamically specify service account"
  name: service-account
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        processing: {
          output: {
            credentials?: string
          }
          http: {
            method: *"GET" | string
            url:    parameter.serviceURL
            request: {
              header: {
                "authorization.token": parameter.uidtoken
              }
            }
          }
        }
        patch: {
          spec: template: spec: serviceAccountName: processing.output.credentials
        }
        parameter: {
          uidtoken:   string
          serviceURL: string
        }
```
The `processing.http` section is an advanced feature that allows the trait definition to send an HTTP request while rendering the resource. Please refer to the [Execute HTTP Request in Trait Definition](#Processing-Trait) section for more details.
### Add `InitContainer`
[`InitContainer`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) is useful for pre-defining operations in an image and running them before the app container starts.
Below is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "add an init container and use shared volume with pod"
  name: init-container
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        patch: {
          spec: template: spec: {
            // +patchKey=name
            containers: [{
              name: context.name
              // +patchKey=name
              volumeMounts: [{
                name:      parameter.mountName
                mountPath: parameter.appMountPath
              }]
            }]
            initContainers: [{
              name:  parameter.name
              image: parameter.image
              if parameter.command != _|_ {
                command: parameter.command
              }
              // +patchKey=name
              volumeMounts: [{
                name:      parameter.mountName
                mountPath: parameter.initMountPath
              }]
            }]
            // +patchKey=name
            volumes: [{
              name: parameter.mountName
              emptyDir: {}
            }]
          }
        }
        parameter: {
          name:  string
          image: string
          command?: [...string]
          mountName:     *"workdir" | string
          appMountPath:  string
          initMountPath: string
        }
```
The usage could be:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "init-container"
          properties:
            name: "install-container"
            image: "busybox"
            command:
              - wget
              - "-O"
              - "/work-dir/index.html"
              - http://info.cern.ch
            mountName: "workdir"
            appMountPath: "/usr/share/nginx/html"
            initMountPath: "/work-dir"
```


@@ -1,6 +1,6 @@
# Advanced Features
# Status Write Back
By using CUE as the encapsulation method, some advanced features such as status write back can be easily achieved.
This documentation will explain how to achieve status write back by using CUE templates in definition objects.
## Health Check


@@ -1,13 +1,62 @@
# Defining Traits in CUE
# Defining Traits
In this section we will introduce how to define a Trait with CUE template.
In this section we will introduce how to define a trait.
## Composition
Defining a *Trait* with CUE template is a bit different from *Workload Type*: a trait MUST use `outputs` keyword instead of `output` in template.
## Simple Trait
With the help of CUE templates, it is very natural to compose multiple Kubernetes resources in one trait.
Similarly, the format MUST be `outputs:<unique-name>:<full template>`.
A trait in KubeVela can be defined by simply referencing an existing Kubernetes API resource.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: ingress
spec:
  definitionRef:
    name: ingresses.networking.k8s.io
```
Let's attach this trait to a component instance in `Application`:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        cmd:
          - node
          - server.js
        image: oamdev/testapp:v1
        port: 8080
      traits:
        - type: ingress
          properties:
            rules:
              - http:
                  paths:
                    - path: /testpath
                      pathType: Prefix
                      backend:
                        service:
                          name: test
                          port:
                            number: 80
```
Note that in this case, all fields in the referenced resource's `spec` will be exposed to the end user and no metadata (e.g. `annotations` etc.) is allowed to be set in trait properties. Hence this approach is normally used when you want to bring your own CRD and controller as a trait, and it does not rely on `annotations` etc. as tuning knobs.
## Using CUE as Trait Schematic
The recommended approach is defining a CUE-based schematic for the trait as well. In this case, it comes with abstraction and you have full flexibility to template any resources and fields as you want. Note that KubeVela requires all traits to be defined in the `outputs` section (not `output`) of the CUE template, with the format as below:
```cue
outputs: <unique-name>:
  <full template data>
```
Below is an example for `ingress` trait.
@@ -65,7 +114,7 @@ spec:
}
```
It can be used in the application object like below:
Let's attach this trait to a component instance in `Application`:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -90,662 +139,4 @@ spec:
"/api": 8080
```
### Generate Multiple Resources with Loop
You can define the for-loop inside the `outputs`; the type of the `parameter` field used in the for-loop must be a map.
Below is an example that will generate multiple Kubernetes Services in one trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: expose
spec:
  schematic:
    cue:
      template: |
        parameter: {
          http: [string]: int
        }
        outputs: {
          for k, v in parameter.http {
            "\(k)": {
              apiVersion: "v1"
              kind:       "Service"
              spec: {
                selector: app: context.name
                ports: [{
                  port:       v
                  targetPort: v
                }]
              }
            }
          }
        }
```
The usage of this trait could be:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        ...
      traits:
        - type: expose
          properties:
            http:
              myservice1: 8080
              myservice2: 8081
```
## Patch Trait
You could also use the keyword `patch` to patch data to the component instance (before the resource is applied) and claim this behavior as a trait.
Below is an example for `node-affinity` trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "affinity specify node affinity and toleration"
  name: node-affinity
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        patch: {
          spec: template: spec: {
            if parameter.affinity != _|_ {
              affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{
                matchExpressions: [
                  for k, v in parameter.affinity {
                    key:      k
                    operator: "In"
                    values:   v
                  },
                ]}]
            }
            if parameter.tolerations != _|_ {
              tolerations: [
                for k, v in parameter.tolerations {
                  effect:   "NoSchedule"
                  key:      k
                  operator: "Equal"
                  value:    v
                }]
            }
          }
        }
        parameter: {
          affinity?: [string]: [...string]
          tolerations?: [string]: string
        }
```
You can use it like:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "node-affinity"
          properties:
            affinity:
              server-owner: ["owner1","owner2"]
              resource-pool: ["pool1","pool2","pool3"]
            tolerations:
              resource-pool: "broken-pool1"
              server-owner: "old-owner"
```
The patch trait above assumes the component instance has the `spec.template.spec.affinity` schema. Hence we need to use it with the `appliesToWorkloads` field, which enforces that the trait is only used by the specified workload types.
By default, the patch trait in KubeVela relies on the CUE `merge` operation. It has the following known constraints:
* Cannot handle conflicts. For example, if a field already has a final value `replicas=5`, then a patch trait that patches `replicas=1` will conflict and fail. It only works when `replicas` is not finalized before the patch.
* Array lists in the patch will be merged following the order of index. Duplication of array list members cannot be handled.
### Strategy Patch Trait
The `strategy patch` is a special patch logic for patching array lists. This is supported **only** in KubeVela (i.e. it is not a standard CUE feature).
In order to make it work, you need to use the annotation `//+patchKey=<key_name>` in the template.
With this annotation, the merging logic of two array lists will not follow the CUE behavior. Instead, the list will be treated as an object and a strategy-merge approach is used: if the values of the key are equal, the patch data will be merged into that entry; if no equal value is found, the patch will be appended to the array list.
An example of a strategy patch trait looks like below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  annotations:
    definition.oam.dev/description: "add sidecar to the app"
  name: sidecar
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        patch: {
          // +patchKey=name
          spec: template: spec: containers: [parameter]
        }
        parameter: {
          name:  string
          image: string
          command?: [...string]
        }
```
The `patchKey` is `name`, which represents the container name in this example. In this case, if the workload already has a container with the same name as this `sidecar` trait, it will be a merge operation. If the workload doesn't have a container with the same name, the sidecar container will be appended to the `spec.template.spec.containers` array list.
### Patch The Trait
If `patch` and `outputs` both exist in one trait, the patch part will execute first and then the output objects will be rendered.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "service the app"
name: kservice
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {spec: template: metadata: labels: app: context.name}
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: name: context.name
spec: {
selector: app: context.name
ports: [
for k, v in parameter.http {
port: v
targetPort: v
},
]
}
}
parameter: {
http: [string]: int
}
```
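As an illustrative sketch (the port map below is an assumption, not part of the original sample), an application could attach this `kservice` trait like so:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "kservice"
          properties:
            http:
              server: 8080
```
The patch labels the workload's Pods first, and the `Service` above is then rendered with a matching selector.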
## Processing Trait
A trait can also help you do some processing jobs. Currently, HTTP requests are supported.
The keyword is `processing`; inside `processing`, there are two keywords: `output` and `http`.
You can define the HTTP request `method`, `url`, `body`, `header` and `trailer` in the `http` section.
KubeVela will send a request using this information, and the requested server shall respond with a **JSON result**.
The `output` section is used to match the JSON result: fields with correlating names will be filled into it automatically.
Then you can use the requested data from `processing.output` in `patch` or `output/outputs`.
Below is an example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: auth-service
spec:
schematic:
cue:
template: |
parameter: {
serviceURL: string
}
processing: {
output: {
token?: string
}
// task shall output a json result and output will correlate fields by name.
http: {
method: *"GET" | string
url: parameter.serviceURL
request: {
body?: bytes
header: {}
trailer: {}
}
}
}
patch: {
data: token: processing.output.token
}
```
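As a hedged usage sketch, assuming the authentication service responds with a JSON body like `{"token": "..."}` (the URL below is hypothetical):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "auth-service"
          properties:
            serviceURL: "https://auth.example.com/token"
```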
## Simple Data Passing
A trait can fill itself with the data of the workload's `output` and `outputs`.
There are two keywords, `output` and `outputs`, in the rendering context.
You can use `context.output` to refer to the workload object, and `context.outputs.<xx>` to refer to a trait object.
Please make sure every trait resource name is unique, otherwise the former data will be overwritten by the latter one.
Below is an example:
1. The main workload object (a Deployment in this example) will be rendered into `context.output` before rendering traits.
2. `context.outputs.<xx>` will keep all the rendered trait data, which can be used by the traits rendered after them.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
ports: [{containerPort: parameter.port}]
envFrom: [{
configMapRef: name: context.name + "game-config"
}]
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
}]
}
}
}
}
outputs: gameconfig: {
apiVersion: "v1"
kind: "ConfigMap"
metadata: {
name: context.name + "game-config"
}
data: {
enemies: parameter.enemies
lives: parameter.lives
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Commands to run in the container
cmd?: [...string]
lives: string
enemies: string
port: int
}
---
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
name: ingress
spec:
schematic:
cue:
template: |
parameter: {
domain: string
path: string
exposePort: int
}
// trait template can have multiple outputs in one trait
outputs: service: {
apiVersion: "v1"
kind: "Service"
spec: {
selector:
app: context.name
ports: [{
port: parameter.exposePort
targetPort: context.output.spec.template.spec.containers[0].ports[0].containerPort
}]
}
}
outputs: ingress: {
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: context.name
labels: config: context.outputs.gameconfig.data.enemies
spec: {
rules: [{
host: parameter.domain
http: {
paths: [{
path: parameter.path
backend: {
serviceName: context.name
servicePort: parameter.exposePort
}
}]
}
}]
}
}
```
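To tie both definitions together, here is a usage sketch; the domain, image and values are illustrative assumptions:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: my-worker
      type: worker
      properties:
        image: oamdev/testapp:v1
        lives: "3"
        enemies: "aliens"
        port: 8080
      traits:
        - type: "ingress"
          properties:
            domain: "test.example.com"
            path: "/"
            exposePort: 80
```
The `ingress` trait reads the container port from `context.output` and the `enemies` value from `context.outputs.gameconfig`, so it is rendered after the workload and the `gameconfig` output.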
## More Use Cases for Patch Trait
The patch trait can be very powerful; here are some more advanced use cases.
### Add Labels
For example, patch a common label (a virtual group) onto the component workload.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Add virtual group labels"
name: virtualgroup
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: {
metadata: labels: {
if parameter.type == "namespace" {
"app.namespace.virtual.group": parameter.group
}
if parameter.type == "cluster" {
"app.cluster.virtual.group": parameter.group
}
}
}
}
parameter: {
group: *"default" | string
type: *"namespace" | string
}
```
Then it could be used like:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
spec:
...
traits:
- type: virtualgroup
properties:
group: "my-group1"
type: "cluster"
```
In this example, different `type` values will use different label keys.
### Add Annotations
Similar to common labels, you could also patch the component workload with annotations. The annotation value will be a JSON string.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Specify auto scale by annotation"
name: kautoscale
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
import "encoding/json"
patch: {
metadata: annotations: {
"my.custom.autoscale.annotation": json.Marshal({
"minReplicas": parameter.min
"maxReplicas": parameter.max
})
}
}
parameter: {
min: *1 | int
max: *3 | int
}
```
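A usage sketch for this `kautoscale` trait (the values are examples):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "kautoscale"
          properties:
            min: 2
            max: 10
```
The resulting annotation value will be the JSON string `{"minReplicas":2,"maxReplicas":10}`.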
### Add Pod ENV
Injecting system environment variables into Pods is also a very common use case.
An example is shown below. This case relies on the strategy merge patch, so don't forget to add `+patchKey=name` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add env into your pods"
name: env
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
parameter: {
env: [string]: string
}
```
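For example, a hypothetical application could inject two variables like below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "env"
          properties:
            env:
              DB_HOST: "mysql.default.svc.cluster.local"
              LOG_LEVEL: "info"
```
Because of the `+patchKey=name` annotations, the variables are merged into the existing container named after the component instead of conflicting with it.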
### Dynamic Pod Service Account
In this example, the service account is dynamically requested from an authentication service and patched into the workload.
This example puts the UID token in the HTTP header, but you can also use the request body. You may refer to the [processing](#Processing-Trait) section for more details.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "dynamically specify service account"
name: service-account
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
processing: {
output: {
credentials?: string
}
http: {
method: *"GET" | string
url: parameter.serviceURL
request: {
header: {
"authorization.token": parameter.uidtoken
}
}
}
}
patch: {
spec: template: spec: serviceAccountName: processing.output.credentials
}
parameter: {
uidtoken: string
serviceURL: string
}
```
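A hedged usage sketch, assuming the authentication service responds with a JSON body like `{"credentials": "<service-account-name>"}` (the token and URL below are hypothetical):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/testapp:v1
      traits:
        - type: "service-account"
          properties:
            uidtoken: "example-uid-token"
            serviceURL: "https://auth.example.com/service-accounts"
```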
### Add Init Container
An init container is useful for pre-defining operations in an image and running them before the app container starts.
> Please check [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container) for more detail about Init Container.
Below is an example of an init container trait:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "add an init container and use shared volume with pod"
name: init-container
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.appMountPath
}]
}]
initContainers: [{
name: parameter.name
image: parameter.image
if parameter.command != _|_ {
command: parameter.command
}
// +patchKey=name
volumeMounts: [{
name: parameter.mountName
mountPath: parameter.initMountPath
}]
}]
// +patchKey=name
volumes: [{
name: parameter.mountName
emptyDir: {}
}]
}
}
parameter: {
name: string
image: string
command?: [...string]
mountName: *"workdir" | string
appMountPath: string
initMountPath: string
}
```
This case must rely on the strategy merge patch. For every array list, we add a `// +patchKey=name` annotation to avoid conflicts.
The usage could be:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: express-server
type: webservice
properties:
image: oamdev/testapp:v1
traits:
- type: "init-container"
properties:
name: "install-container"
image: "busybox"
command:
- wget
- "-O"
- "/work-dir/index.html"
- http://info.cern.ch
mountName: "workdir"
appMountPath: "/usr/share/nginx/html"
initMountPath: "/work-dir"
```
CUE based trait definitions can also enable many other advanced scenarios such as patching and data passing. They will be explained in detail in the following documentation.
@@ -1,14 +1,14 @@
# Use Helm To Extend a Component type
# Defining Components with Helm
This documentation explains how to use Helm chart to define an application component.
In this section, we will introduce how to declare Helm charts as app components via `ComponentDefinition`.
Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates.md).
> Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates.md).
## Prerequisite
* [fluxcd/flux2](../install.md#3-optional-install-flux2): make sure you have installed flux2 as described in the [installation guide](https://kubevela.io/#/en/install).
## Write ComponentDefinition
## Declare `ComponentDefinition`
Here is an example `ComponentDefinition` showing how to use Helm as the schematic module.
@@ -35,17 +35,12 @@ spec:
url: "http://oam.dev/catalog/"
```
Just like using CUE as the schematic module, we also have some rules and contracts for using a Helm chart as the schematic module.
In detail:
- `.spec.workload` is required to indicate the workload type of this Helm based component. Please also check for [Known Limitations](/en/helm/known-issues?id=workload-type-indicator) if you have multiple workloads packaged in one chart.
- `.spec.schematic.helm` contains information of Helm `release` and `repository` which leverages `fluxcd/flux2`.
- i.e. the spec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and the spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
- `.spec.workload` is required to indicate the main workload (apiVersion/kind) in your Helm chart.
Only one workload is allowed in one Helm chart.
For example, in our sample chart, the core workload is `deployments.apps/v1`; other resources will also be deployed, but KubeVela's mechanisms won't work for them.
- `.spec.schematic.helm` contains information of Helm release & repository.
There are two fields `release` and `repository` in the `.spec.schematic.helm` section, these two fields align with the APIs of `fluxcd/flux2`. Spec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
In a word, just like the fields shown in the sample, the helm schematic module describes a specific Helm chart release and its repository.
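For reference, a complete definition following these contracts might look like the sketch below; the chart name, version and repository URL are illustrative:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: webapp-chart
  annotations:
    definition.oam.dev/description: helm chart for webapp
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    helm:
      release:
        chart:
          spec:
            chart: "podinfo"
            version: "5.1.4"
      repository:
        url: "http://oam.dev/catalog/"
```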
## Create an Application using the helm based ComponentDefinition
## Declare an `Application`
Here is an example `Application`.
@@ -64,36 +59,29 @@ spec:
tag: "5.1.2"
```
Helm module workloads will use the data in `properties` as [Helm chart values](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml).
You can learn the schema of these settings by reading the `README.md` of the Helm chart, and the schema totally aligns with the chart's
[`values.yaml`](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml).
The component `properties` is exactly the [overlay values](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml) of the Helm chart.
Helm v3 has [support for validating values](https://helm.sh/docs/topics/charts/#schema-files) in a chart's `values.yaml` file with JSON schemas.
Vela will try to fetch the `values.schema.json` file from the chart archive and [save the schema into a ConfigMap](https://kubevela.io/#/en/platform-engineers/openapi-v3-json-schema.md) which can be consumed later through UI or CLI.
If `values.schema.json` is not provided by the chart author, Vela will generate an OpenAPI v3 JSON schema based on the `values.yaml` file automatically.
Deploy the application, and after several minutes (it takes time to fetch the Helm chart from the repo, render and install it), you can check that the Helm release is installed.
Deploy the application, and after several minutes (it may take time to fetch the Helm chart), you can check that the Helm release is installed.
```shell
$ helm ls -A
myapp-demo-podinfo default 1 2021-03-05 02:02:18.692317102 +0000 UTC deployed podinfo-5.1.4 5.1.4
```
Check the deployment defined in the chart has been created successfully.
Check the workload defined in the chart has been created successfully.
```shell
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
myapp-demo-podinfo 1/1 1 1 66m
```
Check the values(`image.tag = 5.1.2`) from application's `settings` are assigned to the chart.
Check the values (`image.tag = 5.1.2`) from application's `properties` are assigned to the chart.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
"ghcr.io/stefanprodan/podinfo:5.1.2"
```
### Generate Form from Helm Based Components
KubeVela will automatically generate an OpenAPI v3 JSON schema based on [`values.schema.json`](https://helm.sh/docs/topics/charts/#schema-files) in the Helm chart, and store it in a `ConfigMap` in the same `namespace` as the definition object. Furthermore, if `values.schema.json` is not provided by the chart author, KubeVela will generate the OpenAPI v3 JSON schema based on its `values.yaml` file automatically.
Please check the [Generate Forms from Definitions](en/platform-engineers/openapi-v3-json-schema) guide for more detail of using this schema to render GUI forms.
@@ -1,20 +1,13 @@
# Limitations and Known Issues
# Known Limitations and Issues
Here are some known issues for using a Helm chart as an application component. Please note most of these restrictions will be fixed over time.
## Limitations
## Only one main workload in the chart
Here are some known limitations for using Helm chart as application component.
The chart must have exactly one workload being regarded as the **main** workload. In this context, `main workload` means the workload that will be tracked by KubeVela controllers, applied with traits and added into scopes. Only the `main workload` will benefit from KubeVela with rollout, revision, traffic management, etc.
### Workload Type Indicator
To tell KubeVela which one is the main workload, you must follow these two steps:
Following microservice best practices, KubeVela recommends that only one workload resource be present in one Helm chart. Please split your "super" Helm chart into multiple charts (i.e. components). Essentially, KubeVela relies on the `workload` field in the component definition to indicate the workload type it needs to take care of, for example:
#### 1. Declare main workload's resource definition
The field `.spec.definitionRef` in `ComponentDefinition` is used to record the
resource definition of the main workload.
The name should be in the format: `<resource>.<group>`.
For example, the Deployment resource should be defined as:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
@@ -25,7 +18,6 @@ spec:
apiVersion: apps/v1
kind: Deployment
```
The CloneSet workload resource should be defined as:
```yaml
...
spec:
@@ -35,33 +27,21 @@ spec:
kind: Cloneset
```
#### 2. Qualified full name of the main workload
Note that KubeVela won't fail if multiple workload types are packaged in one chart; the issue is that further operational behaviors such as rollout, revisions, and traffic management can only take effect on the indicated workload type.
The name of the main workload should be templated with [a default fully
qualified app
name](https://github.com/helm/helm/blob/543364fba59b0c7c30e38ebe0f73680db895abb6/pkg/chartutil/create.go#L415). DO NOT assign any value to `.Values.fullnameOverride`.
### Always Use Fully Qualified Name
> Also, Helm highly recommends that new charts are created via the `$ helm create` command so the template names are automatically defined as per this best practice.
The name of the workload should be templated with the [fully qualified application name](https://github.com/helm/helm/blob/543364fba59b0c7c30e38ebe0f73680db895abb6/pkg/chartutil/create.go#L415), and please do NOT assign any value to `.Values.fullnameOverride`. As a best practice, Helm also highly recommends that new charts be created via the `$ helm create` command so the template names are automatically defined as per this best practice.
## Upgrade the application
### Control the Application Upgrade
#### Rollout strategy
For now, Helm based components cannot benefit from [application level rollout strategy](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow).
So currently, in-place upgrade by modifying the application specification directly is the only way to upgrade Helm based components; no advanced rollout strategy can be assigned to them. Please check [this sample](./trait.md#update-an-applicatiion).
#### Changing `settings` will trigger Helm release upgrade
For a Helm based component, `.spec.components.settings` is the way users override the default values of the chart, so any changes applied to `settings` will trigger a Helm release upgrade.
This process is handled by Helm and `Flux2/helm-controller`, hence you can define remediation
strategies in the schematic according to [fluxcd/helmrelease API
doc](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md#upgraderemediation)
and [spec doc](https://toolkit.fluxcd.io/components/helm/helmreleases/#configuring-failure-remediation)
Changes made to the component `properties` will trigger a Helm release upgrade. This process is handled by Flux v2 Helm controller, hence you can define remediation
strategies in the schematic based on [Helm Release
documentation](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md#upgraderemediation)
and [specification](https://toolkit.fluxcd.io/components/helm/helmreleases/#configuring-failure-remediation)
in case failure happens during this upgrade.
For example
For example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
@@ -85,8 +65,16 @@ spec:
```
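As a minimal sketch (assuming the Flux v2 `HelmReleaseSpec` fields for upgrade remediation), the relevant part of such a schematic could look like:
```yaml
spec:
  schematic:
    helm:
      release:
        chart:
          spec:
            chart: "podinfo"
            version: "5.1.4"
        # retry a failed upgrade up to 3 times, then remediate the last failure
        upgrade:
          remediation:
            retries: 3
            remediateLastFailure: true
      repository:
        url: "http://oam.dev/catalog/"
```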
> Note: currently, it's hard to get helpful information of a living Helm release to figure out what happened if upgrading failed. We will enhance the observability to help users track the situation of Helm release in application level.
One issue though: for now it's hard to get helpful information about a living Helm release to figure out what happened if an upgrade failed. We will enhance observability to help users track the status of Helm releases at the application level.
#### Changing `traits` may make Pods restart
## Issues
Traits work on a Helm based component in the same way as on a CUE based component, i.e. changes on traits may impact the main workload instance. Hence, the Pods belonging to this workload instance may restart twice during an upgrade: once by the Helm upgrade, and once caused by traits.
The known issues will be fixed in following releases.
### Rollout Strategy
For now, Helm based components cannot benefit from [application level rollout strategy](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow). As shown in [this sample](./trait.md#update-an-applicatiion), if the application is updated, it can only be rollouted directly without canary or blue-green approach.
### Updating Traits Properties may Also Lead to Pods Restart
Changes to trait properties may impact the component instance, and the Pods belonging to this workload instance will restart. In CUE based components this is avoidable, as KubeVela has full control over the rendering process of the resources, though in Helm based components it's currently deferred to the Flux v2 controller.
@@ -1,8 +1,9 @@
# Attach Traits to Helm Based Components
Most traits in KubeVela can be attached to a Helm based component seamlessly. In this sample application below, we add two traits, [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)
and [virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/helm-module/virtual-group-td.yaml),
to a Helm based component.
Traits in KubeVela can be attached to Helm based component seamlessly.
In this sample application below, we add two traits, [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)
and [virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/helm-module/virtual-group-td.yaml) to a Helm based component.
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -27,16 +28,15 @@ spec:
type: "cluster"
```
> Note: when using the trait system with a Helm module workload, please *make sure the target workload in your Helm chart strictly follows the qualified-full-name convention in Helm.* [For example in this chart](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/deployment.yaml#L4), the workload name is composed of the [release name and chart name](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/_helpers.tpl#L13).
> Note: when using traits with a Helm based component, please *make sure the target workload in your Helm chart strictly follows the qualified-full-name convention in Helm.* [For example in this chart](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/deployment.yaml#L4), the workload name is composed of the [release name and chart name](https://github.com/captainroy-hy/podinfo/blob/c2b9603036f1f033ec2534ca0edee8eff8f5b335/charts/podinfo/templates/_helpers.tpl#L13).
> This is because KubeVela relies on the name to discover the workload; otherwise it cannot apply traits to the workload. KubeVela will generate a release name based on your `Application` name and component name automatically, so you need to make sure to never override the full name template in your Helm chart.
## Verify traits work correctly
You may need to wait a bit before checking that the trait works after deploying the application,
because KubeVela may not discover the target workload immediately when it's created, due to the reconciliation interval.
> You may need to wait a few seconds before checking that the trait is attached, because of the reconciliation interval.
Check the scaler trait.
Check the `scaler` trait takes effect.
```shell
$ kubectl get manualscalertrait
NAME AGE
@@ -47,7 +47,7 @@ $ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
4
```
Check the virtualgroup trait.
Check the `virtualgroup` trait.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
{
@@ -56,11 +56,11 @@ $ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata
}
```
## Update an application
## Update Application
After the application is deployed and workloads/traits are created successfully,
you can update the application, and corresponding changes will be applied to the
workload.
workload instances.
Let's make several changes on the configuration of the sample application.
@@ -89,7 +89,7 @@ spec:
Apply the new configuration and check the results after several minutes.
Check the new values(`image.tag = 5.1.3`) from application's `settings` are assigned to the chart.
Check the new values (`image.tag = 5.1.3`) from application's `properties` are assigned to the chart.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
"ghcr.io/stefanprodan/podinfo:5.1.3"
@@ -101,13 +101,13 @@ NAME NAMESPACE REVISION UPDATED ST
myapp-demo-podinfo default 2 2021-03-15 08:52:00.037690148 +0000 UTC deployed podinfo-5.1.4 5.1.4
```
Check the scaler trait.
Check the `scaler` trait.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.replicas
2
```
Check the virtualgroup trait.
Check the `virtualgroup` trait.
```shell
$ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata.labels
{
@@ -116,9 +116,9 @@ $ kubectl get deployment myapp-demo-podinfo -o json | jq .spec.template.metadata
}
```
## Delete a trait
## Detach Trait
Let's have a try removing a trait from the application.
Let's try detaching a trait from the application.
```yaml
apiVersion: core.oam.dev/v1alpha2
@@ -143,7 +143,7 @@ spec:
type: "cluster"
```
Apply the configuration and check `manualscalertrait` has been deleted.
Apply the application and check `manualscalertrait` has been deleted.
```shell
$ kubectl get manualscalertrait
No resources found
@@ -1,12 +1,12 @@
# Use Raw Kubernetes Resource To Extend a Component type
# Defining Components with Raw Template
This documentation explains how to use raw K8s resource to define an application component.
In this section, we will introduce how to use raw templates to declare app components via `ComponentDefinition`.
Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates.md).
> Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates.md).
## Write ComponentDefinition
## Declare `ComponentDefinition`
Here is an example `ComponentDefinition` showing how to use a raw K8s resource as the schematic module.
Here is a raw template based `ComponentDefinition` example which provides an abstraction for the worker workload type:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -45,33 +45,16 @@ spec:
- "spec.template.spec.containers[0].image"
```
Just like using CUE as schematic module, we also have some rules and contracts
to use raw k8s resource as schematic module.
`.spec.schematic.kube` contains template of the raw k8s resource and
In detail, the `.spec.schematic.kube` section contains the template of a workload resource and
configurable parameters.
- `.spec.schematic.kube.template` is the raw K8s resource in YAML format, just like we usually define in a YAML file.
- `.spec.schematic.kube.parameters` contains a set of configurable parameters.
`name`, `type`, and `fieldPaths` are required fields.
`description` and `required` are optional fields.
- `.spec.schematic.kube.template` is the raw template in YAML format.
- `.spec.schematic.kube.parameters` contains a set of configurable parameters. The `name`, `type`, and `fieldPaths` are required fields, `description` and `required` are optional fields.
- The parameter `name` must be unique in a `ComponentDefinition`.
- `type` indicates the data type of value set to the field in a workload.
This is a required field which will help Vela to generate an OpenAPI JSON schema
for the parameters automatically.
Currently, only basic data types are allowed, including `string`, `number`, and
`boolean`, while `array` and `object` are not.
- `fieldPaths` in the parameter specifies an array of fields within this workload
that will be overwritten by the value of this parameter.
All fields must be of the same type.
Fields are specified as JSON field paths without a leading dot, for example
- `type` indicates the data type of the value set to the field. This is a required field which will help KubeVela generate an OpenAPI JSON schema for the parameters automatically. In raw templates, only basic data types are allowed, including `string`, `number`, and `boolean`, while `array` and `object` are not.
- `fieldPaths` in the parameter specifies an array of fields within the template that will be overwritten by the value of this parameter. Fields are specified as JSON field paths without a leading dot, for example
`spec.replicas`, `spec.containers[0].image`.
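Putting these rules together, a raw template based definition might look like the sketch below; the nginx template and the single `image` parameter are illustrative:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: kube-worker
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    kube:
      template:
        apiVersion: apps/v1
        kind: Deployment
        spec:
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:1.14.0
                  ports:
                    - containerPort: 80
      parameters:
        - name: image
          required: true
          type: string
          fieldPaths:
            - "spec.template.spec.containers[0].image"
```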
## Create an Application using Kube schematic ComponentDefinition
## Declare an `Application`
Here is an example `Application`.
@@ -89,13 +72,9 @@ spec:
image: nginx:1.14.0
```
Kube schematic workload will use data in `properties` as the values of
parameters.
Since parameters only support basic data type, values in `properties` should be
formatted as simple key-value, `<parameterName>: <parameterValue>`.
And don't forget to set values for the required parameters.
Since parameters only support basic data types, values in `properties` should be simple key-value pairs, `<parameterName>: <parameterValue>`.
Deploy the `Application` and verify the resulting workload.
Deploy the `Application` and verify the running workload instance.
```shell
$ kubectl get deploy
@@ -1,10 +1,9 @@
# Attach Traits to Kube Based Components
# Attach Traits to Raw Template Based Components
Most traits in KubeVela can be attached to a Kube based component seamlessly.
In this sample application below, we add two traits,
In this sample, we will attach two traits,
[scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)
and
[virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/kube-module/virtual-group-td.yaml), to a Kube based component.
[virtualgroup](https://github.com/oam-dev/kubevela/blob/master/docs/examples/kube-module/virtual-group-td.yaml) to a component.
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -28,11 +27,11 @@ spec:
type: "cluster"
```
## Verify traits work correctly
## Verify
Deploy the application and verify traits work.
Check the scaler trait.
Check the `scaler` trait.
```shell
$ kubectl get manualscalertrait
NAME AGE
@@ -43,7 +42,7 @@ $ kubectl get deployment mycomp -o json | jq .spec.replicas
2
```
Check the virtualgroup trait.
Check the `virtualgroup` trait.
```shell
$ kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels
{
@@ -84,19 +83,21 @@ spec:
Apply the new configuration and check the results after several seconds.
Check the new parameter works.
> After updating, the workload instance name will be updated from `mycomp-v1` to `mycomp-v2`.
Check the new property value.
```shell
$ kubectl get deployment mycomp -o json | jq '.spec.template.spec.containers[0].image'
"nginx:1.14.1"
```
Check the scaler trait.
Check the `scaler` trait.
```shell
$ kubectl get deployment mycomp -o json | jq .spec.replicas
4
```
Check the virtualgroup trait.
Check the `virtualgroup` trait.
```shell
$ kubectl get deployment mycomp -o json | jq .spec.template.metadata.labels
{
@@ -0,0 +1,90 @@
## Extend CRD Operator as Component Type
Let's use [OpenKruise](https://github.com/openkruise/kruise) as an example of extending a CRD as a KubeVela component.
**The mechanism works for all CRD Operators**.
### Step 1: Install the CRD controller
You need to [install the CRD controller](https://github.com/openkruise/kruise#quick-start) into your K8s system.
### Step 2: Create Component Definition
To register CloneSet (one of the OpenKruise workloads) as a new workload type in KubeVela, the only thing needed is to create a `ComponentDefinition` object for it.
A full example can be found in this [cloneset.yaml](https://github.com/oam-dev/catalog/blob/master/registry/cloneset.yaml).
Several highlights are listed below.
#### 1. Describe The Workload Type
```yaml
...
annotations:
definition.oam.dev/description: "OpenKruise cloneset"
...
```
A one-line description of this component type. It will be shown in helper commands such as `$ vela components`.
#### 2. Register its underlying CRD
```yaml
...
workload:
definition:
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
...
```
This is how you register OpenKruise CloneSet's API resource (`apps.kruise.io/v1alpha1.CloneSet`) as the workload type.
KubeVela uses the Kubernetes API resource discovery mechanism to manage all registered capabilities.
#### 3. Define Template
```yaml
...
schematic:
cue:
template: |
output: {
apiVersion: "apps.kruise.io/v1alpha1"
kind: "CloneSet"
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
replicas: parameter.replicas
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
}]
}
}
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Number of pods in the cloneset
replicas: *5 | int
}
```
### Step 3: Register New Component Type to KubeVela
As long as the definition file is ready, you just need to apply it to Kubernetes.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/cloneset.yaml
```
And the new component type will immediately become available for developers to use in KubeVela.
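As a usage sketch, assuming the definition above is registered under the name `cloneset`, a developer could then write:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: testapp
spec:
  components:
    - name: my-cloneset
      type: cloneset
      properties:
        image: nginx:1.14.0
        replicas: 3
```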
@@ -1,24 +1,29 @@
# Cloud Service
# Defining Cloud Database as Component
KubeVela can help you provision and consume cloud resources with your apps.
KubeVela provides unified abstraction even for cloud services.
## Provision
## Should a Cloud Service be a Component or Trait?
In this section, we will add an Alibaba Cloud RDS service as a new workload type in KubeVela.
The following practice could be considered:
- Use `ComponentDefinition` if:
- you want to allow your end users to explicitly claim an "instance" of the cloud service and consume it, and release the "instance" when deleting the application.
- Use `TraitDefinition` if:
- you don't want to give your end users any control/workflow for claiming or releasing the cloud service; you only want to give them a way to consume a cloud service, which could even be managed by some other system. A `Service Binding` trait is widely used in this case.
### Step 1: Install and configure Crossplane
In this documentation, we will add an Alibaba Cloud RDS (Relational Database Service) instance as a component.
We use [Crossplane](https://crossplane.io/) as the cloud resource operator for Kubernetes.
This tutorial has been verified with Crossplane `version 0.14`.
Please follow the Crossplane [Documentation](https://crossplane.io/docs/),
especially the `Install & Configure` and `Compose Infrastructure` sections to configure
## Step 1: Install and Configure Crossplane
KubeVela uses [Crossplane](https://crossplane.io/) as the cloud service operator.
> This tutorial has been tested with Crossplane version `0.14`. Please follow the [Crossplane documentation](https://crossplane.io/docs/), especially the `Install & Configure` and `Compose Infrastructure` sections to configure
Crossplane with your cloud account.
**Note: When installing crossplane helm chart, please DON'T set `alpha.oam.enabled=true` as OAM crds are already installed by KubeVela.**
**Note: When installing Crossplane via Helm chart, please DON'T set `alpha.oam.enabled=true` as all OAM features are already installed by KubeVela.**
## Step 2: Add Component Definition
First, register the `rds` workload type to KubeVela.
Register the `rds` component to KubeVela.
```bash
$ cat << EOF | kubectl apply -f -
@@ -65,10 +70,7 @@ EOF
## Step 3: Verify
Use RDS component in an [Application](../application.md) to provide cloud resources.
As we have claimed an RDS instance with the ComponentDefinition named `rds`, the component in the application should refer to this type.
Instantiate the RDS component in an [Application](../application.md) to provide cloud resources.
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -85,22 +87,18 @@ spec:
secretname: "myrds-conn"
```
Apply the application into the K8s system.
Apply the above application to Kubernetes and an RDS instance will be automatically provisioned (this may take some time, ~5 minutes).
The database provision will take some time (> 5 min) to be ready.
// TBD: add status check , or should database is created result.
> TBD: add status check , show database create result.
## Consuming
## Step 4: Consuming The Cloud Service
In this section, we will consume the cloud resources created.
In this section, we will show how another component consumes the RDS instance.
> **Note: We highly recommend that you split cloud resource provisioning and consumption into different applications, because cloud resources can have standalone lifecycle management.**
> But it also works if you combine the resource provisioning and consumption within one app.
> Note: we recommend defining the cloud resource claim in an independent application if that cloud resource has a standalone lifecycle. Otherwise, it could be defined in the same application as the consumer component.
### Step 1: Define a ComponentDefinition that consumes Secrets
### `ComponentDefinition` With Secret Reference
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -155,18 +153,14 @@ spec:
parameter: {
image: string
//+InsertSecretTo=mySecret
mysecret: string
dbConnection: string
cmd?: [...string]
}
```
The key point is the annotation `//+InsertSecretTo=mySecret`: KubeVela will know the parameter is a K8s Secret, parse the Secret, and bind its data into the CUE struct `mySecret`.
With the `//+InsertSecretTo=mySecret` annotation, KubeVela knows this parameter value comes from a Kubernetes Secret (whose name is set by the user), so it will inject the Secret's data into `mySecret`, which is referenced as environment variables in the template.
Then the `output` can reference the `mySecret` struct for the data values. The name `mySecret` can be any name;
it's just an example in this case. `+InsertSecretTo` is the keyword that defines the data binding mechanism.
Then create an Application to consume the data.
Then declare an application to consume the RDS instance.
```yaml
apiVersion: core.oam.dev/v1alpha2
@@ -177,9 +171,9 @@ spec:
components:
- name: myweb
type: webserver
settings:
properties:
image: "nginx"
mysecret: "mydb-outputs"
dbConnection: "mydb-outputs"
```
// TBD show the result
@@ -1,31 +0,0 @@
# Component Definition
In the following tutorial, you will learn about defining your own Component Definition to extend KubeVela.
Before continue, make sure you have learned the basic concept of [Definition Objects](definition-and-templates.md) in KubeVela.
Generally, there are two kinds of capability resources you can find in the K8s ecosystem.
1. Composed K8s built-in resources: in this case, you can easily use them by applying YAML files.
This is widely done with Helm charts. For example, the [wordpress helm chart](https://bitnami.com/stack/wordpress/helm) and the [mysql helm chart](https://bitnami.com/stack/mysql/helm).
2. CRD (Custom Resource Definition) Operators: in this case, you need to install the operator and create CR (Custom Resource) instances to use it.
This is widely seen with the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator), [TiDB Operator](https://github.com/pingcap/tidb-operator), etc.
Both cases can be extended into KubeVela as component types.
## Extend helm chart as KubeVela Component
In this case, it's very straightforward to register a Helm chart as a KubeVela capability.
KubeVela will deploy the Helm chart, and with the help of KubeVela, the extended Helm charts can use all the KubeVela traits.
Refer to ["Use Helm To Extend a Component type"](https://kubevela.io/#/en/helm/component) to learn details in this case.
## Extend CRD Operator as KubeVela Component
In this case, you're more likely to make a CUE template to do the abstraction and encapsulation.
KubeVela will render the CUE template, and deploy the rendered resources. This is the most native and powerful way in KubeVela.
Refer to ["Use CUE to extend Component type"](https://kubevela.io/#/en/cue/component) to learn details in this case.
@@ -0,0 +1,480 @@
# Debug, Test and Dry-run
With flexibility in defining abstractions, it's important to be able to debug, test and dry-run the CUE based definitions. This tutorial will show this step by step.
## Prerequisites
Please make sure below CLIs are present in your environment:
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
* [`vela` (>v1.0.0)](https://kubevela.io/#/en/install?id=_3-optional-get-kubevela-cli)
## Define Definition and Template
We recommend defining the `Definition Object` in two separate parts: the CRD part and the CUE template. This enables us to debug, test and dry-run the CUE template.
Let's name the CRD part as `def.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: microservice
annotations:
definition.oam.dev/description: "Describes a microservice combo Deployment with Service."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
```
And save the CUE template part as `def.cue`; then we can use CUE commands such as `cue fmt` / `cue vet` to format and validate the CUE file.
```
output: {
// Deployment
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
serviceAccountName: "default"
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
ports: [{
if parameter.containerPort != _|_ {
containerPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
containerPort: parameter.servicePort
}
}]
if parameter.env != _|_ {
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}
resources: {
requests: {
if parameter.cpu != _|_ {
cpu: parameter.cpu
}
if parameter.memory != _|_ {
memory: parameter.memory
}
}
}
}]
}
}
}
}
// Service
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
type: "ClusterIP"
selector: {
"app": context.name
}
ports: [{
port: parameter.servicePort
if parameter.containerPort != _|_ {
targetPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
targetPort: parameter.servicePort
}
}]
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
```
After everything is done, there's a script [`hack/vela-templates/mergedef.sh`](https://github.com/oam-dev/kubevela/blob/master/hack/vela-templates/mergedef.sh) to merge the `def.yaml` and `def.cue` into a complete Definition Object.
```shell
$ ./hack/vela-templates/mergedef.sh def.yaml def.cue > microservice-def.yaml
```
## Debug CUE Template
### Use `cue vet` to Validate
```shell
$ cue vet def.cue
output.metadata.name: reference "context" not found:
./def.cue:6:14
output.spec.selector.matchLabels.app: reference "context" not found:
./def.cue:11:11
output.spec.template.metadata.labels.app: reference "context" not found:
./def.cue:16:17
output.spec.template.spec.containers.name: reference "context" not found:
./def.cue:24:13
outputs.service.metadata.name: reference "context" not found:
./def.cue:62:9
outputs.service.metadata.labels.app: reference "context" not found:
./def.cue:64:11
outputs.service.spec.selector.app: reference "context" not found:
./def.cue:70:11
```
The `reference "context" not found` is a common error in this step as [`context`](en/cue/component?id=cue-context) is a runtime information that only exist in KubeVela controllers. In order to validate the CUE template end-to-end, we can add a mock `context` in `def.cue`.
> Note that you need to remove all mock data when you finish the validation.
```CUE
... // existing template data
context: {
name: string
}
```
Then execute the command:
```shell
$ cue vet def.cue
some instances are incomplete; use the -c flag to show errors or suppress this message
```
The `reference "context" not found` error is gone, but `cue vet` only validates the data type which is not enough to ensure the login in template is correct. Hence we need to use `cue vet -c` for complete validation:
```shell
$ cue vet def.cue -c
context.name: incomplete value string
output.metadata.name: incomplete value string
output.spec.selector.matchLabels.app: incomplete value string
output.spec.template.metadata.labels.app: incomplete value string
output.spec.template.spec.containers.0.image: incomplete value string
output.spec.template.spec.containers.0.name: incomplete value string
output.spec.template.spec.containers.0.ports.0.containerPort: incomplete value int
outputs.service.metadata.labels.app: incomplete value string
outputs.service.metadata.name: incomplete value string
outputs.service.spec.ports.0.port: incomplete value int
outputs.service.spec.ports.0.targetPort: incomplete value int
outputs.service.spec.selector.app: incomplete value string
parameter.image: incomplete value string
parameter.servicePort: incomplete value int
```
It now complains that some runtime data is incomplete (because `context` and `parameter` do not have values). Let's now fill in more mock data in the `def.cue` file:
```CUE
context: {
name: "test-app"
}
parameter: {
version: "v2"
image: "image-address"
servicePort: 80
containerPort: 8000
env: {"PORT": "8000"}
cpu: "500m"
memory: "128Mi"
}
```
It won't complain now, which means the validation has passed:
```shell
cue vet def.cue -c
```
### Use `cue export` to Check the Rendered Resources
`cue export` can export the rendered result in YAML format:
```shell
$ cue export -e output def.cue --out yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app
namespace: default
spec:
selector:
matchLabels:
app: test-app
template:
metadata:
labels:
app: test-app
version: v2
spec:
serviceAccountName: default
terminationGracePeriodSeconds: 30
containers:
- name: test-app
image: image-address
```
```shell
$ cue export -e outputs.service def.cue --out yaml
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
selector:
app: test-app
type: ClusterIP
```
## Dry-Run the `Application`
When the CUE template is ready, we can use `vela system dry-run` to dry run and check the rendered resources against a real Kubernetes cluster. This command will execute exactly the same render logic as KubeVela's `Application` controller and output the result for you.
First, we need to use `mergedef.sh` to merge the definition and CUE files.
```shell
$ mergedef.sh def.yaml def.cue > componentdef.yaml
```
Then, let's create an Application named `test-app.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: boutique
namespace: default
spec:
components:
- name: frontend
type: microservice
properties:
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
servicePort: 80
containerPort: 8080
env:
PORT: "8080"
cpu: "100m"
memory: "64Mi"
```
Dry run the application by using `vela system dry-run`.
```shell
$ vela system dry-run -f test-app.yaml -d componentdef.yaml
---
# Application(boutique) -- Component(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: boutique
workload.oam.dev/type: microservice
name: frontend
namespace: default
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
version: v1
spec:
containers:
- env:
- name: PORT
value: "8080"
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
name: frontend
ports:
- containerPort: 8080
resources:
requests:
cpu: 100m
memory: 64Mi
serviceAccountName: default
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
app.oam.dev/component: frontend
app.oam.dev/name: boutique
trait.oam.dev/resource: service
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: frontend
type: ClusterIP
---
```
### Import `kube` Package
KubeVela automatically generates internal CUE packages for all built-in Kubernetes API resources, so you can import them in CUE template. This could simplify how you write the template because some default values are already there, and the imported package will help you validate the template.
Let's try to define a template with the help of the `kube` package:
```cue
import (
apps "kube/apps/v1"
corev1 "kube/v1"
)
// output is validated by Deployment.
output: apps.#Deployment
output: {
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
ports: [{
if parameter.containerPort != _|_ {
containerPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
containerPort: parameter.servicePort
}
}]
if parameter.env != _|_ {
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}
resources: {
requests: {
if parameter.cpu != _|_ {
cpu: parameter.cpu
}
if parameter.memory != _|_ {
memory: parameter.memory
}
}
}
}]
}
}
}
}
outputs: {
  service: corev1.#Service
}
// Service
outputs: service: {
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
//type: "ClusterIP"
selector: {
"app": context.name
}
ports: [{
port: parameter.servicePort
if parameter.containerPort != _|_ {
targetPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
targetPort: parameter.servicePort
}
}]
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
```
Then merge them.
```shell
mergedef.sh def.yaml def.cue > componentdef.yaml
```
And dry run to see the rendered resources:
```shell
vela system dry-run -f test-app.yaml -d componentdef.yaml
```
@@ -1,19 +1,20 @@
# Introduction of Definition Objects
# Introduction of Definition CRD
This documentation explains how to register and manage available *components* and *traits* in your platform with
`ComponentDefinition` and `TraitDefinition`, so your end users could "assemble" them into an `Application` resource.
`ComponentDefinition` and `TraitDefinition`, so end users could instantiate and "assemble" them into an `Application`.
> All definition objects are expected to be maintained and installed by the platform team; think of them as *capability providers* in your platform.
## Overview
Essentially, a definition object in KubeVela is composed of three sections:
- **Capability Indexer** defined by `spec.workload` in `ComponentDefinition` and `spec.definitionRef` in `TraitDefinition`.
- this is for discovering the provider of this capability.
- **Capability Indicator**
- `ComponentDefinition` uses `spec.workload` to indicate the workload type of this component.
- `TraitDefinition` uses `spec.definitionRef` to indicate the provider of this trait.
- **Interoperability Fields**
- they are for the platform to ensure a trait can work with a given workload type. Hence only `TraitDefinition` has these fields.
- **Capability Encapsulation** defined by `spec.schematic`
- this defines the encapsulation (i.e. templating and parameterizing) of this capability. For now, users can choose to use Helm or CUE for the encapsulation.
- **Capability Encapsulation and Abstraction** defined by `spec.schematic`
- this defines the **templating and parameterizing** (i.e. encapsulation) of this capability.
Hence, the basic structure of a definition object is as below:
@@ -34,9 +35,9 @@ spec:
Let's explain these fields one by one.
### Capability Indexer
### Capability Indicator
The indexer of given capability is declared as `spec.workload`.
In `ComponentDefinition`, the indicator of workload type is declared as `spec.workload`.
Below is a definition for *Web Service* in KubeVela:
@@ -56,7 +57,7 @@ spec:
...
```
In the above example, it claims to leverage Kubernetes Deployment (`apiVersion: apps/v1`, `kind: Deployment`) as the workload type to instantiate this component.
In the above example, it claims to leverage Kubernetes Deployment (`apiVersion: apps/v1`, `kind: Deployment`) as the workload type for the component.
### Interoperability Fields
@@ -119,9 +120,9 @@ If this field is set, KubeVela core will automatically fill the workload referen
Please check [scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml) trait as a demonstration of how to set this field.
### Capability Encapsulation
### Capability Encapsulation and Abstraction
The encapsulation (i.e. templating and parameterizing) of a given capability is defined in the `spec.schematic` field. For example, below is the full definition of the *Web Service* type in KubeVela:
The templating and parameterizing of a given capability are defined in the `spec.schematic` field. For example, below is the full definition of the *Web Service* type in KubeVela:
<details>
@@ -222,8 +223,6 @@ spec:
```
</details>
It's by design that KubeVela supports multiple ways to define the encapsulation. Hence, we will explain this field in detail with the following guides.
- Learn about [CUE](/en/cue/basic) based capability definitions.
- Learn about [Helm](/en/helm/component) based capability definitions.
The specification of `schematic` is explained in following CUE and Helm specific documentations.
Also, the `schematic` field enables you to render UI forms directly based on it; please check the [Generate Forms from Definitions](/en/platform-engineers/openapi-v3-json-schema.md) section to learn how.
@@ -0,0 +1,110 @@
# Defining KEDA as Autoscaling Trait
> Before continuing, make sure you have learned the concepts in the [Definition Objects](definition-and-templates.md) and [Defining Traits with CUE](https://kubevela.io/#/en/cue/trait) sections.
In the following tutorial, you will learn to add [KEDA](https://keda.sh/) as a new autoscaling trait to your KubeVela based platform.
> KEDA is a Kubernetes-based Event Driven Autoscaler. With KEDA, you can drive the scaling of any container based on resource metrics or the number of events needing to be processed.
## Step 1: Install KEDA controller
[Install the KEDA controller](https://keda.sh/docs/2.2/deploy/) into your K8s system.
## Step 2: Create Trait Definition
To register KEDA as a new capability (i.e. trait) in KubeVela, the only thing needed is to create a `TraitDefinition` object for it.
A full example can be found in this [keda.yaml](https://github.com/oam-dev/catalog/blob/master/registry/keda-scaler.yaml).
Several highlights are listed below.
### 1. Describe The Trait
```yaml
...
name: keda-scaler
annotations:
definition.oam.dev/description: "keda supports multiple event to elastically scale applications, this scaler only applies to deployment as example"
...
```
We use the annotation `definition.oam.dev/description` to add a one-line description for this trait.
It will be shown in helper commands such as `$ vela traits`.
### 2. Register API Resource
```yaml
...
spec:
definitionRef:
name: scaledobjects.keda.sh
...
```
This is how you claim and register KEDA `ScaledObject`'s API resource (`scaledobjects.keda.sh`) as a trait definition.
### 3. Define `appliesToWorkloads`
A trait can be attached to specified workload types or all of them (i.e. `"*"` means your trait can work with any workload type).
For the case of KEDA, we will only allow users to attach it to the Kubernetes Deployment workload type. So we claim it as below:
```yaml
...
spec:
...
appliesToWorkloads:
- "deployments.apps" # claim KEDA based autoscaling trait can only attach to Kubernetes Deployment workload type.
...
```
### 4. Define Schematic
In this step, we will define the schematic of the KEDA based autoscaling trait, i.e. we will create an abstraction for KEDA `ScaledObject` with simplified primitives, so end users of this platform don't really need to know what KEDA is at all.
```yaml
...
schematic:
cue:
template: |-
outputs: cpu-scaler: {
apiVersion: "keda.sh/v1alpha1"
kind: "ScaledObject"
metadata: {
name: context.name
}
spec: {
scaleTargetRef: {
name: context.name
}
triggers: [{
type: parameter.type
metadata: {
type: "Utilization"
value: parameter.value
}
}]
}
}
paramter: {
// +usage=Types of triggering application elastic scaling, Optional: cpu, memory
type: string
// +usage=Value to trigger scaling actions, represented as a percentage of the requested value of the resource for the pods. like: "60"(60%)
value: string
}
```
This is a CUE based template which only exposes `type` and `value` as trait properties for users to set.
> Please check the [Defining Traits with CUE](../cue/trait.md) section for more details regarding CUE templating.
## Step 3: Register New Trait to KubeVela
Once the definition file is ready, you just need to apply it to Kubernetes.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/keda-scaler.yaml
```
And the new trait immediately becomes available for end users to use in the `Application` resource, as the sketch below shows.
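A minimal sketch (the `webservice` component type, image and names here are illustrative assumptions, and the `apiVersion` may vary across KubeVela releases):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: keda-scaler-sample
spec:
  components:
    - name: frontend
      type: webservice        # assumed to already exist on the platform
      properties:
        image: nginx:1.19
        port: 80
      traits:
        - type: keda-scaler   # the trait registered above
          properties:
            type: "cpu"       # `type` and `value` map to the `parameter`
            value: "60"       # fields in the CUE template: scale on 60% CPU
```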

View File

@@ -1,24 +1,14 @@
# Generate Forms from Definitions
For any capabilities installed via [Definition Objects](./definition-and-templates.md),
KubeVela will automatically generate an OpenAPI v3 JSON schema based on the definition's parameter list, and store it in a `ConfigMap` in the same `namespace` as the definition object.
Platform builders can integrate the schema API to build a new UI for their end users.
> The default KubeVela system `namespace` is `vela-system`; the built-in capabilities and schemas are stored there.
## List Schema
This `ConfigMap` will have a common label `definition.oam.dev=schema`, so you can find them easily by:
```shell
$ kubectl get configmap -n vela-system -l definition.oam.dev=schema
@@ -30,10 +20,10 @@ schema-webservice 1 19s
schema-worker 1 20s
```
The `ConfigMap` name is in the format of `schema-<your-definition-name>`,
and the data key is `openapi-v3-json-schema`.
For example, we can use the following command to get the JSON schema of `webservice`.
```shell
$ kubectl get configmap schema-webservice -n vela-system -o yaml
@@ -57,11 +47,21 @@ data:
port do you want customer traffic sent to","title":"port","type":"integer"}},"required":["image","port"],"type":"object"}'
```
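If you only need the raw schema value, for example to feed it into a form renderer, a `jsonpath` query is a handy sketch (assuming a standard `kubectl`):

```shell
$ kubectl get configmap schema-webservice -n vela-system \
    -o jsonpath='{.data.openapi-v3-json-schema}'
```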
Then platform builders can follow the [OpenAPI v3 Specification](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#format) to build their own GUI for end users.
Specifically, this schema is generated based on the `parameter` section in the capability definition:
* For CUE based definitions: the [`parameter`](../cue/component.md#Write-ComponentDefinition) is a keyword in the CUE template.
* For Helm based definitions: the [`parameter`](../helm/component.md#Write-ComponentDefinition) is generated from `values.yaml` in the Helm chart.
## Render Form
You can render the above schema into a form with [form-render](https://github.com/alibaba/form-render) or [React JSON Schema Form](https://github.com/rjsf-team/react-jsonschema-form) and integrate it with your dashboard easily.
Below is a form rendered with `form-render`:
![](../../resources/json-schema-render-example.jpg)
> Hence, end users of KubeVela do NOT need to learn about definition objects to use a capability; they always work with a visualized form, or read the generated schema if they want.
## What's Next
It's by design that KubeVela supports multiple ways to define the schematic. Hence, we will explain the `.schematic` field in detail in the following guides.

View File

@@ -1,4 +1,4 @@
# The `Application` Abstraction
This documentation will explain what the `Application` object is and why you need it.
@@ -6,9 +6,9 @@ This documentation will explain what is `Application` object and why you need it
Encapsulation is probably the most widely used approach to enable an easier developer experience and allow users to deliver the whole application's resources as one unit. For example, many tools today encapsulate Kubernetes *Deployment* and *Service* into a *Web Service* module, and then instantiate this module by simply providing parameters such as *image=foo* and *ports=80*. This pattern can be found in cdk8s (e.g. [`web-service.ts`](https://github.com/awslabs/cdk8s/blob/master/examples/typescript/web-service/web-service.ts)), CUE (e.g. [`kube.cue`](https://github.com/cuelang/cue/blob/b8b489251a3f9ea318830788794c1b4a753031c0/doc/tutorial/kubernetes/quick/services/kube.cue#L70)), and many widely used Helm charts (e.g. [Web Service](https://docs.bitnami.com/tutorials/create-your-first-helm-chart/)).
Despite the efficiency and extensibility of defining abstractions with encapsulation, DSL/templating tools (e.g. cdk8s, CUE and Helm templating) are mostly used as client-side tools and can barely serve as a platform-level building block. This leaves platform builders having to either create restricted, inextensible abstractions, or re-invent the wheel on what DSL/templating has already been doing great.
KubeVela allows platform teams to create developer-centric abstractions with DSL/templating but maintain them with the battle-tested [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/).
## Abstraction
@@ -43,19 +43,19 @@ spec:
## Encapsulation
While `Application` provides an abstraction to deploy apps, each *component* and *trait* specification in this application is actually enforced by another set of building-block objects named *"definitions"*, for example, `ComponentDefinition` and `TraitDefinition`.
Definitions are designed to leverage encapsulation technologies such as `CUE`, `Helm` and `Terraform modules` to template and parameterize Kubernetes resources as well as cloud services. This enables users to assemble templated capabilities into an `Application` by simply setting parameters. In the `application-sample` above, it models a Kubernetes Deployment (component `foo`) to run containers, and an Alibaba Cloud OSS bucket (component `bar`) alongside.
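Below is a sketch of what such an `application-sample` could look like; the component types (`worker`, `alibaba-oss`) and properties are illustrative assumptions, not the exact sample:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: application-sample
spec:
  components:
    - name: foo                  # templated into a Kubernetes Deployment
      type: worker
      properties:
        image: busybox
        cmd: ["sleep", "86400"]
    - name: bar                  # templated into an Alibaba Cloud OSS bucket
      type: alibaba-oss
      properties:
        name: sample-oss-bucket
```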
This encapsulation and abstraction mechanism is the key for KubeVela to provide a *PaaS-like* experience (*i.e. app-centric, higher-level abstractions, self-service operations etc.*) to end users.
### No Configuration Drift
Many of the existing encapsulation solutions today work at the client side, for example, DSL/IaC (Infrastructure as Code) tools and Helm. This approach is easy to adopt and less invasive to the user cluster.
But client-side abstractions, though lightweight, always lead to an issue called infrastructure/configuration drift, i.e. the generated component instances end up out of line with the expected configuration. This could be caused by incomplete coverage, less-than-perfect processes or emergency changes.
Hence, the encapsulation engine of KubeVela is designed as a [Kubernetes Control Loop](https://kubernetes.io/docs/concepts/architecture/controller/) and leverages the Kubernetes control plane to eliminate the issue of configuration drift, while still keeping the flexibility and velocity enabled by existing encapsulation solutions (e.g. DSL/IaC and templating).
### No "Juggling" Approach to Manage Kubernetes Objects

View File

@@ -1,142 +0,0 @@
# Trait Definition
In the following tutorial, you will learn how to define your own trait to extend KubeVela.
Before continuing, make sure you have learned the basic concepts of [Definition Objects](definition-and-templates.md) in KubeVela.
The KubeVela trait system is very powerful. Generally, you can define a trait (e.g. "do some patch") with very little code;
just writing a CUE template is enough. Refer to ["Defining Traits in CUE"](https://kubevela.io/#/en/cue/trait) for
more details on this case.
## Extend CRD Operator as Trait
In the following tutorial, you will learn to extend KubeVela with new traits, using [KEDA](https://keda.sh/) as an example.
KEDA is a very cool Event Driven Autoscaler.
### Step 1: Install the CRD controller
[Install the KEDA controller](https://keda.sh/docs/2.2/deploy/) into your Kubernetes cluster.
### Step 2: Create Trait Definition
To register KEDA as a new trait in KubeVela, the only thing needed is to create a `TraitDefinition` object for it.
A full example can be found in this [keda.yaml](https://github.com/oam-dev/catalog/blob/master/registry/keda-scaler.yaml).
Several highlights are listed below.
#### 1. Describe The Trait Usage
```yaml
...
name: keda-scaler
annotations:
  definition.oam.dev/description: "keda supports multiple event to elastically scale applications, this scaler only applies to deployment as example"
...
```
We use the annotation `definition.oam.dev/description` to add a one-line description for this trait.
It will be shown in helper commands such as `$ vela traits`.
#### 2. Register API Resource
```yaml
...
spec:
  definitionRef:
    name: scaledobjects.keda.sh
...
```
This is how you register KEDA `ScaledObject`'s API resource (`scaledobjects.keda.sh`) as the trait.
KubeVela uses the Kubernetes API resource discovery mechanism to manage all registered capabilities.
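For instance, assuming a standard `kubectl`, you can check that the KEDA API resource is discoverable in your cluster before registering it:

```shell
$ kubectl api-resources --api-group=keda.sh
```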
#### 3. Define Workloads this trait can apply to
```yaml
...
spec:
  ...
  appliesToWorkloads:
    - "*"
  ...
```
A trait can work on specified workloads or on any kind of workload, depending on what you declare here.
Use `"*"` to indicate that your trait can work on any workload.
You can also specify that the trait only works on Kubernetes Deployment and StatefulSet, as below:
```yaml
...
spec:
  ...
  appliesToWorkloads:
    - "deployments.apps"
    - "statefulsets.apps"
  ...
```
#### 4. Define the field where the trait receives the workload reference
```yaml
...
spec:
  workloadRefPath: spec.workloadRef
...
```
Once registered, the OAM framework will automatically inject the workload reference into the trait CR object during creation or update.
The workload reference includes group, version, kind and name, so the trait can get the whole workload information
from this reference.
With the help of the OAM framework, end users never have to write relationship info such as `targetReference`.
Platform builders only need to declare this info here once; the OAM framework will glue them together.
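Below is a sketch of what a trait CR might look like after this injection; the trait kind and names here are hypothetical:

```yaml
apiVersion: example.com/v1
kind: SampleTrait
metadata:
  name: frontend-scaler
spec:
  workloadRef:            # injected by the OAM framework at the declared path
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
```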
#### 5. Define Template
```yaml
...
schematic:
  cue:
    template: |-
      outputs: "cpu-scaler": {
        apiVersion: "keda.sh/v1alpha1"
        kind:       "ScaledObject"
        metadata: {
          name: context.name
        }
        spec: {
          scaleTargetRef: {
            name: context.name
          }
          triggers: [{
            type: parameter.type
            metadata: {
              type:  "Utilization"
              value: parameter.value
            }
          }]
        }
      }
      parameter: {
        // +usage=Type of metric that triggers scaling. Options: cpu, memory
        type: string
        // +usage=Threshold that triggers scaling, as a percentage of the requested resource value for the pods, e.g. "60" (60%)
        value: string
      }
```
This is a CUE based template that defines the end-user abstraction for this trait. Please check the [templating documentation](../cue/trait.md) for more details.
### Step 3: Register New Trait to KubeVela
Once the definition file is ready, you just need to apply it to Kubernetes.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/keda-scaler.yaml
```
And the new trait will immediately become available for developers to use in KubeVela.
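You can verify the registration with a plain `kubectl` query; a sketch, assuming the definition landed in the `vela-system` namespace (adjust `-n` as needed):

```shell
$ kubectl get traitdefinitions.core.oam.dev -n vela-system
```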