add extend component definition in KubeVela (#1297)

* update docs

* refactor docs
This commit is contained in:
Jianbo Sun
2021-03-25 23:26:32 +08:00
committed by GitHub
parent 06a099f540
commit d042c0c7ec
15 changed files with 1165 additions and 1417 deletions

View File

@@ -11,17 +11,19 @@
- [Overview](/en/platform-engineers/overview.md)
- [Application CRD](/en/application.md)
- [Definition CRD](/en/platform-engineers/definition-and-templates.md)
- [Auto-generated Schema](/en/platform-engineers/openapi-v3-json-schema.md)
- [Defining Components](/en/platform-engineers/component.md)
<!-- - [Defining Traits](/en/platform-engineers/trait.md) -->
<!-- - [Defining Cloud Service](/en/platform-engineers/cloud-services.md) -->
- Using CUE
- [Learning CUE](/en/cue/basic.md)
- [Define Components](/en/cue/workload-type.md)
- [Define Components in CUE](/en/cue/component.md)
- [Define Traits](/en/cue/trait.md)
- [Advanced Features](/en/cue/status.md)
- [Auto-generated Schema](/en/platform-engineers/openapi-v3-json-schema.md)
- [Development Guide](/en/cue/development-guide.md)
- Using Helm
- [Define Components](/en/helm/component.md)
- [Define Components in Chart](/en/helm/component.md)
- [Attach Traits](/en/helm/trait.md)
- [Known Limitations](/en/helm/known-issues.md)
@@ -41,10 +43,6 @@
<!-- - [Setting Monitoring Policy](/en/developers/extensions/set-metrics.md) -->
- [Setting Up Deployment Environment](/en/developers/config-enviroments.md)
- [Configuring data/env in Application](/en/developers/config-app.md)
<!-- - How-to (Out-of-dated) -->
<!-- - [Defining Workload Type](/en/platform-engineers/workload-type.md) -->
<!-- - [Defining Trait](/en/platform-engineers/trait.md) -->
<!-- - [Defining Cloud Service](/en/platform-engineers/cloud-services.md) -->
<!-- - [Alternative Commands](/en/developers/alternative-cmd.md) -->
- CLI References

View File

@@ -38,7 +38,7 @@ spec:
image: "fluentd"
```
The `type: worker` means the specification of this workload (claimed in following `settings` section) will be enforced by a `WorkloadDefinition` object named `worker` as below:
The `type: worker` means the specification of this workload (claimed in the following `properties` section) will be enforced by a `ComponentDefinition` object named `worker`, as below:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -144,7 +144,7 @@ After the `Application` resource is applied to Kubernetes cluster, the KubeVela
| Label | Description |
| :--: | :---------: |
|`workload.oam.dev/type=<workload definition name>` | The name of its corresponding `WorkloadDefinition` |
|`workload.oam.dev/type=<component definition name>` | The name of its corresponding `ComponentDefinition` |
|`trait.oam.dev/type=<trait definition name>` | The name of its corresponding `TraitDefinition` |
|`app.oam.dev/name=<app name>` | The name of the application it belongs to |
|`app.oam.dev/component=<component name>` | The name of the component it belongs to |

View File

@@ -16,112 +16,28 @@ The reasons for KubeVela supports CUE as first class templating solution can be
> Please also check [The Configuration Complexity Curse](https://blog.cedriccharly.com/post/20191109-the-configuration-complexity-curse/) and [The Logic of CUE](https://cuelang.org/docs/concepts/logic/) for more details.
If you haven't learned the basics of CUE yet, you can refer to [our CUE basic guide](./development-guide.md) to learn the CUE knowledge used in KubeVela.
## Prerequisites
## Parameter and Template
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
* [`vela` (>v1.0.0)](https://kubevela.io/#/en/install?id=_3-optional-get-kubevela-cli)
A very simple `ComponentDefinition` looks like below:
## CUE CLI basic
Below is the basic CUE data, you can define both schema and value in the same file with the almost same format:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: mydeploy
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}
}
}
}
```
The `template` field in this definition is a CUE module; it defines two keywords for KubeVela to build the application abstraction:
- The `parameter` defines the input parameters from the end user, i.e. the configurable fields in the abstraction.
- The `output` defines the template for the abstraction.
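For instance, once a definition like `mydeploy` above is installed, an end user could instantiate it with an `Application` similar to the following. This is a hypothetical example for illustration; only the `name` and `image` fields are exposed, and depending on the KubeVela version the component field may be named `settings` or `properties`:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
  name: sample-app
spec:
  components:
    - name: backend
      type: mydeploy
      properties:        # each key here maps to a field in the `parameter` block
        name: mysvc
        image: nginx:v1
```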
## CUE Template Step by Step
Let's say that, as the platform team, we only want to allow end users to configure the `image` and `name` fields in the `Application` abstraction, and automatically generate all the rest of the fields. How can we use CUE to achieve this?
We can start from the final resource we envision the platform will generate based on user inputs, for example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mytest # user inputs
spec:
template:
spec:
containers:
- name: mytest # user inputs
env:
- name: a
value: b
image: nginx:v1 # user inputs
metadata:
labels:
app.oam.dev/component: mytest # generated from user inputs
selector:
matchLabels:
app.oam.dev/component: mytest # generated from user inputs
```
Then we can just convert this YAML to JSON and put the whole JSON object into the `output` keyword field:
```cue
output: {
apiVersion: "apps/v1"
kind: "Deployment"
metadata: name: "mytest"
spec: {
selector: matchLabels: {
"app.oam.dev/component": "mytest"
}
template: {
metadata: labels: {
"app.oam.dev/component": "mytest"
}
spec: {
containers: [{
name: "mytest"
image: "nginx:v1"
env: [{name:"a",value:"b"}]
}]
}
}
}
a: 1.5
a: float
b: 1
b: int
d: [1, 2, 3]
g: {
h: "abc"
}
e: string
```
Since CUE as a superset of JSON, we can use:
CUE is a superset of JSON; we can use it like JSON with the following conveniences:
* C style comments,
* quotes may be omitted from field names without special characters,
@@ -129,141 +45,267 @@ Since CUE as a superset of JSON, we can use:
* comma after last element in list is allowed,
* outer curly braces are optional.
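As a small illustration of these conveniences (an ad-hoc snippet, not part of the definition above):

```cue
// C-style comment
replicas: 3                          // unquoted field name
"app.oam.dev/component": "backend"   // quoted, because it contains special characters
ports: [80, 443,]                    // trailing comma after the last element is allowed
// also note there are no outer curly braces around this file
```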
After that, we can add the `parameter` keyword and use it as a variable reference; this is the most basic CUE feature for templating.
CUE has powerful CLI commands. Let's save the data above in a file named `first.cue` and try them out.
```cue
parameter: {
name: string
image: string
}
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}
}
* Format the CUE file. If you're using GoLand or a similar JetBrains IDE,
you can [configure format on save](https://wonderflow.info/posts/2020-11-02-goland-cuelang-format/) instead.
This command not only formats the CUE file but also points out schema errors, which is very useful.
```shell
cue fmt first.cue
```
* Schema check: besides `cue fmt`, you can also use `cue vet` to check the schema.
```shell
cue vet first.cue
```
* Calculate/render the result. `cue eval` will evaluate the CUE file and render the result.
You can see the output no longer contains `a: float` and `b: int`, because these two variables have been computed,
while `e: string` has no definite value, so it is kept as is.
```shell
$ cue eval first.cue
a: 1.5
b: 1
d: [1, 2, 3]
g: {
h: "abc"
}
e: string
```
* Render a specific field. For example, if we only want to know the result of `b` in the file, we can specify it with the `-e` flag.
```shell
$ cue eval -e b first.cue
1
```
* Export the result. `cue export` will export the result with final values. It will report an error if some variables are not concrete.
```shell
$ cue export first.cue
e: cannot convert incomplete value "string" to JSON:
./first.cue:9:4
```
We can complete the value by giving a value to `e`, for example:
```shell
echo "e: \"abc\"" >> first.cue
```
Then, the command will work. By default, the result will be rendered in json format.
```shell
$ cue export first.cue
{
"a": 1.5,
"b": 1,
"d": [
1,
2,
3
],
"g": {
"h": "abc"
},
"e": "abc"
}
```
* Export the result in YAML format.
```shell
$ cue export first.cue --out yaml
a: 1.5
b: 1
d:
- 1
- 2
- 3
g:
h: abc
e: abc
```
* Export the result of a specified variable.
```shell
$ cue export -e g first.cue
{
"h": "abc"
}
```
You have now learned all the useful CUE CLI operations.
## CUE language basic
* Data structure: Below is the basic data structure of CUE.
```shell
// float
a: 1.5
// int
b: 1
// string
c: "blahblahblah"
// array
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
// bool
e: true
// struct
f: {
a: 1.5
b: 1
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
g: {
h: "abc"
}
}
// null
j: null
```
* Define a custom CUE type. You can use the `#` symbol to declare that a variable represents a CUE type.
```
#abc: string
```
Let's name it `second.cue`. Then `cue export` won't complain, as `#abc` is a type, not an incomplete value.
```shell
$ cue export second.cue
{}
```
You can also define a more complex custom struct, such as:
```
#abc: {
x: int
y: string
z: {
a: float
b: bool
}
}
```
Finally, you can put the above CUE module in the `template` field of a `WorkloadDefinition` object and give it a name. End users can then author an `Application` resource that references this definition as its workload type, with only `name` and `image` as configurable parameters.
It's widely used in KubeVela to define templates and do validation.
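For example (a hypothetical snippet), unifying a concrete value with a type validates it; `cue vet` will reject a value that does not conform:

```cue
#Config: {
	name:  string
	value: int
}

// myConfig must conform to #Config; a wrong type here would be an error
myConfig: #Config & {
	name:  "timeout"
	value: 30
}
```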
## Advanced CUE Templating
## CUE templating and reference
In this section, we will introduce advanced CUE templating features supported in KubeVela.
Let's try to define a CUE template with the knowledge just learned.
### Structural Parameter
1. Define a struct variable `parameter`.
This is the most commonly used feature. It enables us to expose complex data structures to end users, for example, an environment variable list.
```shell
parameter: {
name: string
image: string
}
```
A simple guide is as below:
Let's save it in a file called `deployment.cue`.
1. Define a type in the CUE template; it includes a struct (`other`), a string and an integer.
2. Define a more complex struct variable `template` and reference the variable `parameter`.
```
#Config: {
name: string
value: int
other: {
key: string
value: string
}
}
```
```
template: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
}]
}}}
}
```
2. In the `parameter` section, reference the above type and define it as `[...#Config]`. Then it can accept inputs from end users as an array list.
People who are familiar with Kubernetes may have recognized that this is a template of a K8s Deployment. The `parameter` part
defines the parameters of the template.
```
parameter: {
name: string
image: string
configSingle: #Config
config: [...#Config] # array list parameter
}
```
Add it into the `deployment.cue`.
3. In the `output` section, simply use it for templating like any other parameter.
4. Then, let's add the values with the following code block:
```
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: parameter.config
}]
}
```
parameter:{
name: "mytest"
image: "nginx:v1"
}
```
5. Finally, let's export it in yaml:
```shell
$ cue export deployment.cue -e template --out yaml
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: mytest
image: nginx:v1
metadata:
labels:
app.oam.dev/component: mytest
selector:
matchLabels:
app.oam.dev/component: mytest
```
## Advanced CUE Schematic
* Open struct and list. Using `...` in a list or struct means the object is open.
- A list like `[...string]` means it can hold multiple string elements.
If we don't add `...`, then `[string]` means the list can only have one `string` element in it.
- A struct like below means the struct can contain unknown fields.
```
{
abc: string
...
}
```
}
```
4. Once you install a workload definition object (e.g. `mydeploy`) with the above template into the system, a new field `config` will be available for use like below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: website
spec:
components:
- name: backend
type: mydeploy
settings:
image: crccheck/hello-world
name: mysvc
config: # a complex parameter
- name: a
value: 1
other:
key: mykey
value: myvalue
```
* The `|` operator represents a disjunction: a value can be either case. Below is an example where the variable `a` can be either a string or an int.
```shell
a: string | int
```
### Conditional Parameter
* Default value: we can use the `*` symbol to mark a default value for a variable. It is usually used with `|`,
which gives a default value for some type. Below is an example where the variable `a` is an `int` with a default value of `1`.
Conditional parameters can be used to implement `if..else` logic in a template.
```shell
a: *1 | int
```
Below is an example where, when `useENV=true`, the env section is rendered; otherwise it is not.
* Optional variable. In some cases a variable may not be used; these are optional variables, and we can define them with `?:`.
In the example below, `a` is an optional variable, `x` and `z` in `#my` are optional, while `y` is a required variable.
```
parameter: {
name: string
image: string
useENV: bool
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.useENV == true {
env: [{name: "my-env", value: "my-value"}]
}
}]
}
...
a ?: int
#my: {
x ?: string
y : int
z ?:float
}
```
### Optional and Default Value
Optional parameters can be skipped; this usually works together with conditional logic.
Specifically, if some field does not exist, the CUE grammar is `if _variable_ != _|_`; an example is below:
```
@@ -287,14 +329,55 @@ output: {
}
```
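A minimal sketch of this pattern, assuming a hypothetical optional `config` parameter:

```cue
parameter: {
	name:  string
	image: string
	// optional: the user may omit config entirely
	config?: [...{name: string, value: string}]
}
output: {
	spec: containers: [{
		name:  parameter.name
		image: parameter.image
		// only render env when config was actually provided
		if parameter["config"] != _|_ {
			env: parameter.config
		}
	}]
}
```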
A default value is marked with a `*` prefix. It is used like:
* The `&` operator is used to unify two variables.
```shell
a: *1 | int
b: 3
c: a & b
```
Save it in a file named `third.cue`.
You can evaluate the result using `cue eval`:
```shell
$ cue eval third.cue
a: 1
b: 3
c: 3
```
* Conditional statements are really useful when you have cascading operations where different values lead to different results.
They let you implement `if..else` logic in the template.
```shell
price: number
feel: *"good" | string
// Feel bad if price is too high
if price > 100 {
feel: "bad"
}
price: 200
```
Save it in a file named `fourth.cue`.
You can evaluate the result using `cue eval`:
```shell
$ cue eval fourth.cue
price: 200
feel: "bad"
```
Another example is to use a bool type as a parameter.
```
parameter: {
name: string
image: *"nginx:v1" | string
port: *80 | int
number: *123.4 | float
name: string
image: string
useENV: bool
}
output: {
...
@@ -302,66 +385,80 @@ output: {
containers: [{
name: parameter.name
image: parameter.image
if parameter.useENV == true {
env: [{name: "my-env", value: "my-value"}]
}
}]
}
...
}
```
So if a parameter field has neither a default value nor a conditional guard, it is a required field.
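To summarize, here is a sketch of how the three kinds of fields can sit side by side (illustrative names):

```cue
parameter: {
	// required: no default value, the user must provide it
	image: string
	// defaulted: the user may omit it and get 80
	port: *80 | int
	// used by a conditional block in output, defaults to false
	useENV: *false | bool
}
```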
### Loop
#### Loop for Map
```cue
parameter: {
name: string
image: string
env: [string]: string
}
output: {
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
* For loops: if you want to avoid duplication, you may want to use a for loop.
- Loop for Map
```cue
parameter: {
name: string
image: string
env: [string]: string
}
}
```
#### Loop for Slice
```cue
parameter: {
name: string
image: string
env: [...{name:string,value:string}]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for _, v in parameter.env {
name: v.name
value: v.value
},
]
}]
output: {
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}]
}
}
}
```
```
- Loop for type
```
#a: {
"hello": "Barcelona"
"nihao": "Shanghai"
}
for k, v in #a {
"\(k)": {
nameLen: len(v)
value: v
}
}
```
- Loop for Slice
```cue
parameter: {
name: string
image: string
env: [...{name:string,value:string}]
}
output: {
...
spec: {
containers: [{
name: parameter.name
image: parameter.image
env: [
for _, v in parameter.env {
name: v.name
value: v.value
},
]
}]
}
}
```
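To see how such a map loop expands, here is a small standalone sketch (with hypothetical values) that can be checked with `cue eval`:

```cue
env: {"PORT": "8000", "DEBUG": "true"}
containerEnv: [
	for k, v in env {
		name:  k
		value: v
	},
]
// cue eval renders containerEnv as a list of {name, value} pairs,
// one entry per key in env
```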
### Import CUE Internal Packages
Note that we use `"\( _my-statement_ )"` for interpolation inside a string.
## Import CUE Internal Packages
CUE has many [internal packages](https://pkg.go.dev/cuelang.org/go@v0.2.2/pkg) which can also be used in KubeVela.
@@ -385,7 +482,7 @@ output: {
}
```
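As an additional illustration (not part of the original example above), CUE's standard `strings` package can be imported in the same way:

```cue
import "strings"

parameter: {
	name: string
}
output: {
	metadata: labels: {
		// normalize the label value with the built-in strings package
		"app.oam.dev/component": strings.ToLower(parameter.name)
	}
}
```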
### Import Kube Package
## Import Kube Package
KubeVela automatically generates all K8s resources as internal packages by reading K8s openapi from the
installed K8s cluster.

View File

@@ -1,18 +1,19 @@
# Defining Components
# Use CUE to extend Component type
In this section, we will introduce more examples of using CUE to define component types.
This section introduces how to use CUE to extend KubeVela with your own component types.
## Basic Usage
Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates.md)
and the [basic CUE](./basic.md) knowledge related to KubeVela.
The very basic usage of CUE in component abstraction is to extend a Kubernetes resource as a component type (via `ComponentDefinition`) and expose configurable parameters to users.
## Write ComponentDefinition
A Deployment as component type:
Here is a basic `ComponentDefinition` example:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: worker
name: mydeploy
spec:
workload:
definition:
@@ -41,11 +42,61 @@ spec:
name: parameter.name
image: parameter.image
}]
}}}
}
}
}
}
```
A Job as workload type:
- `.spec.workload` is required to indicate the workload (apiVersion/kind) defined in the CUE.
- `.spec.schematic.cue.template` is a CUE template; it defines two keywords for KubeVela to build the application abstraction:
* The `parameter` defines the input parameters from the end user, i.e. the configurable fields in the abstraction.
* The `output` defines the template for the abstraction.
## Create an Application using the CUE based ComponentDefinition
Once you have installed the ComponentDefinition object (e.g. `kubectl apply -f mydeploy.yaml`) with the above template into
the K8s system, it can be used as below:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Application
metadata:
name: website
spec:
components:
- name: backend
type: mydeploy
properties:
image: crccheck/hello-world
name: mysvc
```
It will finally render out the following object into the K8s system.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mydeploy
spec:
template:
spec:
containers:
- name: mysvc
image: crccheck/hello-world
metadata:
labels:
app.oam.dev/component: mysvc
selector:
matchLabels:
app.oam.dev/component: mysvc
```
All of this information is rendered from the `output` keyword in the CUE template.
Similarly, a K8s Job as a component type could be:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -87,7 +138,7 @@ spec:
}
```
## Context
## Context in CUE
When you want to reference the runtime instance name of an app, you can use the `context` keyword rather than requiring it as a `parameter`.
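A minimal sketch of the idea, assuming the definition drops the `name` parameter; `context.name` is filled in by KubeVela at runtime with the component's name:

```cue
parameter: {
	image: string
}
output: {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: name: context.name
	spec: template: spec: containers: [{
		name:  context.name
		image: parameter.image
	}]
}
```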
@@ -223,13 +274,7 @@ spec:
}
```
Please save the example as file `webserver.yaml`, then register the new workload to kubevela.
```shell
$ kubectl apply -f webserver.yaml
```
Next, we can use the `webserver` type workload in our application, below is the example:
Register the new workload type to KubeVela, then create an Application to use it:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -250,42 +295,7 @@ spec:
cpu: "100m"
```
Please save the Application example as file `app.yaml`, then create the new Application.
```shell
kubectl apply -f app.yaml
```
Wait for a while until the status of Application is `running`.
```shell
$ kubectl get application webserver-demo -o yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webserver-demo
namespace: default
...
spec:
components:
- name: hello-world
type: webserver
properties:
cpu: 100m
env:
- name: PORT
value: "8000"
image: crccheck/hello-world
port: 8000
status:
...
services:
- healthy: true
name: hello-world
status: running
```
In the K8s cluster, you will see the following resources are created:
You will finally get the following resources:
```shell
$ kubectl get deployment
@@ -297,3 +307,643 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
hello-world-trait-7bdcff98f7 ClusterIP <your ip> <none> 8000/TCP 32s
```
## Extend CRD Operator as Component Type
Let's use [OpenKruise](https://github.com/openkruise/kruise) as an example of extending a CRD as a KubeVela component.
**The mechanism works for all CRD Operators**.
### Step 1: Install the CRD controller
You need to [install the CRD controller](https://github.com/openkruise/kruise#quick-start) into your K8s system.
### Step 2: Create Component Definition
To register CloneSet (one of the OpenKruise workloads) as a new workload type in KubeVela, the only thing needed is to create a `ComponentDefinition` object for it.
A full example can be found in this [cloneset.yaml](https://github.com/oam-dev/catalog/blob/master/registry/cloneset.yaml).
Several highlights are listed below.
#### 1. Describe The Workload Type
```yaml
...
annotations:
definition.oam.dev/description: "OpenKruise cloneset"
...
```
A one line description of this component type. It will be shown in helper commands such as `$ vela components`.
#### 2. Register its underlying CRD
```yaml
...
workload:
definition:
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
...
```
This is how you register OpenKruise CloneSet's API resource (`apps.kruise.io/v1alpha1.CloneSet`) as the workload type.
KubeVela uses the Kubernetes API resource discovery mechanism to manage all registered capabilities.
#### 3. Define Template
```yaml
...
schematic:
cue:
template: |
output: {
apiVersion: "apps.kruise.io/v1alpha1"
kind: "CloneSet"
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
replicas: parameter.replicas
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
}]
}
}
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Number of pods in the cloneset
replicas: *5 | int
}
```
### Step 3: Register New Component Type to KubeVela
As long as the definition file is ready, you just need to apply it to Kubernetes.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/cloneset.yaml
```
And the new component type will immediately become available for developers to use in KubeVela.
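For example, assuming the definition in `cloneset.yaml` is named `cloneset`, a developer could reference it from an `Application` like this (a hypothetical usage sketch):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: cloneset-demo
spec:
  components:
    - name: web
      type: cloneset
      properties:
        image: nginx:1.14.2
        replicas: 3   # overrides the template's default of 5
```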
## A Full Workflow to Debug and Test CUE Definitions
This section explains how to test and debug CUE templates using the CUE CLI, as well as how to
dry-run your capability definitions via the KubeVela CLI.
### Combine Definition File
Usually we define the Definition file in two parts: one is the YAML part and the other is the CUE part.
Let's name the yaml part as `def.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: microservice
annotations:
definition.oam.dev/description: "Describes a microservice combo Deployment with Service."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
```
Name the CUE template part `def.cue`; we can then use `cue fmt` / `cue vet` to format and validate the CUE file.
```
output: {
// Deployment
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
serviceAccountName: "default"
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
ports: [{
if parameter.containerPort != _|_ {
containerPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
containerPort: parameter.servicePort
}
}]
if parameter.env != _|_ {
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}
resources: {
requests: {
if parameter.cpu != _|_ {
cpu: parameter.cpu
}
if parameter.memory != _|_ {
memory: parameter.memory
}
}
}
}]
}
}
}
}
// Service
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
type: "ClusterIP"
selector: {
"app": context.name
}
ports: [{
port: parameter.servicePort
if parameter.containerPort != _|_ {
targetPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
targetPort: parameter.servicePort
}
}]
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
```
Finally, there is a script, [`hack/vela-templates/mergedef.sh`](https://github.com/oam-dev/kubevela/blob/master/hack/vela-templates/mergedef.sh),
that can merge `def.yaml` and `def.cue` into a complete Definition.
```shell
$ ./hack/vela-templates/mergedef.sh def.yaml def.cue > componentdef.yaml
```
### Debug CUE template
#### use `cue vet` to validate
The `cue vet` command validates CUE files.
```shell
$ cue vet def.cue
output.metadata.name: reference "context" not found:
./def.cue:6:14
output.spec.selector.matchLabels.app: reference "context" not found:
./def.cue:11:11
output.spec.template.metadata.labels.app: reference "context" not found:
./def.cue:16:17
output.spec.template.spec.containers.name: reference "context" not found:
./def.cue:24:13
outputs.service.metadata.name: reference "context" not found:
./def.cue:62:9
outputs.service.metadata.labels.app: reference "context" not found:
./def.cue:64:11
outputs.service.spec.selector.app: reference "context" not found:
./def.cue:70:11
```
The `reference "context" not found` is a very common error, this is because the [`context`](workload-type.md#context) is
a KubeVela inner variable that will be existed in runtime.
But in order to check the correctness of the CUE Template more conveniently. We can add a fake `context` in `def.cue` for test.
Note that you need to remove it when you have finished the development and test.
```CUE
output: {
// Deployment
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
serviceAccountName: "default"
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
...
}]
}
}
}
}
// Service
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
type: "ClusterIP"
selector: {
"app": context.name
}
...
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
context: {
name: string
}
```
Then execute the command:
```shell
$ cue vet def.cue
some instances are incomplete; use the -c flag to show errors or suppress this message
```
`cue vet` only validates the data types. The `-c` flag additionally validates that all regular fields are concrete.
We can fill in concrete data to verify the correctness of the template.
```shell
$ cue vet def.cue -c
context.name: incomplete value string
output.metadata.name: incomplete value string
output.spec.selector.matchLabels.app: incomplete value string
output.spec.template.metadata.labels.app: incomplete value string
output.spec.template.spec.containers.0.image: incomplete value string
output.spec.template.spec.containers.0.name: incomplete value string
output.spec.template.spec.containers.0.ports.0.containerPort: incomplete value int
outputs.service.metadata.labels.app: incomplete value string
outputs.service.metadata.name: incomplete value string
outputs.service.spec.ports.0.port: incomplete value int
outputs.service.spec.ports.0.targetPort: incomplete value int
outputs.service.spec.selector.app: incomplete value string
parameter.image: incomplete value string
parameter.servicePort: incomplete value int
```
Again, use mock data for `context` and `parameter` by appending the following data to your `def.cue` file.
```CUE
context: {
name: "test-app"
}
parameter: {
version: "v2"
image: "image-address"
servicePort: 80
containerPort: 8000
env: {"PORT": "8000"}
cpu: "500m"
memory: "128Mi"
}
```
`cue` will verify the field types against the mock parameters.
You can try any data you want until the following command executes without complaints.
```shell
cue vet def.cue -c
```
#### use `cue export` to check the result
`cue export` can export the result in YAML. It helps you check the correctness of the template against the expected output.
```shell
$ cue export -e output def.cue --out yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app
namespace: default
spec:
selector:
matchLabels:
app: test-app
template:
metadata:
labels:
app: test-app
version: v2
spec:
serviceAccountName: default
terminationGracePeriodSeconds: 30
containers:
- name: test-app
image: image-address
```
```shell
$ cue export -e outputs.service def.cue --out yaml
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
selector:
app: test-app
type: ClusterIP
```
## Dry-Run Application
After we have tested the CUE template, we can use `vela system dry-run` to dry-run an application and test it in a real K8s environment.
This command will show you the real K8s resources that will be created.
First, we need to use `mergedef.sh` to merge the definition and CUE files.
```shell
$ mergedef.sh def.yaml def.cue > componentdef.yaml
```
Then, let's create an Application named `test-app.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: boutique
namespace: default
spec:
components:
- name: frontend
type: microservice
properties:
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
servicePort: 80
containerPort: 8080
env:
PORT: "8080"
cpu: "100m"
memory: "64Mi"
```
Dry run the application by using `vela system dry-run`.
```shell
$ vela system dry-run -f test-app.yaml -d componentdef.yaml
---
# Application(boutique) -- Comopnent(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: boutique
workload.oam.dev/type: microservice
name: frontend
namespace: default
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
version: v1
spec:
containers:
- env:
- name: PORT
value: "8080"
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
name: frontend
ports:
- containerPort: 8080
resources:
requests:
cpu: 100m
memory: 64Mi
serviceAccountName: default
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
app.oam.dev/component: frontend
app.oam.dev/name: boutique
trait.oam.dev/resource: service
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: frontend
type: ClusterIP
---
```
> Note: `vela system dry-run` executes the same logic as the `Application` controller in KubeVela.
> Hence it's helpful for testing and debugging.
### Import Kube Package
KubeVela automatically generates internal packages for all built-in K8s API resources based on K8s OpenAPI.
With the help of `vela system dry-run`, you can use the `import kube package` feature and test it locally.
So some default values in our `def.cue` can be simplified, and the imported package will help you validate the template:
```cue
import (
apps "kube/apps/v1"
corev1 "kube/v1"
)
// output is validated by Deployment.
output: apps.#Deployment
output: {
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
ports: [{
if parameter.containerPort != _|_ {
containerPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
containerPort: parameter.servicePort
}
}]
if parameter.env != _|_ {
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}
resources: {
requests: {
if parameter.cpu != _|_ {
cpu: parameter.cpu
}
if parameter.memory != _|_ {
memory: parameter.memory
}
}
}
}]
}
}
}
}
outputs:{
service: corev1.#Service
}
// Service
outputs: service: {
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
//type: "ClusterIP"
selector: {
"app": context.name
}
ports: [{
port: parameter.servicePort
if parameter.containerPort != _|_ {
targetPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
targetPort: parameter.servicePort
}
}]
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
```
Then merge them.
```shell
mergedef.sh def.yaml def.cue > componentdef.yaml
```
And dry run.
```shell
vela system dry-run -f test-app.yaml -d componentdef.yaml
```

View File

@@ -1,902 +0,0 @@
# Test and Debug CUE Templates
This documentation explains how to test and debug CUE templates using CUE CLI as well as
dry-run your capability definitions via KubeVela CLI.
## Prerequisites
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
* [`vela` (>v1.0.0)](https://kubevela.io/#/en/install?id=_3-optional-get-kubevela-cli)
## CUE CLI basic
Below is the basic CUE data, you can define both schema and value in the same file with the almost same format:
```
a: 1.5
a: float
b: 1
b: int
d: [1, 2, 3]
g: {
h: "abc"
}
e: string
```
Let's write them in a file named `first.cue`.
* Format the CUE file. If you're using Goland or similar JetBrains IDE,
you can [configure save on format](https://wonderflow.info/posts/2020-11-02-goland-cuelang-format/) instead.
This command will not only format the CUE, but also point out the wrong schema. That's very useful.
```shell
cue fmt first.cue
```
* Schema Check, besides `cue fmt`, you can also use `cue vet` to check schema.
```shell
cue vet first.cue
```
* Calculate/Render the result. `cue eval` will calculate the CUE file and render out the result.
You can see the results don't contain `a: float` and `b: int`, because these two variables are calculated.
While the `e: string` doesn't have definitive results, so it keeps as it is.
```shell
$ cue eval first.cue
a: 1.5
b: 1
d: [1, 2, 3]
g: {
h: "abc"
}
e: string
```
* Render for specified result. For example, we want only know the result of `b` in the file, then we can specify the parameter `-e`.
```shell
$ cue eval -e b first.cue
1
```
* Export the result. `cue export` will export the result with final value. It will report an error if some variables are not definitive.
```shell
$ cue export first.cue
e: cannot convert incomplete value "string" to JSON:
./first.cue:9:4
```
We can complete the value by giving a value to `e`, for example:
```shell
echo "e: \"abc\"" >> first.cue
```
Then, the command will work. By default, the result will be rendered in json format.
```shell
$ cue export first.cue
{
"a": 1.5,
"b": 1,
"d": [
1,
2,
3
],
"g": {
"h": "abc"
},
"e": "abc"
}
```
* Export the result in YAML format.
```shell
$ cue export first.cue --out yaml
a: 1.5
b: 1
d:
- 1
- 2
- 3
g:
h: abc
e: abc
```
* Export the result for specified variable.
```shell
$ cue export -e g first.cue
{
"h": "abc"
}
```
For now, you have learned all useful CUE cli operations.
## CUE language basic
* Data structure: Below is the basic data structure of CUE.
```shell
// float
a: 1.5
// int
b: 1
// string
c: "blahblahblah"
// array
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
// bool
e: true
// struct
f: {
a: 1.5
b: 1
d: [1, 2, 3, 1, 2, 3, 1, 2, 3]
g: {
h: "abc"
}
}
// null
j: null
```
* Define a custom CUE type. You can use a `#` symbol to specify some variable represents a CUE type.
```
#abc: string
```
Let's name it `second.cue`. Then the `cue export` won't complain as the `#abc` is a type not incomplete value.
```shell
$ cue export second.cue
{}
```
You can also define a more complex custom struct, such as:
```
#abc: {
x: int
y: string
z: {
a: float
b: bool
}
}
```
It's widely used in KubeVela to define templates and do validation.
* Operator `|`, it represents a value could be both case. Below is an example that the variable `a` could be in string or int type.
```shell
a: string | int
```
* Default Value, we can use `*` symbol to represent a default value for variable. That's usually used with `|`,
which represents a default value for some type. Below is an example that variable `a` is `int` and it's default value is `1`.
```shell
a: *1 | int
```
* Optional Variable. In some cases, a variable could not be used, they're optional variables, we can use `?:` to define it.
In the below example, `a` is an optional variable, `x` and `z` in `#my` is optional while `y` is a required variable.
```
a ?: int
#my: {
x ?: string
y : int
z ?:float
}
```
* Operator `&`, it used to calculate two variables.
```shell
a: *1 | int
b: 3
c: a & b
```
Saving it in `third.cue` file.
You can evaluate the result by using `cue eval`:
```shell
$ cue eval third.cue
a: 1
b: 3
c: 3
```
* Conditional statement, it's really useful when you have some cascade operations that different value affects different results.
```shell
price: number
feel: *"good" | string
// Feel bad if price is too high
if price > 100 {
feel: "bad"
}
price: 200
```
Saving it in `fourth.cue` file.
You can evaluate the result by using `cue eval`:
```shell
$ cue eval fourth.cue
price: 200
feel: "bad"
```
* For Loop: if you want to avoid duplicate, you may want to use for loop.
```
#a: {
"hello": "Barcelona"
"nihao": "Shanghai"
}
for k, v in #a {
"\(k)": {
nameLen: len(v)
value: v
}
}
```
Note that we use `"\( _my-statement_ )"` for inner calculation in string.
For now, you have finished learning all CUE language basic.
## CUE templating and reference
Let's try to define a CUE template with the knowledge just learned.
1. Define a custom CUE type.
```
#Config: {
name: string
value: string
}
```
Let's save it in a file called `deployment.cue`.
2. Define a variable named `parameter`, and use the custom CUE type `#Config`. Using `...` in a list means the
list can be appended with multiple elements. If we don't add `...`, then `[#Config]` means the list can only have one element in it.
```shell
parameter: {
name: string
image: string
config: [...#Config]
}
```
Append it into the `deployment.cue`.
3. Define a more complex `template` variable and reference the variable `parameter`.
```
template: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": parameter.name
}
template: {
metadata: labels: {
"app.oam.dev/component": parameter.name
}
spec: {
containers: [{
name: parameter.name
image: parameter.image
if parameter["config"] != _|_ {
env: parameter.config
}
}]
}}}
}
```
People who are familiar with Kubernetes may have understood that is a template of K8s Deployment. The `parameter` part
is the parameters of the template.
Append it into the `deployment.cue`.
4. Then, let's append the value by:
```
parameter:{
name: "mytest"
image: "nginx:v1"
config: [{name:"a",value:"b"}]
}
```
5. Finally, let's export it in yaml:
```shell
$ cue export deployment.cue -e template --out yaml
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: mytest
env:
- name: a
value: b
image: nginx:v1
metadata:
labels:
app.oam.dev/component: mytest
selector:
matchLabels:
app.oam.dev/component: mytest
```
## A Full Workflow
Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates.md).
This section will guide you some useful tips to write a definition in CUE.
### Combine Definition File
Usually we define the Definition file in two parts, one is the yaml part and the other is the CUE part.
Let's name the yaml part as `def.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: microservice
annotations:
definition.oam.dev/description: "Describes a microservice combo Deployment with Service."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
```
And the CUE Template part as `def.cue`, then we can use `cue fmt` / `cue vet` to format and validate the CUE file.
```
output: {
// Deployment
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
serviceAccountName: "default"
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
ports: [{
if parameter.containerPort != _|_ {
containerPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
containerPort: parameter.servicePort
}
}]
if parameter.env != _|_ {
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}
resources: {
requests: {
if parameter.cpu != _|_ {
cpu: parameter.cpu
}
if parameter.memory != _|_ {
memory: parameter.memory
}
}
}
}]
}
}
}
}
// Service
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
type: "ClusterIP"
selector: {
"app": context.name
}
ports: [{
port: parameter.servicePort
if parameter.containerPort != _|_ {
targetPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
targetPort: parameter.servicePort
}
}]
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
```
And finally there's a script [`hack/vela-templates/mergedef.sh`](https://github.com/oam-dev/kubevela/blob/master/hack/vela-templates/mergedef.sh)
can merge the `def.yaml` and `def.cue` to a completed Definition.
```shell
$ ./hack/vela-templates/mergedef.sh def.yaml def.cue > componentdef.yaml
```
### Debug CUE template
#### use `cue vet` to validate
The `cue vet` validates CUE files well.
```shell
$ cue vet def.cue
output.metadata.name: reference "context" not found:
./def.cue:6:14
output.spec.selector.matchLabels.app: reference "context" not found:
./def.cue:11:11
output.spec.template.metadata.labels.app: reference "context" not found:
./def.cue:16:17
output.spec.template.spec.containers.name: reference "context" not found:
./def.cue:24:13
outputs.service.metadata.name: reference "context" not found:
./def.cue:62:9
outputs.service.metadata.labels.app: reference "context" not found:
./def.cue:64:11
outputs.service.spec.selector.app: reference "context" not found:
./def.cue:70:11
```
The `reference "context" not found` is a very common error, this is because the [`context`](workload-type.md#context) is
a KubeVela inner variable that will be existed in runtime.
But in order to check the correctness of the CUE Template more conveniently. We can add a fake `context` in `def.cue` for test.
Note that you need to remove it when you have finished the development and test.
```CUE
output: {
// Deployment
apiVersion: "apps/v1"
kind: "Deployment"
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
serviceAccountName: "default"
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
...
}]
}
}
}
}
// Service
outputs: service: {
apiVersion: "v1"
kind: "Service"
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
type: "ClusterIP"
selector: {
"app": context.name
}
...
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
context: {
name: string
}
```
Then execute the command:
```shell
$ cue vet def.cue
some instances are incomplete; use the -c flag to show errors or suppress this message
```
`cue vet` will only validates the data type. The `-c` validates that all regular fields are concrete.
We can fill in the concrete data to verify the correctness of the template.
```shell
$ cue vet def.cue -c
context.name: incomplete value string
output.metadata.name: incomplete value string
output.spec.selector.matchLabels.app: incomplete value string
output.spec.template.metadata.labels.app: incomplete value string
output.spec.template.spec.containers.0.image: incomplete value string
output.spec.template.spec.containers.0.name: incomplete value string
output.spec.template.spec.containers.0.ports.0.containerPort: incomplete value int
outputs.service.metadata.labels.app: incomplete value string
outputs.service.metadata.name: incomplete value string
outputs.service.spec.ports.0.port: incomplete value int
outputs.service.spec.ports.0.targetPort: incomplete value int
outputs.service.spec.selector.app: incomplete value string
parameter.image: incomplete value string
parameter.servicePort: incomplete value int
```
Again, use the mock data for the `context` and `parameter`, append these following data in your `def.cue` file.
```CUE
context: {
name: "test-app"
}
parameter: {
version: "v2"
image: "image-address"
servicePort: 80
containerPort: 8000
env: {"PORT": "8000"}
cpu: "500m"
memory: "128Mi"
}
```
The `cue` will verify the field type in the mock parameter.
You can try any data you want until the following command is executed without complains.
```shell
cue vet def.cue -c
```
#### use `cue export` to check the result
`cue export` can export the result in YAML. It helps you check the correctness of the template against the expected output.
```shell
$ cue export -e output def.cue --out yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app
namespace: default
spec:
selector:
matchLabels:
app: test-app
template:
metadata:
labels:
app: test-app
version: v2
spec:
serviceAccountName: default
terminationGracePeriodSeconds: 30
containers:
- name: test-app
image: image-address
```
```shell
$ cue export -e outputs.service def.cue --out yaml
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
selector:
app: test-app
type: ClusterIP
```
## Dry-Run Application
After we have tested the CUE template, we can use `vela system dry-run` to dry-run an application and test it in a real K8s environment.
This command will show you the real k8s resources that will be created.
First, we need use `mergedef.sh` to merge the definition and cue files.
```shell
$ mergedef.sh def.yaml def.cue > componentdef.yaml
```
Then, let's create an Application named `test-app.yaml`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: boutique
namespace: default
spec:
components:
- name: frontend
type: microservice
properties:
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
servicePort: 80
containerPort: 8080
env:
PORT: "8080"
cpu: "100m"
memory: "64Mi"
```
Dry run the application by using `vela system dry-run`.
```shell
$ vela system dry-run -f test-app.yaml -d componentdef.yaml
---
# Application(boutique) -- Comopnent(frontend)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.oam.dev/component: frontend
app.oam.dev/name: boutique
workload.oam.dev/type: microservice
name: frontend
namespace: default
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
version: v1
spec:
containers:
- env:
- name: PORT
value: "8080"
image: registry.cn-hangzhou.aliyuncs.com/vela-samples/frontend:v0.2.2
name: frontend
ports:
- containerPort: 8080
resources:
requests:
cpu: 100m
memory: 64Mi
serviceAccountName: default
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
app.oam.dev/component: frontend
app.oam.dev/name: boutique
trait.oam.dev/resource: service
trait.oam.dev/type: AuxiliaryWorkload
name: frontend
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: frontend
type: ClusterIP
---
```
> Note: `vela system dry-run` will execute the same logic of `Application` controller in KubeVela.
> Hence it's helpful for you to test or debug.
### Import Kube Package
KubeVela automatically generates internal packages for all built-in K8s API resources based on K8s OpenAPI.
With the help of `vela system dry-run`, you can use the `import kube package` feature and test it locally.
So some default values in our `def.cue` can be simplified, and the imported package will help you validate the template:
```cue
import (
apps "kube/apps/v1"
corev1 "kube/v1"
)
// output is validated by Deployment.
output: apps.#Deployment
output: {
metadata: {
name: context.name
namespace: "default"
}
spec: {
selector: matchLabels: {
"app": context.name
}
template: {
metadata: {
labels: {
"app": context.name
"version": parameter.version
}
}
spec: {
terminationGracePeriodSeconds: parameter.podShutdownGraceSeconds
containers: [{
name: context.name
image: parameter.image
ports: [{
if parameter.containerPort != _|_ {
containerPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
containerPort: parameter.servicePort
}
}]
if parameter.env != _|_ {
env: [
for k, v in parameter.env {
name: k
value: v
},
]
}
resources: {
requests: {
if parameter.cpu != _|_ {
cpu: parameter.cpu
}
if parameter.memory != _|_ {
memory: parameter.memory
}
}
}
}]
}
}
}
}
outputs:{
service: corev1.#Service
}
// Service
outputs: service: {
metadata: {
name: context.name
labels: {
"app": context.name
}
}
spec: {
//type: "ClusterIP"
selector: {
"app": context.name
}
ports: [{
port: parameter.servicePort
if parameter.containerPort != _|_ {
targetPort: parameter.containerPort
}
if parameter.containerPort == _|_ {
targetPort: parameter.servicePort
}
}]
}
}
parameter: {
version: *"v1" | string
image: string
servicePort: int
containerPort?: int
// +usage=Optional duration in seconds the pod needs to terminate gracefully
podShutdownGraceSeconds: *30 | int
env: [string]: string
cpu?: string
memory?: string
}
```
Then merge them.
```shell
mergedef.sh def.yaml def.cue > componentdef.yaml
```
And dry run.
```shell
vela system dry-run -f test-app.yaml -d componentdef.yaml
```

View File

@@ -1,4 +1,4 @@
# Defining Traits
# Defining Traits in CUE
In this section, we will introduce how to define a trait with a CUE template.

View File

@@ -1,21 +1,16 @@
# Use Helm To Define a Component
# Use Helm To Extend a Component type
This documentation explains how to use a Helm chart to define an application component.
## Install fluxcd/flux2 as dependencies
Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates.md).
Using Helm as a workload depends on several CRDs and controllers from [fluxcd/flux2](https://github.com/fluxcd/flux2); make sure you have them installed before continuing.
## Prerequisite
It's worth to note that flux2 doesn't offer an official Helm chart to install,
so we provide a chart which only includes minimal dependencies KubeVela relies on as an alternative choice.
* [fluxcd/flux2](../install.md#3-optional-install-flux2): make sure you have installed flux2 as described in the [installation guide](https://kubevela.io/#/en/install).
Install the minimal flux2 chart provided by KubeVela:
```shell
$ helm install --create-namespace -n flux-system helm-flux http://oam.dev/catalog/helm-flux2-0.1.0.tgz
```
## Write ComponentDefinition
## Write WorkloadDefinition
Here is an example `WorkloadDefinition` about how to use Helm as schematic module.
Here is an example `ComponentDefinition` that uses Helm as the schematic module.
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -42,7 +37,7 @@ spec:
Just like using CUE as the schematic module, there are some rules and contracts for using a Helm chart as the schematic module.
- `.spec.definitionRef` is required to indicate the main workload(Group/Verison/Kind) in your Helm chart.
- `.spec.workload` is required to indicate the main workload (apiVersion/kind) in your Helm chart.
Only one workload is allowed in one Helm chart.
For example, in our sample chart, the core workload is `deployments.apps/v1`; other resources will also be deployed, but KubeVela's mechanisms won't work for them.
- `.spec.schematic.helm` contains information of Helm release & repository.
@@ -50,7 +45,7 @@ For example, in our sample chart, the core workload is `deployments.apps/v1`, ot
There are two fields, `release` and `repository`, in the `.spec.schematic.helm` section; these two fields align with the APIs of `fluxcd/flux2`. The spec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and the spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
In short, just like the fields shown in the sample, the Helm schematic module describes a specific Helm chart release and its repository.
## Create an Application using the helm based WorkloadDefinition
## Create an Application using the helm based ComponentDefinition
Here is an example `Application`.
@@ -68,6 +63,7 @@ spec:
image:
tag: "5.1.2"
```
A Helm module workload will use the data in `settings` as [Helm chart values](https://github.com/captainroy-hy/podinfo/blob/master/charts/podinfo/values.yaml).
You can learn the schema of the settings by reading the `README.md` of the Helm
chart, and the schema totally aligns with

View File

@@ -10,7 +10,7 @@ To tell KubeVela which one is the main workload, you must follow these two steps
#### 1. Declare main workload's resource definition
The field `.spec.definitionRef` in `WorkloadDefinition` is used to record the
The field `.spec.definitionRef` in `ComponentDefinition` is used to record the
resource definition of the main workload.
The name should be in the format: `<resource>.<group>`.
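For example, if the main workload of the chart is a Kubernetes Deployment, the reference could look like this (a sketch showing only the `name` field):

```yaml
spec:
  definitionRef:
    name: deployments.apps
```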

View File

@@ -131,7 +131,28 @@ These steps will install KubeVela controller and its dependency.
helm install --create-namespace -n vela-system --set admissionWebhooks.certManager.enabled=true kubevela kubevela/vela-core
```
## 3. (Optional) Get KubeVela CLI
## 3. (Optional) Install flux2
This installation step is optional; it's required if you want to register [Helm Chart](https://helm.sh/)-based capabilities in KubeVela.
KubeVela relies on several CRDs and controllers from [fluxcd/flux2](https://github.com/fluxcd/flux2).
| CRD | Controller Image |
| ----------- | ----------- |
| helmrepositories.source.toolkit.fluxcd.io | fluxcd/source-controller:v0.9.0 |
| helmcharts.source.toolkit.fluxcd.io | - |
| buckets.source.toolkit.fluxcd.io | - |
| gitrepositories.source.toolkit.fluxcd.io | - |
| helmreleases.helm.toolkit.fluxcd.io | fluxcd/helm-controller:v0.8.0 |
You can install the whole flux2 from their [official website](https://github.com/fluxcd/flux2)
or install the chart with minimal parts provided by KubeVela:
```shell
$ helm install --create-namespace -n flux-system helm-flux http://oam.dev/catalog/helm-flux2-0.1.0.tgz
```
## 4. (Optional) Get KubeVela CLI
Here are three ways to get the KubeVela CLI:
@@ -173,7 +194,7 @@ sudo mv ./vela /usr/local/bin/vela
<!-- tabs:end -->
## 4. (Optional) Sync Capability from Cluster
## 5. (Optional) Sync Capability from Cluster
If you want to run application from `vela` cli, then you should sync capabilities first like below:
@@ -205,7 +226,7 @@ worker Describes long-running, scalable, containerized services that running
receive external network traffic.
```
## 5. (Optional) Clean Up
## 6. (Optional) Clean Up
<details>
Run:

View File

@@ -0,0 +1,31 @@
# Component Definition
In the following tutorial, you will learn how to define your own Component Definition to extend KubeVela.
Before continuing, make sure you have learned the basic concept of [Definition Objects](definition-and-templates.md) in KubeVela.
Generally, there are two kinds of capability resources you can find in the K8s ecosystem.
1. Compositions of K8s built-in resources: in this case, you can easily use them by applying YAML files.
This is widely distributed as Helm charts, for example the [wordpress helm chart](https://bitnami.com/stack/wordpress/helm) and [mysql helm chart](https://bitnami.com/stack/mysql/helm).
2. CRD (Custom Resource Definition) operators: in this case, you need to install the operator and create CR (Custom Resource) instances to use it.
Widely used examples include the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) and [TiDB Operator](https://github.com/pingcap/tidb-operator).
In both cases, they can be extended into KubeVela as component types.
## Extend helm chart as KubeVela Component
In this case, it is very straightforward to register a Helm chart as a KubeVela capability.
KubeVela will deploy the Helm chart for you, and the extended chart can then use all the KubeVela traits.
Refer to ["Use Helm To Extend a Component type"](https://kubevela.io/#/en/helm/component) for details on this case; a minimal sketch is shown below.
## Extend CRD Operator as KubeVela Component
In this case, you will most likely write a CUE template to do the abstraction and encapsulation.
KubeVela will render the CUE template and deploy the rendered resources. This is the most native and powerful way to extend KubeVela.
Refer to ["Use CUE to extend Component type"](https://kubevela.io/#/en/cue/component) for details on this case; a minimal sketch is shown below.

View File

@@ -8,7 +8,7 @@ This documentation explains how to register and manage available *components* an
## Overview
Essentially, a definition object in KubeVela consists of three sections:
- **Capability Indexer** defined by `spec.definitionRef`
- **Capability Indexer** defined by `spec.workload` in `ComponentDefinition` and `spec.definitionRef` in `TraitDefinition`.
- this is for discovering the provider of this capability.
- **Interoperability Fields**
  - they are for the platform to ensure a trait can work with a given workload type. Hence, only `TraitDefinition` has these fields.

View File

@@ -43,7 +43,7 @@ spec:
## Encapsulation
With `Application` provides an abstraction to deploy apps, each *component* and *trait* specification in this application is actually enforced by another set of building block objects named *"definitions"*, for example, [`WorkloadDefinition`](https://github.com/oam-dev/kubevela/tree/master/docs/examplesapplication#workload-definition) and [`TraitDefinition`](https://github.com/oam-dev/kubevela/tree/master/docs/examplesapplication#scaler-trait-definition).
While `Application` provides an abstraction to deploy apps, each *component* and *trait* specification in the application is actually enforced by another set of building block objects named *"definitions"*, for example [`ComponentDefinition`](https://github.com/oam-dev/kubevela/tree/master/docs/examplesapplication#workload-definition) and [`TraitDefinition`](https://github.com/oam-dev/kubevela/tree/master/docs/examplesapplication#scaler-trait-definition).
Definitions are designed to leverage encapsulation technologies such as `CUE`, `Helm` and `Terraform modules` to template and parameterize Kubernetes resources as well as cloud services. This enables users to assemble templated capabilities (defined via Helm charts, CUE modules, etc.) into an `Application` by simply providing parameters, i.e. they only need to interact with a unified abstraction. For instance, the `application-sample` above models a Kubernetes Deployment (component `foo`) to run containers and an Alibaba Cloud OSS bucket (component `bar`) alongside it.

View File

@@ -1,17 +1,31 @@
# Trait Definition
In the following tutorial, you will learn about definition objects with [KubeWatch](https://github.com/wonderflow/kubewatch) as example.
In the following tutorial, you will learn how to define your own trait to extend KubeVela.
> This is a fork because we make it work as CRD controller. So user could use CRD(`kubewatches.labs.bitnami.com`) to describe K8s resources they want to watch including any types of CRD.
Before continuing, make sure you have learned the basic concepts of [Definition Objects](definition-and-templates.md) in KubeVela.
## Step 1: Create Trait Definition
The KubeVela trait system is very powerful. Generally, you can define a trait (e.g. one that patches the workload) with very little code;
writing a small CUE template is enough. Refer to ["Defining Traits in CUE"](https://kubevela.io/#/en/cue/trait) for
more details on this case. A minimal sketch of such a patch trait is shown below.
To register [KubeWatch](https://github.com/wonderflow/kubewatch) as a new trait in KubeVela,
the only thing needed is to create an `TraitDefinition` object for it.
A full example can be found in this [kubewatch.yaml](https://github.com/oam-dev/catalog/blob/master/registry/kubewatch.yaml).
## Extend CRD Operator as Trait
In the following tutorial, you will learn how to extend KubeVela with a new trait, using [KEDA](https://keda.sh/) as an example.
KEDA is a Kubernetes-based Event Driven Autoscaler.
### Step 1: Install the CRD controller
[Install the KEDA controller](https://keda.sh/docs/2.2/deploy/) into your K8s system.
### Step 2: Create Trait Definition
To register KEDA as a new trait in KubeVela, the only thing needed is to create a `TraitDefinition` object for it.
A full example can be found in this [keda.yaml](https://github.com/oam-dev/catalog/blob/master/registry/keda.yaml); a condensed sketch is shown below.
Several highlights of this definition are listed below.
### 1. Describe The Trait Usage
#### 1. Describe The Trait Usage
```yaml
...
@@ -24,7 +38,7 @@ Several highlights are list below.
We use the `definition.oam.dev/description` annotation to add a one-line description for this trait.
It will be shown in helper commands such as `$ vela traits`.
### 2. Register API Resource
#### 2. Register API Resource
```yaml
...
@@ -40,26 +54,8 @@ This is how you register Kubewatch's API resource (`kubewatches.labs.bitnami.com
KubeVela uses the Kubernetes API resource discovery mechanism to manage all registered capabilities.
### 3. Configure Installation Dependency
```yaml
...
extension:
install:
helm:
repo: my-repo
name: kubewatch
url: https://wonderflow.info/kubewatch/archives/
version: 0.1.0
...
```
The `extension.install` field is used by KubeVela to automatically install the dependency (if any) when the new workload
type added to KubeVela. The dependency is described by a Helm chart custom resource.
We highly recommend you configure this field since otherwise,
users will have to install dependencies such as this kubewatch controller manually before they can use your new trait.
### 4. Define Workloads this trait can apply to
#### 3. Define Workloads this trait can apply to
```yaml
...
@@ -85,7 +81,7 @@ spec:
...
```
### 5. Define the field if the trait can receive workload reference
#### 4. Define the field if the trait can receive workload reference
```yaml
...
@@ -101,7 +97,7 @@ from this reference.
With the help of the OAM framework, end users never need to write relationship info such as `targetReference` themselves.
Platform builders only need to declare this info here once; the OAM framework will then glue the trait and the workload together.
### 6. Define Template
#### 5. Define Template
```yaml
...
@@ -120,7 +116,7 @@ This is a CUE based template to define end user abstraction for this workload ty
Note that in this example, we only need to provide the webhook URL as a parameter to use KubeWatch.
## Step 2: Register New Trait to KubeVela
### Step 2: Register New Trait to KubeVela
Once the definition file is ready, you just need to apply it to Kubernetes.
@@ -129,27 +125,4 @@ $ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/regi
```
And the new trait will immediately become available for developers to use in KubeVela.
It may take some time to become available as the dependency (a Helm chart) needs to be installed.
## Step 3: Verify
```bash
$ vela traits
"my-repo" has been added to your repositories
Successfully installed chart (kubewatch) with release name (kubewatch)
Automatically discover capabilities successfully ✅ Add(1) Update(0) Delete(0)
TYPE CATEGORY DESCRIPTION
+kubewatch trait Add a watch for resource
NAME DESCRIPTION APPLIES TO
autoscale Automatically scale the app following certain triggers or metrics webservice
worker
kubewatch Add a watch for resource
metrics Configure metrics targets to be monitored for the app webservice
task
rollout Configure canary deployment strategy to release the app webservice
route Configure route policy to the app webservice
scaler Manually scale the app webservice
worker
```

View File

@@ -1,116 +0,0 @@
# Workload Definition
In the following tutorial, you will learn about definition objects with OpenFaaS workload type.
## Step 1: Create Workload Definition
To register OpenFaaS as a new workload type in KubeVela, the only thing needed is to create an OAM `WorkloadDefinition` object for it. A full example can be found in this [openfaas.yaml](https://github.com/oam-dev/catalog/blob/master/registry/openfaas.yaml). Several highlights are listed below.
### 1. Describe The Workload Type
```yaml
...
annotations:
definition.oam.dev/description: "OpenFaaS function"
...
```
A one line description of this workload type. It will be shown in helper commands such as `$ vela workloads`.
### 2. Register API Resource
```yaml
...
spec:
definitionRef:
name: functions.openfaas.com
...
```
This is how you register OpenFaaS Function's API resource (`functions.openfaas.com`) as the workload type. KubeVela uses Kubernetes API resource discovery mechanism to manage all registered capabilities.
### 3. Configure Installation Dependency
```yaml
...
extension:
install:
helm:
repo: openfaas
name: openfaas
namespace: openfaas
url: https://openfaas.github.io/faas-netes/
version: 6.1.2
...
```
The `extension.install` field is used by KubeVela to automatically install the dependency (if any) when the new workload type is added to KubeVela. The dependency is described by a Helm chart custom resource. We highly recommend you configure this field since otherwise, users will have to install dependencies like the OpenFaaS operator manually before they can use your new workload type.
### 4. Define Template
```yaml
...
template: |
output: {
apiVersion: "openfaas.com/v1"
kind: "Function"
spec: {
handler: parameter.handler
image: parameter.image
name: context.name
}
}
parameter: {
image: string
handler: string
}
```
This is a CUE based template to define end user abstraction for this workload type. Please check the [templating documentation](../cue/workload-type.md) for more detail.
Note that OpenFaaS also requires a namespace and secret configured before first-time usage:
<details>
```bash
# create namespace
$ kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
# generate a random password
$ PASSWORD=$(head -c 12 /dev/urandom | shasum| cut -d' ' -f1)
$ kubectl -n openfaas create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password="$PASSWORD"
```
</details>
## Step 2: Register New Workload Type to KubeVela
As long as the definition file is ready, you just need to apply it to Kubernetes.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/catalog/master/registry/openfaas.yaml
```
And the new workload type will immediately become available for developers to use in KubeVela.
It may take some time to become available as the dependency (a Helm chart) needs to be installed.
## Step 3: Verify
```bash
$ vela workloads
Successfully installed chart (openfaas) with release name (openfaas)
"my-repo" has been added to your repositories
Automatically discover capabilities successfully ✅ Add(1) Update(0) Delete(0)
TYPE CATEGORY DESCRIPTION
+openfaas workload OpenFaaS function workload
NAME DESCRIPTION
openfaas OpenFaaS function workload
task One-off task to run a piece of code or script to completion
webservice Long-running scalable service with stable endpoint to receive external traffic
worker Long-running scalable backend worker without network endpoint
```

View File

@@ -1,4 +1,4 @@
apiVersion: core.oam.dev/v1alpha2
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations: