Compare commits

...

35 Commits

Author SHA1 Message Date
Jianbo Sun
5d9b3fbce6 Merge pull request #1776 from yangsoon/rfc-log-system
Cherry-pick the feature (refactor the log system) to the release-1.0 branch
2021-06-08 19:35:13 +08:00
yangsoon
6c0364453f add options (#1775)
1. add ConcurrentReconciles for setting the concurrent reconcile number of the controller
2. add DependCheckWait for setting the time to wait for the ApplicationConfiguration's dependent resources to become ready
2021-06-08 15:20:55 +08:00
Kinso
2aa5b47e45 fix(log): debug level flag (#1768)
Co-authored-by: kinsolee <lijingzhao@forchange.tech>
2021-06-08 14:40:31 +08:00
yangsoon
efa60a67a1 Add Logging Convention in CONTRIBUTING.md (#1762)
* add logging convention in contributing

* fix log
2021-06-08 14:32:26 +08:00
yangsoon
89b479e1bc improve log system in appconfig (#1758) 2021-06-08 14:32:17 +08:00
yangsoon
d591d6ad64 Improve the logging system (#1735)
* change to klog/v2

* add logfilepath opt in helm chart

* replace klog.InfoF with klog.InfoS and improve messages

Remove string formatting from log messages, improve messages in klog.InfoS, and use lowerCamelCase to format name arguments in klog.InfoS

* fix klog.Error

For expected errors (errors that can happen during routine operations), use klog.InfoS and pass the error in the err key instead.

* use klog.KObj and klog.KRef for Kubernetes objects

ensure that Kubernetes object references are consistent within the codebase

* enable setting the log debug level

* add log-file-max-size
2021-06-08 14:31:53 +08:00
Jianbo Sun
916b3b52de remove restful API from website docs (#1772) 2021-06-08 10:38:43 +08:00
Zheng Xi Zhou
df0eaaa74a Generate OpenAPI JSON schema for Terraform Component (#1738) (#1753)
Fix #1736
2021-06-04 13:12:44 +08:00
yangsoon
2740067eb5 fix bug: When the Component contains multiple traits of the same type, the status of the trait in the Application is reported incorrectly (#1731) (#1743)
* fix status

* add test

* fix webhook

* add parameter context for customStatus
2021-06-02 10:31:06 +08:00
Jianbo Sun
53b49aca53 Merge pull request #1676 from yangsoon/cherry-pick-doc
cherry-pick some docs from master to 1.0
2021-05-18 15:11:36 +08:00
yangsoon
b75de8aa94 Explain vela next (#1671)
* Explain vela next

* fix def

Co-authored-by: Lei Zhang <resouer@gmail.com>
2021-05-18 12:24:05 +08:00
Jianbo Sun
2fc6971261 Merge pull request #1669 from yangsoon/cherrypick-1.0
cherry-pick master into 1.0 to fix bugs and add necessary features
2021-05-17 22:36:38 +08:00
yangsoon
27a5630652 Enable Dynamic Admission Control for Application (#1619)
* add webhook

* fix test

* fix validating_handler
2021-05-17 15:00:25 +08:00
Yue Wang
14334dd954 make ResourceTracker own cluster-scoped resources (#1634)
add e2e tests and unit tests

Signed-off-by: roy wang <seiwy2010@gmail.com>
2021-05-17 12:14:39 +08:00
Yue Wang
8948518d9f fix staticcheck lint (#1657)
make CI work consistently with Makefile/staticcheck

Signed-off-by: roy wang <seiwy2010@gmail.com>
2021-05-17 12:14:19 +08:00
Zheng Xi Zhou
368dd687c2 Fix Terraform application status issue (#1611)
* Fix Terraform application status issue

Fix #1599

* add unit tests

* fix import issue
2021-05-17 12:12:36 +08:00
yangsoon
34bee10bc3 fix bug: Use stricter syntax check for CUE (#1643)
* fix cue render

* fix e2e-api-test

* add test
2021-05-17 12:12:26 +08:00
Lei Zhang (Harry)
d6aadacc9b Update introduction (#1651) 2021-05-17 12:12:15 +08:00
Lei Zhang (Harry)
ea738c119a update the revision doc (#1650) 2021-05-17 12:12:02 +08:00
wyike
7cf425969a fix bug where an empty rolloutBatch panics the whole controller (#1646)
* add webhook for app and avoid controller panic

neat imports

* add test for rollout verify
2021-05-17 12:11:37 +08:00
yangsoon
b5cd647876 fix docs (#1636) 2021-05-17 12:10:53 +08:00
wyike
d2a28cbb8e fix application rollout annotation false issue (#1633)
* fix annotation false issue

neat imports

* rewrite judge logic

* add webhook

* add webhook verify annotation

* fix test

fix bug

* modify error message
2021-05-17 12:10:33 +08:00
wyike
a350a44982 add app embed rollout pause test (#1626)
* add pause test

* verify update plan shouldn't generate new revision

fix error test
2021-05-17 11:04:27 +08:00
Zheng Xi Zhou
e01ff21d30 Update samples for "vela show xxx --web" (#1616)
* Update samples for "vela show xxx --web"

Also stop generating components/traits docs under
docs/en/end-user and remove the command from Makefile

Fix #1602

* fix doc build issue

* Update references/plugins/references.go

Co-authored-by: Jianbo Sun <wonderflow.sun@gmail.com>

* Enhance --web feature for "vela show"

- Find the capability in "vela-system" if not found in the current ns
- Don't depend on local capabilities

* add ut for GetCapabilityByName in references/plugins/cluster.go

* add ut for GetNamespacedCapabilitiesFromCluster in references/plugins/cluster.go

* add ut for GenerateTerraformCapabilityProperties in references/plugins/references.go

* fix import issue

Co-authored-by: Jianbo Sun <wonderflow.sun@gmail.com>
2021-05-17 11:04:06 +08:00
Zheng Xi Zhou
2fdd33a80a Rename function "SyncDefinitionToLocal" (#1607)
As we don't need to sync the definition .yaml and .cue files to local and
then extract them, and since there is a similar function "SyncDefinitionsToLocal"
right beside it, rename the function "SyncDefinitionToLocal" in order
not to cause confusion
2021-05-17 11:03:21 +08:00
wyike
7cee9cef98 fix error test and add middle state test (#1622) 2021-05-17 10:55:15 +08:00
yangsoon
01dfedfac2 application supports specifying different versions of Definition (#1597)
* app support specify the version of definition

* add e2e-test

* add docs

* add helm related test

* fix doc

* add more test

* fix docs
2021-05-17 10:53:31 +08:00
Zheng Xi Zhou
a05a983a9a Rename file names for traits "cpuscaler" and "scaler" (#1617)
As the file names (.cue/.yaml) of traits "cpuscaler" and "scaler"
are "cpuhpa" and "manaualscale" respectively, rename them in
order to avoid confusion
2021-05-17 10:53:17 +08:00
yangsoon
53b28d61a9 fix ci (#1610) 2021-05-17 10:52:55 +08:00
Zheng Xi Zhou
a8605a02f9 Fix instability of ApplicationContext UT (#1609)
* Fix instability of ApplicationContext UT

* fix import order

* fix compatibility issue
2021-05-17 10:52:26 +08:00
Lei Zhang
acb6f3ba04 Fix docs 2021-05-17 10:52:14 +08:00
Jianbo Sun
0d264c16f7 Merge pull request #1589 from oam-dev/master
merge master into 1.0 as they are all patches
2021-04-30 19:55:29 +08:00
Jianbo Sun
4be93ad8b8 update github token (#1527) 2021-04-19 13:04:00 +08:00
Jianbo Sun
290b6c0815 Merge pull request #1506 from oam-dev/master
Merge master into release-1.0 as patches
2021-04-16 23:51:28 +08:00
Jianbo Sun
b3fd25cae1 Merge pull request #1489 from oam-dev/master
merge commits from master into release-1.0
2021-04-14 16:52:25 +08:00
182 changed files with 5638 additions and 2933 deletions

View File

@@ -14,6 +14,11 @@ jobs:
- uses: actions/setup-node@v1
with:
node-version: '12.x'
- uses: actions/setup-python@v2
with:
python-version: '3.7'
- name: Clean md files
run: python ./hack/website/clean-md.py ./docs/en
- name: Test Build
env:
VERSION: ${{ github.ref }}

View File

@@ -203,6 +203,11 @@ jobs:
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout
uses: actions/checkout@v2
with:
@@ -215,8 +220,11 @@ jobs:
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-pkg-
- name: Install StaticCheck
run: GO111MODULE=off go get honnef.co/go/tools/cmd/staticcheck
- name: Static Check
run: go run honnef.co/go/tools/cmd/staticcheck -- ./...
run: staticcheck ./...
- name: License Header Check
run: make check-license-header
@@ -293,4 +301,4 @@ jobs:
- name: Run cross-build
run: make cross-build
- name: Run compress binary
run: make compress
run: make compress

View File

@@ -14,6 +14,11 @@ jobs:
- uses: actions/setup-node@v1
with:
node-version: '12.x'
- uses: actions/setup-python@v2
with:
python-version: '3.7'
- name: Clean md files
run: python ./hack/website/clean-md.py ./docs/en
- name: Sync to kubevela.io Repo
env:
SSH_DEPLOY_KEY: ${{ secrets.GH_PAGES_DEPLOY }}

View File

@@ -109,8 +109,45 @@ Start to test.
```
make e2e-test
```
## Logging Conventions
### Contribute Docs
### Structured logging
We recommend using `klog.InfoS` to structure the log. The `msg` argument needs to start with a capital letter,
and name arguments should always use lowerCamelCase.
```golang
// func InfoS(msg string, keysAndValues ...interface{})
klog.InfoS("Reconcile traitDefinition", "traitDefinition", klog.KRef(req.Namespace, req.Name))
// output:
// I0605 10:10:57.308074 22276 traitdefinition_controller.go:59] "Reconcile traitDefinition" traitDefinition="vela-system/expose"
```
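For contrast, here is a minimal sketch of the migration this convention implies, going from format strings to key/value pairs; the `app` variable is hypothetical:
```golang
// Before: data is interpolated into the message string.
klog.Infof("Reconcile application %s/%s", app.Namespace, app.Name)

// After: a fixed message plus lowerCamelCase key/value pairs.
klog.InfoS("Reconcile application", "application", klog.KRef(app.Namespace, app.Name))
```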
### Use `klog.KObj` and `klog.KRef` for Kubernetes objects
`klog.KObj` and `klog.KRef` unify the output of Kubernetes objects.
```golang
// KObj is used to create ObjectRef when logging information about Kubernetes objects
klog.InfoS("Start to reconcile", "appDeployment", klog.KObj(appDeployment))
// KRef is used to create ObjectRef when logging information about Kubernetes objects without access to metav1.Object
klog.InfoS("Reconcile application", "application", klog.KRef(req.Namespace, req.Name))
```
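The log-system commits above also distinguish expected from unexpected errors. A sketch of that convention, with hypothetical `client.Update` and `checkDependencies` calls standing in for real operations:
```golang
// Unexpected error: use klog.ErrorS and pass the error as the first argument.
if err := client.Update(ctx, app); err != nil {
	klog.ErrorS(err, "Failed to update application", "application", klog.KObj(app))
}

// Expected error during routine operation: log at info level
// and pass the error in the err key instead.
if err := checkDependencies(app); err != nil {
	klog.InfoS("Dependent resources not ready, will retry", "err", err)
}
```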
### Logging Level
[This file](https://github.com/oam-dev/kubevela/blob/master/pkg/controller/common/logs.go) contains KubeVela's log levels;
you can set the log level with `klog.V(level)`.
```golang
// you can use klog.V(common.LogDebug) to print debug log
klog.V(common.LogDebug).InfoS("Successfully applied components", "workloads", len(workloads))
```
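For reference, log-level constants of this kind could be declared as below. This is a sketch only; the names follow the usage above, but the exact values in `pkg/controller/common/logs.go` may differ:
```golang
package common

import "k8s.io/klog/v2"

// Illustrative verbosity levels; higher values mean chattier logs.
const (
	LogInfo  klog.Level = 0 // default, always printed
	LogDebug klog.Level = 1 // printed when --log-debug raises verbosity
)
```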
More details can be found in the [Structured Logging Guide](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/migration-to-structured-logging.md#structured-logging-in-kubernetes).
## Contribute Docs
Please read [the documentation](https://github.com/oam-dev/kubevela/tree/master/docs/README.md) before contributing to the docs.

View File

@@ -65,7 +65,6 @@ dashboard-build:
doc-gen:
rm -r docs/en/cli/*
go run hack/docgen/gen.go
go run hack/references/generate.go
docs-build:
ifneq ($(wildcard git-page),)
@@ -157,7 +156,7 @@ e2e-setup:
helm install --create-namespace -n flux-system helm-flux http://oam.dev/catalog/helm-flux2-0.1.0.tgz
helm install kruise https://github.com/openkruise/kruise/releases/download/v0.7.0/kruise-chart.tgz
sh ./hack/e2e/modify_charts.sh
helm upgrade --install --create-namespace --namespace vela-system --set image.pullPolicy=IfNotPresent --set image.repository=vela-core-test --set applicationRevisionLimit=5 --set image.tag=$(GIT_COMMIT) --wait kubevela ./charts/vela-core
helm upgrade --install --create-namespace --namespace vela-system --set image.pullPolicy=IfNotPresent --set image.repository=vela-core-test --set applicationRevisionLimit=5 --set dependCheckWait=10s --set image.tag=$(GIT_COMMIT) --wait kubevela ./charts/vela-core
ginkgo version
ginkgo -v -r e2e/setup
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=vela-core,app.kubernetes.io/instance=kubevela -n vela-system --timeout=600s

View File

@@ -14,27 +14,21 @@
# KubeVela
*Developers simply want to deploy.*
Traditional *Platform-as-a-Service (PaaS)* systems enable easy application deployments, but this happiness disappears when your application outgrows the capabilities of your platform. This is inevitable regardless of whether your PaaS is built on Kubernetes or not - the root cause is its inflexibility.
KubeVela is a modern application platform that is fully self-service, and adapts to your needs when you grow.
Leveraging Kubernetes as control plane, KubeVela itself is runtime agnostic. It allows you to deploy (and manage) containerized workloads, cloud functions, databases, or even EC2 instances with a consistent workflow.
KubeVela is a modern application platform that makes deploying and managing applications across today's hybrid, multi-cloud environments easier and faster.
## Features
**Developer Centric** - KubeVela introduces higher level API to capture a full deployment of microservices, and builds features around the application needs only. Progressive rollout and multi-cluster deployment are provided out-of-box. No infrastructure level concerns, simply deploy.
**Application Centric** - Leveraging Open Application Model (OAM), KubeVela introduces consistent yet higher level API to capture a full deployment of microservices on top of hybrid environments. Placement strategy and patch, traffic shifting and rolling update are all declared at application level. No infrastructure level concern, simply deploy.
**Self-service** - KubeVela models platform features (such as workloads, operational behaviors, and cloud services) as reusable [CUE](https://github.com/cuelang/cue) and/or [Helm](https://helm.sh/) components, and exposes them to end users as self-service building blocks. When your needs grow, these capabilities can be extended naturally in a programmable approach. No restriction, fully flexible.
**Natively Extensible** - KubeVela uses [CUE](https://github.com/cuelang/cue) to glue capabilities (e.g. workload types, operational behaviors, and cloud services) provided by runtime infrastructure and exposes them to users via a self-service API. When users' needs grow, these APIs can naturally expand in a programmable approach. No restriction, fully flexible.
**Simple yet Reliable** - KubeVela is built with Kubernetes as control plane so unlike traditional X-as-Code solutions, it never leaves configuration drift in your clusters. Also, this makes KubeVela work with any CI/CD or GitOps tools via declarative API without any integration burden.
**Runtime Agnostic** - KubeVela is built with Kubernetes as control plane but adaptable to any runtime as data-plane. It can deploy (and manage) diverse workload types such as container, cloud functions, databases, or even EC2 instances across hybrid environments. Also, this means KubeVela seamlessly works with any Kubernetes compatible CI/CD or GitOps tools via declarative API.
## Getting Started
- [Introduction](https://kubevela.io/docs)
- [Installation](https://kubevela.io/docs/install)
- [Quick start](https://kubevela.io/docs/quick-start)
- [How it works](https://kubevela.io/docs/concepts)
- [Deploy an Application](https://kubevela.io/docs/application)
## Documentation

View File

@@ -138,5 +138,29 @@ webhooks:
- UPDATE
resources:
- podspecworkloads
- clientConfig:
caBundle: Cg==
service:
name: {{ template "kubevela.name" . }}-webhook
namespace: {{ .Release.Namespace }}
path: /validating-core-oam-dev-v1beta1-applications
{{- if .Values.admissionWebhooks.patch.enabled }}
failurePolicy: Ignore
{{- else }}
failurePolicy: {{ .Values.admissionWebhooks.failurePolicy }}
{{- end }}
name: validating.core.oam.dev.v1beta1.applications
admissionReviewVersions:
- v1beta1
sideEffects: None
rules:
- apiGroups:
- core.oam.dev
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- applications
{{- end -}}
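The template above wires a validating webhook for `Application` objects to the `/validating-core-oam-dev-v1beta1-applications` path. A handler for such an endpoint could be sketched with controller-runtime as below; the decode target and the validation rule are assumptions for illustration, not KubeVela's actual handler:
```golang
package application

import (
	"context"
	"net/http"

	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
)

// ValidatingHandler validates Application create/update requests.
type ValidatingHandler struct {
	Decoder *admission.Decoder
}

// Handle decodes the incoming Application and admits or denies it.
func (h *ValidatingHandler) Handle(ctx context.Context, req admission.Request) admission.Response {
	app := &v1beta1.Application{}
	if err := h.Decoder.Decode(req, app); err != nil {
		return admission.Errored(http.StatusBadRequest, err)
	}
	// Illustrative rule: an Application must declare at least one component.
	if len(app.Spec.Components) == 0 {
		return admission.Denied("application must declare at least one component")
	}
	return admission.Allowed("")
}
```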

View File

@@ -3,7 +3,7 @@ apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "configure k8s HPA with CPU metrics for Deployment"
definition.oam.dev/description: "Automatically scale the component based on CPU usage."
name: cpuscaler
namespace: {{.Values.systemDefinitionNamespace}}
spec:
@@ -13,7 +13,7 @@ spec:
schematic:
cue:
template: |
outputs: cpuhpa: {
outputs: cpuscaler: {
apiVersion: "autoscaling/v2beta2"
kind: "HorizontalPodAutoscaler"
metadata: name: context.name

View File

@@ -3,8 +3,7 @@ apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Configures K8s ingress and service to enable web traffic for your service.
Please use route trait in cap center for advanced usage."
definition.oam.dev/description: "Enable public web traffic for the component."
name: ingress
namespace: {{.Values.systemDefinitionNamespace}}
spec:

View File

@@ -3,7 +3,7 @@ apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "Configures replicas for your service by patch replicas field."
definition.oam.dev/description: "Manually scale the component."
name: scaler
namespace: {{.Values.systemDefinitionNamespace}}
spec:

View File

@@ -3,7 +3,7 @@ apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "inject a sidecar container into your app"
definition.oam.dev/description: "Inject a sidecar container to the component."
name: sidecar
namespace: {{.Values.systemDefinitionNamespace}}
spec:

View File

@@ -107,6 +107,13 @@ spec:
args:
- "--metrics-addr=:8080"
- "--enable-leader-election"
{{ if ne .Values.logFilePath "" }}
- "--log-file-path={{ .Values.logFilePath }}"
- "--log-file-max-size={{ .Values.logFileMaxSize }}"
{{ end }}
{{ if .Values.logDebug }}
- "--log-debug=true"
{{ end }}
{{ if .Values.admissionWebhooks.enabled }}
- "--use-webhook=true"
- "--webhook-port={{ .Values.webhookService.port }}"

View File

@@ -79,8 +79,21 @@ admissionWebhooks:
enabled: false
#If non-empty, write log files in this path
logFilePath: ""
#Defines the maximum size a log file can grow to. Unit is megabytes.
#If the value is 0, the maximum file size is unlimited.
logFileMaxSize: 1024
systemDefinitionNamespace: vela-system
applicationRevisionLimit: 10
definitionRevisionLimit: 20
# concurrentReconciles is the concurrent reconcile number of the controller
concurrentReconciles: 4
# dependCheckWait is the time to wait for ApplicationConfiguration's dependent-resource ready
dependCheckWait: 30s
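These two values back the new `--concurrent-reconciles` and `--depend-check-wait` flags visible in the `main.go` diff below. For context, a concurrency setting like this is typically passed to controller-runtime when the controller is built; a hedged sketch (the package and function names are illustrative):
```golang
package application

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	"github.com/oam-dev/kubevela/apis/core.oam.dev/v1beta1"
)

// Setup builds the Application controller with the configured parallelism.
func Setup(mgr ctrl.Manager, concurrentReconciles int, r reconcile.Reconciler) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&v1beta1.Application{}).
		WithOptions(controller.Options{MaxConcurrentReconciles: concurrentReconciles}).
		Complete(r)
}
```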

View File

@@ -27,14 +27,12 @@ import (
"strings"
"time"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"go.uber.org/zap/zapcore"
"gopkg.in/natefinch/lumberjack.v2"
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
standardcontroller "github.com/oam-dev/kubevela/pkg/controller"
commonconfig "github.com/oam-dev/kubevela/pkg/controller/common"
oamcontroller "github.com/oam-dev/kubevela/pkg/controller/core.oam.dev"
oamv1alpha2 "github.com/oam-dev/kubevela/pkg/controller/core.oam.dev/v1alpha2"
"github.com/oam-dev/kubevela/pkg/controller/utils"
@@ -53,7 +51,6 @@ const (
)
var (
setupLog = ctrl.Log.WithName(kubevelaName)
scheme = common.Scheme
waitSecretTimeout = 90 * time.Second
waitSecretInterval = 2 * time.Second
@@ -61,8 +58,8 @@ var (
func main() {
var metricsAddr, logFilePath, leaderElectionNamespace string
var enableLeaderElection, logCompress, logDebug bool
var logRetainDate int
var enableLeaderElection, logDebug bool
var logFileMaxSize uint64
var certDir string
var webhookPort int
var useWebhook bool
@@ -82,8 +79,7 @@ func main() {
flag.StringVar(&leaderElectionNamespace, "leader-election-namespace", "",
"Determines the namespace in which the leader election configmap will be created.")
flag.StringVar(&logFilePath, "log-file-path", "", "The file to write logs to.")
flag.IntVar(&logRetainDate, "log-retain-date", 7, "The number of days of logs history to retain.")
flag.BoolVar(&logCompress, "log-compress", true, "Enable compression on the rotated logs.")
flag.Uint64Var(&logFileMaxSize, "log-file-max-size", 1024, "Defines the maximum size a log file can grow to, Unit is megabytes.")
flag.BoolVar(&logDebug, "log-debug", false, "Enable debug logs for development purpose")
flag.IntVar(&controllerArgs.RevisionLimit, "revision-limit", 50,
"RevisionLimit is the maximum number of revisions that will be maintained. The default value is 50.")
@@ -103,29 +99,26 @@ func main() {
flag.DurationVar(&syncPeriod, "informer-re-sync-interval", 60*time.Minute,
"controller shared informer lister full re-sync period")
flag.StringVar(&oam.SystemDefinitonNamespace, "system-definition-namespace", "vela-system", "define the namespace of the system-level definition")
flag.Parse()
flag.IntVar(&controllerArgs.ConcurrentReconciles, "concurrent-reconciles", 4, "concurrent-reconciles is the concurrent reconcile number of the controller. The default value is 4")
flag.DurationVar(&controllerArgs.DependCheckWait, "depend-check-wait", 30*time.Second, "depend-check-wait is the time to wait for ApplicationConfiguration's dependent-resource ready."+
"The default value is 30s, which means if dependent resources were not prepared, the ApplicationConfiguration would be reconciled after 30s.")
flag.Parse()
// setup logging
var w io.Writer
if len(logFilePath) > 0 {
w = zapcore.AddSync(&lumberjack.Logger{
Filename: logFilePath,
MaxAge: logRetainDate, // days
Compress: logCompress,
})
} else {
w = os.Stdout
klog.InitFlags(nil)
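// InitFlags above registers klog's flags (v, log_file, logtostderr, ...) on the
// global FlagSet; the flag.Set calls below use them to raise verbosity and redirect output.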
if logDebug {
_ = flag.Set("v", strconv.Itoa(int(commonconfig.LogDebug)))
}
logger := zap.New(func(o *zap.Options) {
o.Development = logDebug
o.DestWritter = w
})
ctrl.SetLogger(logger)
if logFilePath != "" {
_ = flag.Set("logtostderr", "false")
_ = flag.Set("log_file", logFilePath)
_ = flag.Set("log_file_max_size", strconv.FormatUint(logFileMaxSize, 10))
}
setupLog.Info(fmt.Sprintf("KubeVela Version: %s, GIT Revision: %s.", version.VelaVersion, version.GitRevision))
setupLog.Info(fmt.Sprintf("Disable Capabilities: %s.", disableCaps))
setupLog.Info(fmt.Sprintf("core init with definition namespace %s", oam.SystemDefinitonNamespace))
klog.InfoS("KubeVela information", "version", version.VelaVersion, "revision", version.GitRevision)
klog.InfoS("Disable capabilities", "name", disableCaps)
klog.InfoS("Vela-Core init", "definition namespace", oam.SystemDefinitonNamespace)
restConfig := ctrl.GetConfigOrDie()
restConfig.UserAgent = kubevelaName + "/" + version.GitRevision
@@ -142,46 +135,46 @@ func main() {
SyncPeriod: &syncPeriod,
})
if err != nil {
setupLog.Error(err, "unable to create a controller manager")
klog.ErrorS(err, "Unable to create a controller manager")
os.Exit(1)
}
if err := registerHealthChecks(mgr); err != nil {
setupLog.Error(err, "unable to register ready/health checks")
klog.ErrorS(err, "Unable to register ready/health checks")
os.Exit(1)
}
if err := utils.CheckDisabledCapabilities(disableCaps); err != nil {
setupLog.Error(err, "unable to get enabled capabilities")
klog.ErrorS(err, "Unable to get enabled capabilities")
os.Exit(1)
}
switch strings.ToLower(applyOnceOnly) {
case "", "false", string(oamcontroller.ApplyOnceOnlyOff):
controllerArgs.ApplyMode = oamcontroller.ApplyOnceOnlyOff
setupLog.Info("ApplyOnceOnly is disabled")
klog.Info("ApplyOnceOnly is disabled")
case "true", string(oamcontroller.ApplyOnceOnlyOn):
controllerArgs.ApplyMode = oamcontroller.ApplyOnceOnlyOn
setupLog.Info("ApplyOnceOnly is enabled, that means workload or trait only apply once if no spec change even they are changed by others")
klog.Info("ApplyOnceOnly is enabled, that means workload or trait only apply once if no spec change even they are changed by others")
case string(oamcontroller.ApplyOnceOnlyForce):
controllerArgs.ApplyMode = oamcontroller.ApplyOnceOnlyForce
setupLog.Info("ApplyOnceOnlyForce is enabled, that means workload or trait only apply once if no spec change even they are changed or deleted by others")
klog.Info("ApplyOnceOnlyForce is enabled, that means workload or trait only apply once if no spec change even they are changed or deleted by others")
default:
setupLog.Error(fmt.Errorf("invalid apply-once-only value: %s", applyOnceOnly),
"unable to setup the vela core controller",
"valid apply-once-only value:", "on/off/force, by default it's off")
klog.ErrorS(fmt.Errorf("invalid apply-once-only value: %s", applyOnceOnly),
"Unable to setup the vela core controller",
"apply-once-only", "on/off/force, by default it's off")
os.Exit(1)
}
dm, err := discoverymapper.New(mgr.GetConfig())
if err != nil {
setupLog.Error(err, "failed to create CRD discovery client")
klog.ErrorS(err, "Failed to create CRD discovery client")
os.Exit(1)
}
controllerArgs.DiscoveryMapper = dm
pd, err := definition.NewPackageDiscover(mgr.GetConfig())
if err != nil {
setupLog.Error(err, "failed to create CRD discovery for CUE package client")
klog.Error(err, "Failed to create CRD discovery for CUE package client")
if !definition.IsCUEParseErr(err) {
os.Exit(1)
}
@@ -189,46 +182,49 @@ func main() {
controllerArgs.PackageDiscover = pd
if useWebhook {
setupLog.Info("vela webhook enabled, will serving at :" + strconv.Itoa(webhookPort))
klog.InfoS("Enable webhook", "server port", strconv.Itoa(webhookPort))
oamwebhook.Register(mgr, controllerArgs)
velawebhook.Register(mgr, disableCaps)
if err := waitWebhookSecretVolume(certDir, waitSecretTimeout, waitSecretInterval); err != nil {
setupLog.Error(err, "unable to get webhook secret")
klog.ErrorS(err, "Unable to get webhook secret")
os.Exit(1)
}
}
if err = oamv1alpha2.Setup(mgr, controllerArgs, logging.NewLogrLogger(setupLog)); err != nil {
setupLog.Error(err, "unable to setup the oam core controller")
if err = oamv1alpha2.Setup(mgr, controllerArgs); err != nil {
klog.ErrorS(err, "Unable to setup the oam core controller")
os.Exit(1)
}
if err = standardcontroller.Setup(mgr, disableCaps); err != nil {
setupLog.Error(err, "unable to setup the vela core controller")
klog.ErrorS(err, "Unable to setup the vela core controller")
os.Exit(1)
}
if driver := os.Getenv(system.StorageDriverEnv); len(driver) == 0 {
// first use system environment,
err := os.Setenv(system.StorageDriverEnv, storageDriver)
if err != nil {
setupLog.Error(err, "unable to setup the vela core controller")
klog.ErrorS(err, "Unable to setup the vela core controller")
os.Exit(1)
}
}
setupLog.Info("use storage driver", "storageDriver", os.Getenv(system.StorageDriverEnv))
klog.InfoS("Use storage driver", "storageDriver", os.Getenv(system.StorageDriverEnv))
setupLog.Info("starting the vela controller manager")
klog.Info("Start the vela controller manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
klog.ErrorS(err, "Failed to run manager")
os.Exit(1)
}
setupLog.Info("program safely stops...")
if logFilePath != "" {
klog.Flush()
}
klog.Info("Safely stops Program...")
}
// registerHealthChecks is used to create readiness&liveness probes
func registerHealthChecks(mgr ctrl.Manager) error {
setupLog.Info("creating readiness/health check")
klog.Info("Create readiness/health check")
if err := mgr.AddReadyzCheck("ping", healthz.Ping); err != nil {
return err
}
@@ -247,8 +243,8 @@ func waitWebhookSecretVolume(certDir string, timeout, interval time.Duration) er
if time.Since(start) > timeout {
return fmt.Errorf("getting webhook secret timeout after %s", timeout.String())
}
setupLog.Info(fmt.Sprintf("waiting webhook secret, time consumed: %d/%d seconds ...",
int64(time.Since(start).Seconds()), int64(timeout.Seconds())))
klog.InfoS("Wait webhook secret", "time consumed(second)", int64(time.Since(start).Seconds()),
"timeout(second)", int64(timeout.Seconds()))
if _, err := os.Stat(certDir); !os.IsNotExist(err) {
ready := func() bool {
f, err := os.Open(filepath.Clean(certDir))
@@ -270,8 +266,8 @@ func waitWebhookSecretVolume(certDir string, timeout, interval time.Duration) er
return nil
})
if err == nil {
setupLog.Info(fmt.Sprintf("webhook secret is ready (time consumed: %d seconds)",
int64(time.Since(start).Seconds())))
klog.InfoS("Webhook secret is ready", "time consumed(second)",
int64(time.Since(start).Seconds()))
return true
}
return false

View File

@@ -4,7 +4,7 @@
# KubeVela
KubeVela is a modern application platform that is fully self-service, and adapts to your needs when you grow.
KubeVela is a modern application platform that makes deploying and managing applications across today's hybrid, multi-cloud environments easier and faster.
## Community

View File

@@ -1,5 +1,5 @@
---
title: Advanced Topics for Installation
title: Other Install Topics
---
## Install KubeVela with cert-manager

View File

@@ -1,149 +0,0 @@
---
title: Deploy Application
---
This documentation will walk through a full application deployment workflow on the KubeVela platform.
## Introduction
KubeVela is a fully self-service platform. All capabilities an application deployment needs are maintained as building block modules in this platform. Specifically:
- Components - deployable/provisionable entities that compose your application deployment
- e.g. a Kubernetes workload, a MySQL database, or an AWS OSS bucket
- Traits - attachable operational features per your needs
- e.g. autoscaling rules, rollout strategies, ingress rules, sidecars, security policies etc
## Step 1: Check Capabilities in the Platform
As a user of this platform, you can check the available components you can deploy and the available traits you can attach.
```console
$ kubectl get componentdefinitions -n vela-system
NAME WORKLOAD-KIND DESCRIPTION AGE
task Job Describes jobs that run code or a script to completion. 5h52m
webservice Deployment Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers. 5h52m
worker Deployment Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic. 5h52m
```
```console
$ kubectl get traitdefinitions -n vela-system
NAME APPLIES-TO DESCRIPTION AGE
ingress ["webservice","worker"] Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage. 6h8m
cpuscaler ["webservice","worker"] Configure k8s HPA with CPU metrics for Deployment 6h8m
```
To show the specification for a given capability, you can use the `vela` CLI. For example, `vela show webservice` will return the full schema of the *Web Service* component, and `vela show webservice --web` will open its capability reference documentation in your browser.
## Step 2: Design and Deploy Application
In KubeVela, `Application` is the main API to define your application deployment based on available capabilities. Every `Application` could contain multiple components, each of them can be attached with a number of traits per needs.
Now let's define an application composed of *Web Service* and *Worker* components.
```yaml
# sample.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend
type: webservice
properties:
image: nginx
traits:
- type: cpuscaler
properties:
min: 1
max: 10
cpuPercent: 60
- type: sidecar
properties:
name: "sidecar-test"
image: "fluentd"
- name: backend
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000'
```
In this sample, we also attached `sidecar` and `cpuscaler` traits to the `frontend` component.
So after deployment, the `frontend` component instance (a Kubernetes Deployment workload) will automatically be injected
with a `fluentd` sidecar and scale from 1 to 10 replicas based on CPU usage.
### Deploy the Application
Apply application YAML to Kubernetes:
```shell
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/enduser/sample.yaml
application.core.oam.dev/website created
```
You'll see the application become `running`.
```shell
$ kubectl get application website -o yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
....
status:
components:
- apiVersion: core.oam.dev/v1alpha2
kind: Component
name: backend
- apiVersion: core.oam.dev/v1alpha2
kind: Component
name: frontend
....
status: running
```
### Verify the Deployment
You can see that a Deployment named `frontend` is running, with its port exposed and a `fluentd` container injected.
```shell
$ kubectl get deploy frontend
NAME READY UP-TO-DATE AVAILABLE AGE
frontend 1/1 1 1 97s
```
```shell
$ kubectl get deploy frontend -o yaml
...
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: frontend
ports:
- containerPort: 80
protocol: TCP
- image: fluentd
imagePullPolicy: Always
name: sidecar-test
...
```
Another Deployment, named `backend`, is also running.
```shell
$ kubectl get deploy backend
NAME READY UP-TO-DATE AVAILABLE AGE
backend 1/1 1 1 111s
```
An HPA was also created by the `cpuscaler` trait.
```shell
$ kubectl get HorizontalPodAutoscaler frontend
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
frontend Deployment/frontend <unknown>/50% 1 10 1 101m
```

View File

@@ -0,0 +1,35 @@
## vela system cue-packages
List cue package
### Synopsis
List cue package
```
vela system cue-packages
```
### Examples
```
vela system cue-packages
```
### Options
```
-h, --help help for cue-packages
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system.md) - System management utilities
###### Auto generated by spf13/cobra on 2-May-2021

View File

@@ -0,0 +1,39 @@
## vela system live-diff
Dry-run an application, and do diff on a specific app revision
### Synopsis
Dry-run an application, and do diff on a specific app revision. The provided capability definitions will be used during the dry-run. If any capabilities used in the app are not found in the provided ones, it will try to find them in the cluster.
```
vela system live-diff
```
### Examples
```
vela live-diff -f app-v2.yaml -r app-v1 --context 10
```
### Options
```
-r, --Revision string specify an application Revision name, by default, it will compare with the latest Revision
-c, --context int output number lines of context around changes, by default show all unchanged lines (default -1)
-d, --definition string specify a file or directory containing capability definitions, they will only be used in dry-run rather than applied to K8s cluster
-f, --file string application file name (default "./app.yaml")
-h, --help help for live-diff
```
### Options inherited from parent commands
```
-e, --env string specify environment name for application
```
### SEE ALSO
* [vela system](vela_system.md) - System management utilities
###### Auto generated by spf13/cobra on 2-May-2021

View File

@@ -4,110 +4,44 @@ title: How it Works
In this documentation, we will explain the core idea of KubeVela and clarify some technical terms that are widely used in the project.
## Overview
## API
First of all, KubeVela introduces a workflow with separation of concerns as below:
On the control plane, KubeVela introduces [Open Application Model (OAM)](https://oam.dev) as the main API to model a full deployment of a modern microservices application. This API can be explained as below:
![alt](resources/concepts.png)
In detail:
- Components - deployable/provisionable entities that compose your application.
- e.g. a Helm chart, a Kubernetes workload, a Terraform module, or a cloud database instance etc.
- Traits - attachable features that will *overlay* given component with operational behaviors.
- e.g. autoscaling rules, rollout strategies, ingress rules, sidecars, security policies etc.
- Application - full description of your application deployment assembled with components and traits.
- Environment - the target environments to deploy this application.
We also reference components and traits as *"capabilities"* in KubeVela.
## Workflow
To ensure a simple yet consistent user experience across hybrid environments, KubeVela introduces a workflow with separation of concerns as below:
- **Platform Team**
- Model and manage platform capabilities as components or traits, together with deployment environment configurations.
- **End Users**
- Choose a deployment environment, assemble the app with available components and traits per needs, and then deploy the app to target environment.
- Model and manage platform capabilities as components or traits, together with target environment specifications.
- **Application Team**
- Choose an environment, assemble the application with components and traits per needs, and deploy it to the target environment.
> Note that both the platform team and the application team only talk to the control plane cluster. KubeVela is designed to hide the details of runtime infrastructures except for debugging or verification purposes.
Below is what this workflow looks like:
![alt](resources/how-it-works.png)
This design makes it possible for the platform team to enforce best practices by *coding* platform capabilities into reusable building blocks, and to leverage them to expose a *PaaS-like* experience (*i.e. app-centric abstractions, self-service workflows, etc.*) to end users.
Also, as programmable components, all capabilities in KubeVela can be updated or extended easily per your needs at any time.
![alt](resources/what-is-kubevela.png)
In the model layer, KubeVela leverages [Open Application Model (OAM)](https://oam.dev) to make the above design happen.
## `Application`
The *Application* is the core API of KubeVela. It allows developers to work with a single artifact to capture the complete application deployment with simplified primitives.
In an application delivery platform, having an "application" concept is important to simplify administrative tasks, and it can serve as an anchor to avoid configuration drift during operation. Also, it provides a much simpler path for onboarding Kubernetes capabilities to end users without relying on low level details. For example, a developer will be able to model a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim the auto-scaling requirements without referring to the underlying KEDA ScaleObject.
### Example
An example of `website` application with two components (i.e. `frontend` and `backend`) could be modeled as below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: backend
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000'
- name: frontend
type: webservice
properties:
image: nginx
traits:
- type: autoscaler
properties:
min: 1
max: 10
- type: sidecar
properties:
name: "sidecar-test"
image: "fluentd"
```
The `Application` resource in KubeVela is a LEGO-style entity and does not even have a fixed schema. Instead, it is composed of several building blocks (app components, traits, etc.) that give you full flexibility to model platform capabilities around the application's needs.
These building blocks are named `ComponentDefinition` and `TraitDefinition`.
### `ComponentDefinition`
`ComponentDefinition` is an object that models a deployable entity in your platform, for example, a *Long Running Web Service*, a *Helm chart*, or an *Alibaba Cloud RDS*. A typical `ComponentDefinition` carries the workload type description (i.e. `WorkloadDefinition`) of this component and the configurable parameter list this component exposes to users.
Hence, components are designed to be shareable and reusable. For example, by referencing the same *Alibaba Cloud RDS* component and setting different parameter values, users could easily provision Alibaba Cloud RDS instances of different sizes in different availability zones.
Users will use the `Application` entity to declare how they want to instantiate and deploy certain component definitions. Specifically, the `.type` field references the name of a `ComponentDefinition` and `.properties` are user-provided parameter values to instantiate it.
All component definitions are expected to be provided by component providers such as third-party software vendors, or pre-installed in the system by the platform team.
### `TraitDefinition`
Optionally, each component has a `.traits` section that augments the component instance with operational behaviors such as load balancing policy, network ingress routing, auto-scaling policies, or upgrade strategies, etc.
Traits are operational features provided by the platform. To attach a trait to a component instance, the user declares the `.type` field to reference the specific `TraitDefinition` and the `.properties` field to set property values of the given trait. Similarly, `TraitDefinition` also allows you to define a *template* for operational features.
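To make the `.type`/`.properties` contract concrete, the building blocks can be pictured roughly as the Go structs below — a simplified sketch for illustration, not KubeVela's exact API types:
```golang
package sketch

import "k8s.io/apimachinery/pkg/runtime"

// ApplicationComponent instantiates a ComponentDefinition.
type ApplicationComponent struct {
	Name       string               // instance name within the application
	Type       string               // references a ComponentDefinition by name
	Properties runtime.RawExtension // user-provided parameter values
	Traits     []ApplicationTrait   // operational features attached to it
}

// ApplicationTrait instantiates a TraitDefinition.
type ApplicationTrait struct {
	Type       string               // references a TraitDefinition by name
	Properties runtime.RawExtension // property values for the trait
}
```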
We also reference component definitions and trait definitions as *"capabilities"* in KubeVela.
### Summary
These main concepts of KubeVela could be shown as below:
![alt](resources/concepts.png)
Essentially:
- Components - deployable/provisionable entities that compose your application
- e.g. a Helm chart, a stateless workload, a MySQL database, or an AWS S3 bucket
- Traits - attachable operational features per your needs
- e.g. autoscaling rules, rollout strategies, ingress rules, sidecars, security policies etc
- Application - full description of your application deployment assembled with components and traits
- Environment - the target environments to deploy this application
All the capability building blocks can be updated or extended easily at any time since they are fully programmable via CUE.
## Environment
Before releasing an application to production, it's important to test the code in testing/staging workspaces. In KubeVela, we describe these workspaces as "deployment environments" or "environments" for short. Each environment has its own configuration (e.g., domain, Kubernetes cluster and namespace, configuration data, access control policy, etc.) to allow users to create different deployment environments such as "test" and "production".
Before releasing an application to production, it's important to test the code in testing/staging workspaces. In KubeVela, we describe these workspaces as "environments". Each environment has its own configuration (e.g., domain, Kubernetes cluster and namespace, configuration data, access control policy, etc.) to allow users to create different deployment environments such as "test" and "production".
Currently, a KubeVela `environment` only maps to a Kubernetes namespace, while the cluster-level environment is a work in progress.
## What's Next
## Architecture
Here are some recommended next steps:
The overall architecture of KubeVela is shown as below:
![alt](resources/arch.png)
In a nutshell, in the *control plane cluster*, the application controller is responsible for application deployment orchestration and the placement controller is responsible for deploying the application across multiple *runtime clusters*, with traffic shifting features supported out of the box. The needed addons in runtime clusters are automatically discovered and installed by leveraging [CRD Lifecycle Management (CLM)](https://github.com/cloudnativeapp/CLM).
- Learn KubeVela's [core concepts](./concepts)
- Learn how to [deploy an application](end-user/application) in detail and understand how it works.

View File

@@ -55,7 +55,7 @@ Introduce how to automatically scale workloads by cron.
timezone: "America/Los_Angeles"
```
> The full specification of `autoscale` could show up by `$ vela show autoscale` or be found on [its reference documentation](../references/traits/autoscale)
> The full specification of `autoscale` can be shown with `$ vela show autoscale`.
2. Deploy an application

View File

@@ -55,7 +55,7 @@ The app will emit random latencies as metrics.
EOF
```
> The full specification of `metrics` could show up by `$ vela show metrics` or be found on [its reference documentation](../references/traits/metrics)
> The full specification of `metrics` can be shown with `$ vela show metrics`.
2. Deploy the application:

View File

@@ -26,7 +26,7 @@ services:
domain: "example.com"
```
> The full specification of `rollout` could show up by `$ vela show rollout` or be found on [its reference documentation](../references/traits/rollout)
> The full specification of `rollout` can be shown with `$ vela show rollout`.
Apply this `appfile.yaml`:

View File

@@ -42,7 +42,7 @@ services:
rewriteTarget: /
```
> The full specification of `route` could show up by `$ vela show route` or be found on [its reference documentation](../references/traits/route)
> The full specification of `route` can be shown with `$ vela show route`.
Apply again:

View File

@@ -72,7 +72,7 @@ services:
```
> To learn about how to set the properties of specific workload type or trait, please check the [reference documentation guide](./check-ref-doc).
> To learn about how to set the properties of a specific workload type or trait, please use `vela show <TYPE | TRAIT>`.
## Example Workflow
@@ -221,7 +221,7 @@ spec:
### [Optional] Configure another workload type
By now we have deployed a *[Web Service](references/component-types/webservice)*, which is the default workload type in KubeVela. We can also add another service of *[Task](references/component-types/task)* type in the same app:
By now we have deployed a *[Web Service](../end-user/components/webservice)*, which is the default workload type in KubeVela. We can also add another service of *[Task](../end-user/components/task)* type in the same app:
```yaml
services:

View File

@@ -1,113 +0,0 @@
---
title: Overview
---
In this documentation, we will show how to check the detailed schema of a given capability (i.e. component type or trait).
This may sound challenging because every capability is a "plug-in" in KubeVela (even the built-in ones); also, by design, KubeVela allows platform administrators to modify the capability templates at any time. In this case, do we need to manually write documentation for every newly installed capability? And how can we ensure the documentation for the system stays up to date?
## Using Browser
Actually, as an important part of its "extensibility" design, KubeVela will always **automatically generate** reference documentation for every workload type or trait registered in your Kubernetes cluster, based on the template in its definition. This feature works for any capability: either built-in ones or your own workload types/traits.
Thus, as an end user, the only thing you need to do is:
```console
$ vela show COMPONENT_TYPE or TRAIT --web
```
This command will automatically open the reference documentation for the given component type or trait in your default browser.
Let's take `$ vela show webservice --web` as an example. The detailed schema documentation for the `Web Service` component type will show up immediately, as below:
![](../../resources/vela_show_webservice.jpg)
Note that in the section named `Specification`, it even provides you with a full sample for the usage of this workload type, with a fake name `my-service-name`.
Similarly, we can also do `$ vela show autoscale`:
![](../../resources/vela_show_autoscale.jpg)
With this auto-generated reference documentation, we can easily complete the application description by simple copy-and-paste, for example:
```yaml
name: helloworld
services:
backend: # copy-paste from the webservice ref doc above
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
cpu: "0.1"
autoscale: # copy-paste and modify from autoscaler ref doc above
min: 1
max: 8
cron:
startAt: "19:00"
duration: "2h"
days: "Friday"
replicas: 4
timezone: "America/Los_Angeles"
```
## Using Terminal
This reference doc feature also works for terminal-only case. For example:
```shell
$ vela show webservice
# Properties
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| env | Define arguments by using environment variables | [[]env](#env) | false | |
| image | Which image would you like to use for your service | string | true | |
| port | Which port do you want customer traffic sent to | int | true | 80 |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
+-------+----------------------------------------------------------------------------------+---------------+----------+---------+
## env
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
| name | Environment variable name | string | true | |
| value | The value of the environment variable | string | false | |
| valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false | |
+-----------+-----------------------------------------------------------+-------------------------+----------+---------+
### valueFrom
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
| secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true | |
+--------------+--------------------------------------------------+-------------------------------+----------+---------+
#### secretKeyRef
+------+------------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+------+------------------------------------------------------------------+--------+----------+---------+
| name | The name of the secret in the pod's namespace to select from | string | true | |
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
+------+------------------------------------------------------------------+--------+----------+---------+
```
## For Built-in Capabilities
Note that for all the built-in capabilities, we have already published their reference docs below based on the same doc generation mechanism.
- Workload Types
- [webservice](component-types/webservice)
- [task](component-types/task)
- [worker](component-types/worker)
- Traits
- [route](traits/route)
- [autoscale](traits/autoscale)
- [rollout](traits/rollout)
- [metrics](traits/metrics)
- [scaler](traits/scaler)

View File

@@ -1,27 +0,0 @@
---
title: Task
---
## Description
Describes jobs that run code or a script to completion.
## Specification
List of all configuration options for a `Task` workload type.
```yaml
...
image: perl
count: 10
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
cmd | Commands to run in the container | []string | false |
count | specify number of tasks to run in parallel | int | true | 1
restart | Define the job restart policy, the value can only be Never or OnFailure. By default, it's Never. | string | true | Never
image | Which image would you like to use for your service | string | true |

View File

@@ -1,61 +0,0 @@
---
title: Webservice
---
## Description
Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers. If workload type is skipped for any service defined in Appfile, it will be defaulted to `webservice` type.
## Specification
List of all configuration options for a `Webservice` workload type.
```yaml
...
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
cpu: "0.1"
env:
- name: FOO
value: bar
- name: FOO
valueFrom:
secretKeyRef:
name: bar
key: bar
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
cmd | Commands to run in the container | []string | false |
env | Define arguments by using environment variables | [[]env](#env) | false |
image | Which image would you like to use for your service | string | true |
port | Which port do you want customer traffic sent to | int | true | 80
cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false |
### env
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
name | Environment variable name | string | true |
value | The value of the environment variable | string | false |
valueFrom | Specifies a source the value of this var should come from | [valueFrom](#valueFrom) | false |
#### valueFrom
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
secretKeyRef | Selects a key of a secret in the pod's namespace | [secretKeyRef](#secretKeyRef) | true |
##### secretKeyRef
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
name | The name of the secret in the pod's namespace to select from | string | true |
key | The key of the secret to select from. Must be a valid secret key | string | true |

View File

@@ -1,24 +0,0 @@
---
title: Worker
---
## Description
Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic.
## Specification
List of all configuration options for a `Worker` workload type.
```yaml
...
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
cmd | Commands to run in the container | []string | false |
image | Which image would you like to use for your service | string | true |

View File

@@ -1,44 +0,0 @@
---
title: Autoscale
---
## Description
Automatically scales workloads by resource utilization metrics or cron triggers.
## Specification
List of all configuration options for a `Autoscale` trait.
```yaml
...
min: 1
max: 4
cron:
startAt: "14:00"
duration: "2h"
days: "Monday, Thursday"
replicas: 2
timezone: "America/Los_Angeles"
cpuPercent: 10
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
min | Minimal replicas of the workload | int | true |
max | Maximal replicas of the workload | int | true |
cpuPercent | Specify the value for CPU utilization, like 80, which means 80% | int | false |
cron | Cron type auto-scaling. Just for `appfile`, not available for Cli usage | [cron](#cron) | false |
### cron
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
startAt | The time to start scaling, like `08:00` | string | true |
duration | For how long the scaling will last | string | true |
days | Several workdays or weekends, like "Monday, Tuesday" | string | true |
replicas | The target replicas to be scaled to | int | true |
timezone | Timezone, like "America/Los_Angeles" | string | true |

View File

@@ -1,25 +0,0 @@
---
title: Ingress
---
## Description
Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage.
## Specification
List of all configuration options for a `Ingress` trait.
```yaml
...
domain: testsvc.example.com
http:
/: 8000
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
domain | | string | true |
http | | map[string]int | true |

View File

@@ -1,23 +0,0 @@
---
title: Scaler
---
## Description
Configures replicas for your service.
## Specification
List of all configuration options for a `Scaler` trait.
```yaml
...
scaler:
replicas: 100
```
## Properties
Name | Description | Type | Required | Default
------------ | ------------- | ------------- | ------------- | -------------
replicas | Replicas of the workload | int | true | 1

View File

@@ -0,0 +1,220 @@
---
title: Application
---
This documentation will walk through how to use KubeVela to design a simple application without any policies or placement rules defined.
> Note: since you didn't declare a placement rule, KubeVela will deploy this application directly to the control plane cluster (i.e. the cluster your `kubectl` is talking to). This is also the case if you are using a local cluster such as KinD or MiniKube to try out KubeVela.
## Step 1: Check Available Components
Components are deployable or provisionable entities that compose your application. A component could be a Helm chart, a simple Kubernetes workload, a CUE or Terraform module, or a cloud database, etc.
Let's check the available components in a fresh KubeVela installation.
```shell
kubectl get comp -n vela-system
NAME WORKLOAD-KIND DESCRIPTION
task Job Describes jobs that run code or a script to completion.
webservice Deployment Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
worker Deployment Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic.
```
To show the specification for a given component, you can use `vela show`.
```shell
$ kubectl vela show webservice
# Properties
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| env | Define arguments by using environment variables | [[]env](#env) | false | |
| addRevisionLabel | | bool | true | false |
| image | Which image would you like to use for your service | string | true | |
| port | Which port do you want customer traffic sent to | int | true | 80 |
| cpu | Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core) | string | false | |
| volumes | Declare volumes and volumeMounts | [[]volumes](#volumes) | false | |
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
... // skip other fields
```
> Tip: `vela show xxx --web` will open its capability reference documentation in your default browser.
You could always [add more components](components/more) to the platform at any time.
## Step 2: Declare an Application
Application is the full description of a deployment. Let's define an application that deploys a *Web Service* component and a *Worker* component.
```yaml
# sample.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend
type: webservice
properties:
image: nginx
- name: backend
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000'
```
## Step 3: Attach Traits
Traits are platform provided features that could *overlay* a given component with extra operational behaviors.
```shell
$ kubectl get trait -n vela-system
NAME APPLIES-TO DESCRIPTION
cpuscaler [webservice worker] Automatically scale the component based on CPU usage.
ingress [webservice worker] Enable public web traffic for the component.
scaler [webservice worker] Manually scale the component.
sidecar [webservice worker] Inject a sidecar container to the component.
```
Let's check the specification of `sidecar` trait.
```shell
$ kubectl vela show sidecar
# Properties
+---------+-----------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------+-----------------------------------------+----------+----------+---------+
| name | Specify the name of sidecar container | string | true | |
| image | Specify the image of sidecar container | string | true | |
| command | Specify the commands run in the sidecar | []string | false | |
+---------+-----------------------------------------+----------+----------+---------+
```
Note that traits are designed to be *overlays*.
This means for the `sidecar` trait, your `frontend` component doesn't need to have a sidecar template or bring a webhook to enable sidecar injection. Instead, KubeVela is able to patch a sidecar to its workload instance after it is generated by the component (whether it's a Helm chart or a CUE module) but before it is applied to the runtime cluster.
Similarly, for the `cpuscaler` trait, the system will assign an HPA instance based on the properties you set and "link" it to the target workload instance; the component itself is untouched.
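To make the overlay idea concrete, below is a simplified sketch of how a patch-style trait such as `sidecar` could be defined. It mirrors the `TraitDefinition` format shown in the service binding section later in this document; it is illustrative only, not the exact built-in definition.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: sidecar
spec:
  appliesToWorkloads:
    - webservice
    - worker
  schematic:
    cue:
      template: |
        // the patch is merged into the workload rendered by the component
        patch: spec: template: spec: {
          // +patchKey=name
          containers: [{
            name:  parameter.name
            image: parameter.image
          }]
        }
        parameter: {
          // +usage=Name of the sidecar container
          name: string
          // +usage=Image of the sidecar container
          image: string
        }
```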
Now let's attach `sidecar` and `cpuscaler` traits to the `frontend` component.
```yaml
# sample.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend # This is the component I want to deploy
type: webservice
properties:
image: nginx
traits:
- type: cpuscaler # Automatically scale the component by CPU usage after deployed
properties:
min: 1
max: 10
cpuPercent: 60
- type: sidecar # Inject a fluentd sidecar before applying the component to runtime cluster
properties:
name: "sidecar-test"
image: "fluentd"
- name: backend
type: worker
properties:
image: busybox
cmd:
- sleep
- '1000'
```
## Step 4: Deploy the Application
```shell
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/enduser/sample.yaml
application.core.oam.dev/website created
```
You'll see the application become `running`.
```shell
$ kubectl get application
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
website frontend webservice running true 4m54s
```
Check the details of the application.
```shell
$ kubectl get app website -o yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
generation: 1
name: website
namespace: default
spec:
components:
- name: frontend
properties:
image: nginx
traits:
- properties:
cpuPercent: 60
max: 10
min: 1
type: cpuscaler
- properties:
image: fluentd
name: sidecar-test
type: sidecar
type: webservice
- name: backend
properties:
cmd:
- sleep
- "1000"
image: busybox
type: worker
status:
...
latestRevision:
name: website-v1
revision: 1
revisionHash: e9e062e2cddfe5fb
services:
- healthy: true
name: frontend
traits:
- healthy: true
type: cpuscaler
- healthy: true
type: sidecar
- healthy: true
name: backend
status: running
```
Specifically:
1. `status.latestRevision` declares the current revision of this deployment.
2. `status.services` declares the components created by this deployment and their health state.
3. `status.status` declares the global state of this deployment.
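For example, you can read the global state directly with a standard `kubectl` jsonpath query:
```shell
$ kubectl get app website -o jsonpath='{.status.status}'
running
```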
### List Revisions
When updating an application entity, KubeVela will create a new revision for this change.
```shell
$ kubectl get apprev -l app.oam.dev/name=website
NAME AGE
website-v1 35m
```
Furthermore, the system will decide how (and whether) to roll out the application based on the attached [rollout plan](scopes/rollout-plan).
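For reference, a rollout plan is declared under `spec.rolloutPlan` of the application. A minimal sketch (batch sizes are illustrative; see the rollout plan documentation for the full schema):
```yaml
spec:
  # ...components omitted
  rolloutPlan:
    rolloutStrategy: "IncreaseFirst"
    rolloutBatches:
      - replicas: 50%
      - replicas: 50%
    targetSize: 6
```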

View File

@@ -1,242 +0,0 @@
---
title: Provision and Consume Cloud Resources by Crossplane
---
> ⚠️ This section requires your platform builder has already installed the [cloud resources related capabilities](../platform-engineers/cloud-services).
## Provision and consume cloud resource in a single application v1 (one cloud resource)
Check the parameters of cloud resource component:
```shell
$ kubectl vela show alibaba-rds
# Properties
+---------------+------------------------------------------------+--------+----------+--------------------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------------+------------------------------------------------+--------+----------+--------------------+
| engine | RDS engine | string | true | mysql |
| engineVersion | The version of RDS engine | string | true | 8.0 |
| instanceClass | The instance class for the RDS | string | true | rds.mysql.c1.large |
| username | RDS username | string | true | |
| secretName | Secret name which RDS connection will write to | string | true | |
+---------------+------------------------------------------------+--------+----------+--------------------+
```
Use the service binding trait to bind cloud resource connection data into the workload as environment variables.
Create an application with a cloud resource provisioning component and a consuming component as below.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webapp
spec:
components:
- name: express-server
type: webservice
properties:
image: zzxwill/flask-web-application:v0.3.1-crossplane
ports: 80
traits:
- type: service-binding
properties:
envMappings:
# environments refer to db-conn secret
DB_PASSWORD:
secret: db-conn
key: password # 1) If the env name is different from secret key, secret key has to be set.
endpoint:
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
username:
secret: db-conn
- name: sample-db
type: alibaba-rds
properties:
name: sample-db
engine: mysql
engineVersion: "8.0"
instanceClass: rds.mysql.c1.large
username: oamtest
secretName: db-conn
```
Apply it and verify the application.
```shell
$ kubectl get application
NAME AGE
webapp 46m
$ kubectl port-forward deployment/express-server 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
Handling connection for 80
Handling connection for 80
```
![](../resources/crossplane-visit-application.jpg)
## Provision and consume cloud resource in a single application v2 (two cloud resources)
Based on the section `Provision and consume cloud resource in a single application v1 (one cloud resource)` above,
update the application to also consume the OSS cloud resource.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webapp
spec:
components:
- name: express-server
type: webservice
properties:
image: zzxwill/flask-web-application:v0.3.1-crossplane
ports: 80
traits:
- type: service-binding
properties:
envMappings:
# environments refer to db-conn secret
DB_PASSWORD:
secret: db-conn
key: password # 1) If the env name is different from secret key, secret key has to be set.
endpoint:
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
username:
secret: db-conn
# environments refer to oss-conn secret
BUCKET_NAME:
secret: oss-conn
key: Bucket
- name: sample-db
type: alibaba-rds
properties:
name: sample-db
engine: mysql
engineVersion: "8.0"
instanceClass: rds.mysql.c1.large
username: oamtest
secretName: db-conn
- name: sample-oss
type: alibaba-oss
properties:
name: velaweb
secretName: oss-conn
```
Apply it and verify the application.
```shell
$ kubectl port-forward deployment/express-server 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
Handling connection for 80
Handling connection for 80
```
![](../resources/crossplane-visit-application-v2.jpg)
## Provision and consume cloud resource in different applications
In this section, a cloud resource will be provisioned in one application and consumed in another application.
### Provision Cloud Resource
Instantiate RDS component with `alibaba-rds` workload type in an [Application](../application) to provide cloud resources.
As we have claimed an RDS instance via the ComponentDefinition named `alibaba-rds`,
the component in the application should refer to this type.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: baas-rds
spec:
components:
- name: sample-db
type: alibaba-rds
properties:
name: sample-db
engine: mysql
engineVersion: "8.0"
instanceClass: rds.mysql.c1.large
username: oamtest
secretName: db-conn
```
Apply the application to Kubernetes and an RDS instance will be automatically provisioned (it may take some time, ~2 mins).
A secret `db-conn` will also be created in the same namespace as that of the application.
```shell
$ kubectl get application
NAME AGE
baas-rds 9h
$ kubectl get rdsinstance
NAME READY SYNCED STATE ENGINE VERSION AGE
sample-db-v1 True True Running mysql 8.0 9h
$ kubectl get secret
NAME TYPE DATA AGE
db-conn connection.crossplane.io/v1alpha1 4 9h
$ kubectl get secret db-conn -o yaml
apiVersion: v1
data:
endpoint: xxx==
password: yyy
port: MzMwNg==
username: b2FtdGVzdA==
kind: Secret
```
### Consume the Cloud Resource
In this section, we will show how another component consumes the RDS instance.
> Note: we recommend defining the cloud resource claim in an independent application if that cloud resource has a
> standalone lifecycle.
Now create the Application to consume the data.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webapp
spec:
components:
- name: express-server
type: webconsumer
properties:
image: zzxwill/flask-web-application:v0.3.1-crossplane
ports: 80
dbSecret: db-conn
```
```shell
$ kubectl get application
NAME AGE
baas-rds 10h
webapp 14h
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
express-server-v1 1/1 1 1 9h
$ kubectl port-forward deployment/express-server 80:80
```
We can see the cloud resource is successfully consumed by the application.
![](../resources/crossplane-visit-application.jpg)

View File

@@ -1,8 +1,16 @@
---
title: Provision and Consume Cloud Resources by Terraform
title: Cloud Services
---
> ⚠️ This section requires your platform builder has already installed the [Terraform related capabilities](../platform-engineers/terraform).
KubeVela allows you to declare the cloud services your application needs with a consistent API. Currently, we support both Terraform and Crossplane.
> Please check [the platform team guide for cloud services](../../platform-engineers/cloud-services) if you are interested in how these capabilities are maintained in KubeVela.
The cloud services will be consumed by the application via [Service Binding Trait](../traits/service-binding).
## Terraform
> ⚠️ This section assumes [Terraform related capabilities](../../platform-engineers/terraform) have been installed in your platform.
Check the parameters of cloud resource components and trait.
@@ -36,7 +44,9 @@ $ kubectl vela show service-binding
+-------------+------------------------------------------------+------------------+----------+---------+
```
Now apply an [application](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/application.yaml) as below.
### Alibaba Cloud RDS and OSS
A sample [application](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/application.yaml) is as below.
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -87,7 +97,79 @@ spec:
```
Apply it and verify the application.
## Crossplane
> ⚠️ This section assumes [Crossplane related capabilities](../../platform-engineers/crossplane) have been installed in your platform.
### Alibaba Cloud RDS and OSS
Check the parameters of cloud service component:
```shell
$ kubectl vela show alibaba-rds
# Properties
+---------------+------------------------------------------------+--------+----------+--------------------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------------+------------------------------------------------+--------+----------+--------------------+
| engine | RDS engine | string | true | mysql |
| engineVersion | The version of RDS engine | string | true | 8.0 |
| instanceClass | The instance class for the RDS | string | true | rds.mysql.c1.large |
| username | RDS username | string | true | |
| secretName | Secret name which RDS connection will write to | string | true | |
+---------------+------------------------------------------------+--------+----------+--------------------+
```
A sample application is as below.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webapp
spec:
components:
- name: express-server
type: webservice
properties:
image: zzxwill/flask-web-application:v0.3.1-crossplane
ports: 80
traits:
- type: service-binding
properties:
envMappings:
# environments refer to db-conn secret
DB_PASSWORD:
secret: db-conn
key: password # 1) If the env name is different from secret key, secret key has to be set.
endpoint:
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
username:
secret: db-conn
# environments refer to oss-conn secret
BUCKET_NAME:
secret: oss-conn
key: Bucket
- name: sample-db
type: alibaba-rds
properties:
name: sample-db
engine: mysql
engineVersion: "8.0"
instanceClass: rds.mysql.c1.large
username: oamtest
secretName: db-conn
- name: sample-oss
type: alibaba-oss
properties:
name: velaweb
secretName: oss-conn
```
## Verify
Deploy and verify the application (either provider is OK).
```shell
$ kubectl get application
@@ -101,4 +183,4 @@ Handling connection for 80
Handling connection for 80
```
![](../resources/crossplane-visit-application.jpg)
![](../../resources/crossplane-visit-application.jpg)

View File

@@ -0,0 +1,12 @@
---
title: Want More?
---
Components in KubeVela are designed to be brought by users.
Check the documentation below about how to bring your own components to the system in various approaches.
- [Helm](../../platform-engineers/helm/component) - Helm chart is a natural form of component, note that you need to have a valid Helm repository (e.g. GitHub repo or a Helm hub) to host the chart in this case.
- [CUE](../../platform-engineers/cue/component) - CUE is a powerful approach to encapsulate a component, and it doesn't require any repository.
- [Simple Template](../../platform-engineers/kube/component) - Not a Helm or CUE expert? A simple template approach is also provided to define any Kubernetes API resource as a component. Note that only key-value style parameters are supported in this case.
- [Cloud Services](../../platform-engineers/cloud-services) - KubeVela allows you to declare cloud services as part of the application and provision them in consistent API.

View File

@@ -0,0 +1,38 @@
---
title: Task
---
## Description
Describes jobs that run code or a script to completion.
## Samples
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: app-worker
spec:
components:
- name: mytask
type: task
properties:
image: perl
count: 10
cmd: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```
## Specification
```console
# Properties
+---------+--------------------------------------------------------------------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------+--------------------------------------------------------------------------------------------------+----------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| count | specify number of tasks to run in parallel | int | true | 1 |
| restart | Define the job restart policy, the value can only be Never or OnFailure. By default, it's Never. | string | true | Never |
| image | Which image would you like to use for your service | string | true | |
+---------+--------------------------------------------------------------------------------------------------+----------+----------+---------+
```

View File

@@ -1,107 +1,82 @@
---
title: Explore Applications
title: Web Service
---
We will introduce how to explore application-related resources in this section.
## Description
## List Application
Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
```shell
$ kubectl get application
NAME COMPONENT TYPE PHASE HEALTHY STATUS AGE
app-basic app-basic webservice running true 12d
website frontend webservice running true 4m54s
```
## Samples
You can also use the short name `kubectl get app`.
### View Application Details
```shell
$ kubectl get app website -o yaml
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
generation: 1
name: website
namespace: default
spec:
components:
- name: frontend
properties:
image: nginx
traits:
- properties:
cpuPercent: 60
max: 10
min: 1
type: cpuscaler
- properties:
image: fluentd
name: sidecar-test
type: sidecar
type: webservice
- name: backend
properties:
cmd:
- sleep
- "1000"
image: busybox
type: worker
status:
...
latestRevision:
name: website-v1
revision: 1
revisionHash: e9e062e2cddfe5fb
services:
- healthy: true
name: frontend
traits:
- healthy: true
type: cpuscaler
- healthy: true
type: sidecar
- healthy: true
name: backend
status: running
- name: frontend
type: webservice
properties:
image: oamdev/testapp:v1
cmd: ["node", "server.js"]
port: 8080
cpu: "0.1"
env:
- name: FOO
value: bar
- name: FOO
valueFrom:
secretKeyRef:
name: bar
key: bar
```
Here is some highlighted information that you need to know:
### Declare Volumes
1. `status.latestRevision` declares the current revision of this application.
2. `status.services` declares the components created by this application and their health state.
3. `status.status` declares the global state of this application.
The `Web Service` component exposes configurations for certain volume types, including `PersistentVolumeClaim`, `ConfigMap`, `Secret`, and `EmptyDir`.
### List Application Revisions
When we update an application, if there's any difference in the spec, KubeVela will create a new revision.
```shell
$ kubectl get apprev -l app.oam.dev/name=website
NAME AGE
website-v1 35m
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend
type: webservice
properties:
image: nginx
volumes:
- name: "my-pvc"
mountPath: "/var/www/html1"
type: "pvc" # PersistenVolumeClaim volume
claimName: "myclaim"
- name: "my-cm"
mountPath: "/var/www/html2"
type: "configMap" # ConfigMap volume (specifying items)
cmName: "myCmName"
items:
- key: "k1"
path: "./a1"
- key: "k2"
path: "./a2"
- name: "my-cm-noitems"
mountPath: "/var/www/html22"
type: "configMap" # ConfigMap volume (not specifying items)
cmName: "myCmName2"
- name: "mysecret"
type: "secret" # Secret volume
mountPath: "/var/www/html3"
secretName: "mysecret"
- name: "my-empty-dir"
type: "emptyDir" # EmptyDir volume
mountPath: "/var/www/html4"
```
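The referenced volume sources (e.g. the `myclaim` PVC above) must already exist in the cluster. A minimal sketch of such a claim, with size and access mode chosen arbitrarily for illustration:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```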
## Explore Components
## Specification
You can explore what kinds of component definitions are supported in your system.
```shell
kubectl get comp -n vela-system
NAME WORKLOAD-KIND DESCRIPTION
task Job Describes jobs that run code or a script to completion.
webservice Deployment Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers.
worker Deployment Describes long-running, scalable, containerized services that running at backend. They do NOT have network endpoint to receive external network traffic.
```
The component definition objects are namespace isolated, aligned with applications, while `vela-system` is a common system namespace of KubeVela;
definitions placed there can be used by every application.
You can use [vela kubectl plugin](./kubectlplugin) to view the detail usage of specific component definition.
```shell
$ kubectl vela show webservice
```console
# Properties
+------------------+----------------------------------------------------------------------------------+-----------------------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
@@ -151,34 +126,4 @@ $ kubectl vela show webservice
| name | The name of the secret in the pod's namespace to select from | string | true | |
| key | The key of the secret to select from. Must be a valid secret key | string | true | |
+------+------------------------------------------------------------------+--------+----------+---------+
```
## Explore Traits
You can explore what kinds of trait definitions are supported in your system.
```shell
$ kubectl get trait -n vela-system
NAME APPLIES-TO DESCRIPTION
cpuscaler [webservice worker] configure k8s HPA with CPU metrics for Deployment
ingress [webservice worker] Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage.
scaler [webservice worker] Configures replicas for your service.
sidecar [webservice worker] inject a sidecar container into your app
```
The trait definition objects are namespace isolated, aligned with applications, while `vela-system` is a common system namespace of KubeVela;
definitions placed there can be used by every application.
You can use `kubectl vela show` to see the usage of specific trait definition.
```shell
$ kubectl vela show sidecar
# Properties
+---------+-----------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+---------+-----------------------------------------+----------+----------+---------+
| name | Specify the name of sidecar container | string | true | |
| image | Specify the image of sidecar container | string | true | |
| command | Specify the commands run in the sidecar | []string | false | |
+---------+-----------------------------------------+----------+----------+---------+
```

View File

@@ -0,0 +1,37 @@
---
title: Worker
---
## Description
Describes long-running, scalable, containerized services that run in the backend. They do NOT have a network endpoint to receive external network traffic.
## Samples
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: app-worker
spec:
components:
- name: myworker
type: worker
properties:
image: "busybox"
cmd:
- sleep
- "1000"
```
## Specification
```console
# Properties
+-------+----------------------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-------+----------------------------------------------------+----------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| image | Which image would you like to use for your service | string | true | |
+-------+----------------------------------------------------+----------+----------+---------+
```

View File

@@ -1,8 +1,8 @@
---
title: Debug and Test
title: Dry-Run and Live-Diff
---
You can further debug and test your application by using the [vela kubectl plugin](./kubectlplugin).
KubeVela allows you to dry-run and live-diff your application.
## Dry-Run the `Application`
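For instance, a dry-run renders the application into its final Kubernetes manifests locally without applying it. A typical invocation with the kubectl plugin looks like the following (flags may vary slightly across plugin versions; check `kubectl vela dry-run -h`):
```shell
$ kubectl vela dry-run -f app.yaml
```
Similarly, `kubectl vela live-diff -f app.yaml -r <app-revision>` diffs the local manifest against a running application revision without touching the cluster.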

View File

@@ -1,74 +0,0 @@
---
title: Labels and Annotations
---
We will introduce how to add labels and annotations to your Application.
## List Traits
```bash
$ kubectl get trait -n vela-system
NAME APPLIES-TO DESCRIPTION
annotations ["webservice","worker"] Add annotations for your Workload.
cpuscaler ["webservice","worker"] configure k8s HPA with CPU metrics for Deployment
ingress ["webservice","worker"] Configures K8s ingress and service to enable web traffic for your service. Please use route trait in cap center for advanced usage.
labels ["webservice","worker"] Add labels for your Workload.
scaler ["webservice","worker"] Configures replicas for your service by patch replicas field.
sidecar ["webservice","worker"] inject a sidecar container into your app
```
You can use the `labels` and `annotations` traits to add labels and annotations to your workload.
## Apply Application
Let's use the `labels` and `annotations` traits in your Application.
```yaml
# myapp.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: labels
properties:
"release": "stable"
- type: annotations
properties:
"description": "web application"
```
Apply this Application.
```shell
kubectl apply -f myapp.yaml
```
Check the workload has been created successfully.
```bash
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
express-server 1/1 1 1 15s
```
Check the `labels` trait.
```bash
$ kubectl get deployments express-server -o jsonpath='{.spec.template.metadata.labels}'
{"app.oam.dev/component":"express-server","release": "stable"}
```
Check the `annotations` trait.
```bash
$ kubectl get deployments express-server -o jsonpath='{.spec.template.metadata.annotations}'
{"description":"web application"}
```

View File

@@ -1,97 +0,0 @@
---
title: Application Rollout
---
In this documentation, we will show how to use the rollout plan within an application to do a progressive rollout.
## Overview
By default, when we update the spec of the Application, KubeVela updates the workload directly, which relies on the underlying workload to provide availability.
KubeVela also provides a unified progressive rollout mechanism: you can specify `spec.rolloutPlan` in the application to use it.
## User Workflow
Here is the end-to-end user experience based on Deployment:
1. Deploy application to the cluster
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1.0
image: stefanprodan/podinfo:4.0.6
port: 8080
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 50%
- replicas: 50%
targetSize: 6
```
2. Users can modify the application container command and apply it:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=2.0
image: stefanprodan/podinfo:4.0.6
port: 8080
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 50%
- replicas: 50%
targetSize: 6
```
Users can check the status of the Application and see that the rollout completes, and that the
Application's `status.rollout.rollingState` becomes `rolloutSucceed`.
## Using AppRollout to adopt Application with rolloutPlan
Sometimes, we want to use [AppRollout](../rollout/rollout) to adopt the Application rollout, so we can use `AppRollout` to target a specific revision. `AppRollout` can both roll out and revert the application version.
If you want to let `AppRollout` adopt an Application with a `rolloutPlan`, please add the annotations to the application to tell `AppRollout` to adopt the rollout, and remove the strategy in `spec.rolloutPlan` to avoid conflicts.
E.g., to let AppRollout adopt the application above, you should update the application like below:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=2.0
image: stefanprodan/podinfo:4.0.6
port: 8080
```
Please refer to [AppRollout](../rollout/rollout) to learn more details.

View File

@@ -0,0 +1,238 @@
---
title: Advanced Rollout Plan
---
The rollout plan feature in KubeVela is essentially provided by `AppRollout` API.
## AppRollout
Below is an example of a rolling update of an application from v1 to v2 in three batches. The
first batch contains only 1 pod, while the remaining pods are split evenly across the rest of the batches.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v2
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 50%
- replicas: 50%
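    # batchPartition: only batches up to this 0-based index are processed; raise or remove it to let the rollout continue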
batchPartition: 1
```
## Basic Usage
1. Deploy application
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:4.0.6
port: 8080
replicas: 5
```
Verify that the AppRevision `test-rolling-v1` has been generated.
```shell
$ kubectl get apprev test-rolling-v1
NAME AGE
test-rolling-v1 9s
```
2. Attach the following rollout plan to upgrade the application to v1
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
targetAppRevisionName: test-rolling-v1
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 10%
- replicas: 40%
- replicas: 50%
targetSize: 5
```
Users can check the status of the AppRollout and wait for the rollout to complete.
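For example, inspect the rollout status directly (the rolling state appears under `status`):
```shell
$ kubectl get approllout rolling-example -o yaml
```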
3. Users can continue to modify the application image tag and apply it. This will generate a new AppRevision `test-rolling-v2`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:5.0.2
port: 8080
replicas: 5
```
Verify that the AppRevision `test-rolling-v2` has been generated.
```shell
$ kubectl get apprev test-rolling-v2
NAME AGE
test-rolling-v2 7s
```
4. Apply the application rollout that upgrades the application from v1 to v2.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v2
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
Users can check the status of the AppRollout and see that the rollout completes, and that the
AppRollout's "Rolling State" becomes `rolloutSucceed`.
## Advanced Usage
Using `AppRollout` separately enables some advanced use cases.
### Revert
5. Apply the application rollout that reverts the application from v2 to v1.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v2
targetAppRevisionName: test-rolling-v1
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
### Skip Revision Rollout
6. Users can continue to modify the application image tag by applying this YAML. This will generate a new AppRevision `test-rolling-v3`.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:5.2.0
port: 8080
replicas: 5
```
Verify that the AppRevision `test-rolling-v3` has been generated.
```shell
$ kubectl get apprev test-rolling-v3
NAME AGE
test-rolling-v3 7s
```
7. Apply the application rollout that rolls the application out from v1 to v3.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v3
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
## More Details About `AppRollout`
### Design Principles and Goals
There are several attempts at solving the rollout problem in the cloud-native community. However, none
of them provide a true rolling-style upgrade. For example, Flagger supports Blue/Green, Canary
and A/B testing. Therefore, we decided to add support for batch-based rolling upgrade as
the first style to support in KubeVela.
We designed the KubeVela rollout solutions with the following principles in mind:
- First, we want all flavors of rollout controllers to share the same core rollout
related logic. The trait and application related logic can be easily encapsulated into its own
package.
- Second, the core rollout related logic should be easily extensible to support different types of
workloads, i.e. Deployment, CloneSet, StatefulSet, DaemonSet or even customized workloads.
- Third, the core rollout related logic has a well-documented state machine that
does state transitions explicitly.
- Finally, the controllers can support all the rollout/upgrade needs of an application running
in a production environment, including Blue/Green, Canary and A/B testing.
### State Transition
Here is the high level state transition graph
![](../../resources/approllout-status-transition.jpg)
### Roadmap
Our recent roadmap for rollout plan is [here](./roadmap).

View File

@@ -1,12 +1,10 @@
---
title: Multi-Cluster Deployment
title: Placement
---
## Introduction
Modern application infrastructure involves multiple clusters to ensure high availability and maximize service throughput. In this section, we will introduce how to use KubeVela to achieve application deployment across multiple clusters with the following features supported:
- Rolling Upgrade: Continuously deploying apps requires rolling out in a safe manner, which usually involves step-by-step rollout batches and analysis.
- Traffic Shifting: When rolling out an upgrade to an app, the traffic needs to be split across both the old and new revisions to verify the new version while preserving service availability.
In this section, we will introduce how to use KubeVela to place applications across multiple clusters with traffic management enabled. For traffic management, KubeVela currently allows you to split the traffic onto both the old and new revisions during a rolling update and verify the new version while preserving service availability.
### AppDeployment

View File

@@ -1,5 +1,5 @@
---
title: Rollout
title: Canary
---
## Description

View File

@@ -1,18 +1,10 @@
---
title: Define Application Health Probe
title: Aggregated Health Probe
---
In this documentation, we will show how to define a health probe for an application.
The `HealthScope` allows you to define an aggregated health probe for all components in the same application.
## Set Health Check Rule
Basically, you can set the application's `spec.status.healthPolicy` field to specify the health check rule for the application. [reference](../cue/status)
## Advanced Health Probe
By using HealthScope you can check whether all pods of the workload are healthy.
1. Create health scope by applying this YAML
1. Create a health scope instance.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
@@ -26,7 +18,7 @@ spec:
kind: Deployment
name: express-server
```
2. Create an application with the health scope
2. Create an application that joins this health scope.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
@@ -43,7 +35,7 @@ spec:
scopes:
healthscopes.core.oam.dev: health-check
```
3. Check app status, will see health scope in `status.service.scopes`
3. Check the reference of the aggregated health probe (`status.service.scopes`).
```shell
$ kubectl get app vela-app -o yaml
```
@@ -63,7 +55,7 @@ status:
kind: HealthScope
name: health-check
```
4. Check health scope status
4. Check the health scope detail.
```shell
$ kubectl get healthscope health-check -o yaml
```
@@ -93,3 +85,5 @@ status:
healthyWorkloads: 1
total: 1
```
It shows the aggregated health status for all components in this application.

View File

@@ -2,7 +2,7 @@
title: Progressive Rollout RoadMap
---
Here are some workitems on the roadmap
Here are some working items on the roadmap
## Embed rollout in an application

View File

@@ -0,0 +1,71 @@
---
title: Rollout Plan
---
In this documentation, we will show how to use the rollout plan to perform a rolling update of an application.
## Overview
By default, when we update the properties of an application, KubeVela will update the underlying instances directly. The availability of the application will be guaranteed by rollout traits (if any).
KubeVela also provides a rolling-style update mechanism: you can specify `spec.rolloutPlan` in the application to use it.
## Example
1. Deploy application to the cluster
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1.0
image: stefanprodan/podinfo:4.0.6
port: 8080
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 50%
- replicas: 50%
targetSize: 6
```
2. Users can modify the application container command and apply it:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=2.0
image: stefanprodan/podinfo:4.0.6
port: 8080
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 50%
- replicas: 50%
targetSize: 6
```
Users can check the status of the application and see that the rollout completes, and that the
application's `status.rollout.rollingState` becomes `rolloutSucceed`.
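A quick way to check, using a standard `kubectl` jsonpath query against the status field named above:
```shell
$ kubectl get app test-rolling -o jsonpath='{.status.rollout.rollingState}'
rolloutSucceed
```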
## Advanced Usage
If you want to control and rollout the specific application revisions, or do revert, please refer to [Advanced Usage](advanced-rollout) to learn more details.

View File

@@ -0,0 +1,58 @@
---
title: Labels and Annotations
---
## List Traits
The `labels` and `annotations` traits allow you to attach labels and annotations to the component.
```yaml
# myapp.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: myapp
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: labels
properties:
"release": "stable"
- type: annotations
properties:
"description": "web application"
```
Deploy this application.
```shell
kubectl apply -f myapp.yaml
```
On the runtime cluster, check that the workload has been created successfully.
```bash
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
express-server 1/1 1 1 15s
```
Check the `labels`.
```bash
$ kubectl get deployments express-server -o jsonpath='{.spec.template.metadata.labels}'
{"app.oam.dev/component":"express-server","release": "stable"}
```
Check the `annotations`.
```bash
$ kubectl get deployments express-server -o jsonpath='{.spec.template.metadata.annotations}'
{"description":"web application"}
```

View File

@@ -1,12 +1,10 @@
---
title: Expose Application
title: Ingress
---
> ⚠️ This section requires your cluster has a working ingress.
> ⚠️ This section requires that your runtime cluster has a working ingress controller.
To expose your application publicly, you just need to add an `ingress` trait.
View ingress schema by [vela kubectl plugin](./kubectlplugin).
The `ingress` trait exposes a component to the public Internet via a valid domain.
```shell
$ kubectl vela show ingress
@@ -19,7 +17,7 @@ $ kubectl vela show ingress
+--------+------------------------------------------------------------------------------+----------------+----------+---------+
```
Then modify and deploy this application.
Attach an `ingress` trait to the component you want to expose, then deploy.
```yaml
# vela-app.yaml
@@ -56,7 +54,7 @@ first-vela-app express-server webservice healthChecking
first-vela-app express-server webservice running true 42s
```
You can also see the trait detail for the visiting url:
Check the trait detail for its visiting URL:
```shell
$ kubectl get application first-vela-app -o yaml
@@ -78,7 +76,7 @@ spec:
...
```
Then you will be able to visit this service.
Then you will be able to visit this application via its domain.
```
$ curl -H "Host:testsvc.example.com" http://<your ip address>/

View File

@@ -0,0 +1,7 @@
---
title: Want More?
---
Traits in KubeVela are designed as modularized building blocks, they are fully customizable and pluggable.
Check [this documentation](../../platform-engineers/cue/trait) about how to design and enable your own traits in KubeVela platform.

View File

@@ -1,12 +1,8 @@
---
title: Scale
title: Manual Scaling
---
In the [Deploy Application](../application) section, we use `cpuscaler` trait as an auto-scaler for the sample application.
## Manual Scale
You can scale your application manually by using the `scaler` trait.
The `scaler` trait allows you to scale your component instance manually.
```shell
$ kubectl vela show scaler
@@ -18,7 +14,7 @@ $ kubectl vela show scaler
+----------+--------------------------------+------+----------+---------+
```
Deploy the application.
Declare an application with scaler trait.
```yaml
# sample-manual.yaml
@@ -49,14 +45,14 @@ spec:
- '1000'
```
Change and Apply the sample application:
Apply the sample application:
```shell
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/enduser/sample-manual.yaml
application.core.oam.dev/website configured
```
After a while, you can see the underlying deployment of `frontend` component has two replicas now.
In the runtime cluster, you can see the underlying deployment of the `frontend` component has 2 replicas now.
```shell
$ kubectl get deploy -l app.oam.dev/name=website
@@ -65,4 +61,4 @@ backend 1/1 1 1 19h
frontend 2/2 2 2 19h
```
To scale up or scale down, you can just modify the `replicas` field of `scaler` trait and apply the application again.
To scale up or scale down, you just need to modify the `replicas` field of `scaler` trait and re-apply the YAML.
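The stanza to edit looks like the following (the replica count here is illustrative):
```yaml
traits:
  - type: scaler
    properties:
      replicas: 5  # change this value and re-apply to scale up or down
```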

View File

@@ -0,0 +1,92 @@
---
title: Service Binding
---
The service binding trait binds data from a Kubernetes `Secret` to the application container's environment variables.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "binding cloud resource secrets to pod env"
name: service-binding
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
env: [
for envName, v in parameter.envMappings {
name: envName
valueFrom: {
secretKeyRef: {
name: v.secret
if v["key"] != _|_ {
key: v.key
}
if v["key"] == _|_ {
key: envName
}
}
}
},
]
}]
}
}
parameter: {
// +usage=The mapping of environment variables to secret
envMappings: [string]: [string]: string
}
```
With the help of this `service-binding` trait, you can explicitly set the parameter `envMappings` to map
environment variable names to secret keys. Here is an example.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: webapp
spec:
components:
- name: express-server
type: webservice
properties:
image: zzxwill/flask-web-application:v0.3.1-crossplane
ports: 80
traits:
- type: service-binding
properties:
envMappings:
# environments refer to db-conn secret
DB_PASSWORD:
secret: db-conn
key: password # 1) If the env name is different from secret key, secret key has to be set.
endpoint:
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
username:
secret: db-conn
- name: sample-db
type: alibaba-rds
properties:
name: sample-db
engine: mysql
engineVersion: "8.0"
instanceClass: rds.mysql.c1.large
username: oamtest
secretName: db-conn
```

View File

@@ -1,8 +1,8 @@
---
title: Attach Sidecar
title: Attaching Sidecar
---
In this section, we will show you how to use `sidecar` trait to collect logs.
The `sidecar` trait allows you to attach a sidecar container to the component.
## Show the Usage of Sidecar
@@ -28,7 +28,7 @@ $ kubectl vela show sidecar
+-----------+-------------+--------+----------+---------+
```
## Apply the Application
## Deploy the Application
In this Application, the component `log-gen-worker` and the sidecar share the data volume that saves the logs.
The sidecar will re-output the logs to stdout.
@@ -71,13 +71,13 @@ spec:
path: /var/log
```
Apply this Application.
Deploy this Application.
```shell
kubectl apply -f app.yaml
```
Check the workload generated by the Application.
On the runtime cluster, check the name of the running pod.
```shell
$ kubectl get pod
@@ -85,7 +85,7 @@ NAME READY STATUS RESTARTS AGE
log-gen-worker-76945f458b-k7n9k 2/2 Running 0 90s
```
Check the output of the sidecar.
And check the logging output of the sidecar.
```shell
$ kubectl logs -f log-gen-worker-76945f458b-k7n9k count-log

View File

@@ -0,0 +1,51 @@
---
title: Cloud Volumes
---
This section introduces how to attach cloud volumes to a component, for example AWS ElasticBlockStore,
Azure Disk, Alibaba Cloud OSS, etc.
Cloud volumes are not built-in capabilities in KubeVela, so you need to enable these traits first. Let's use AWS EBS as an example.
Install and check the `TraitDefinition` for AWS EBS volume trait.
```shell
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/app-with-volumes/td-awsEBS.yaml
```
```shell
$ kubectl vela show aws-ebs-volume
+-----------+----------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+----------------------------------------------------------------+--------+----------+---------+
| name | The name of volume. | string | true | |
| mountPath | | string | true | |
| volumeID | Unique id of the persistent disk resource. | string | true | |
| fsType | Filesystem type to mount. | string | true | ext4 |
| partition | Partition on the disk to mount. | int | false | |
| readOnly | ReadOnly here will force the ReadOnly setting in VolumeMounts. | bool | true | false |
+-----------+----------------------------------------------------------------+--------+----------+---------+
```
Then we can attach an `aws-ebs` volume to a component.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: app-worker
spec:
components:
- name: myworker
type: worker
properties:
image: "busybox"
cmd:
- sleep
- "1000"
traits:
- type: aws-ebs-volume
properties:
name: "my-ebs"
mountPath: "/myebs"
volumeID: "my-ebs-id"
```

View File

@@ -1,101 +0,0 @@
---
title: Attach Volumes
---
We will introduce how to attach basic volumes as well as extended custom
volume types for applications.
## Attach Basic Volume
`worker` and `webservice` are both capable of attaching multiple common types of
volumes, including `persistentVolumeClaim`, `configMap`, `secret`, and `emptyDir`.
You should indicate the volume type in the component's properties.
(we use `pvc` instead of `persistentVolumeClaim` for brevity)
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: website
spec:
components:
- name: frontend
type: webservice
properties:
image: nginx
volumes:
- name: "my-pvc"
mountPath: "/var/www/html1"
type: "pvc" # persistenVolumeClaim type volume
claimName: "myclaim"
- name: "my-cm"
mountPath: "/var/www/html2"
type: "configMap" # configMap type volume (specifying items)
cmName: "myCmName"
items:
- key: "k1"
path: "./a1"
- key: "k2"
path: "./a2"
- name: "my-cm-noitems"
mountPath: "/var/www/html22"
type: "configMap" # configMap type volume (not specifying items)
cmName: "myCmName2"
- name: "mysecret"
type: "secret" # secret type volume
mountPath: "/var/www/html3"
secretName: "mysecret"
- name: "my-empty-dir"
type: "emptyDir" # emptyDir type volume
mountPath: "/var/www/html4"
```
You should make sure the attached volume sources are prepared in your cluster.
## Extend custom volume types and attach
It's also allowed to extend custom volume types, such as AWS ElasticBlockStore,
Azure disk, Alibaba Cloud OSS, etc.
To enable attaching extended volume types, we should install specific Trait
capability first.
```shell
$ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/app-with-volumes/td-awsEBS.yaml
```
```shell
$ kubectl vela show aws-ebs-volume
+-----------+----------------------------------------------------------------+--------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-----------+----------------------------------------------------------------+--------+----------+---------+
| name | The name of volume. | string | true | |
| mountPath | | string | true | |
| volumeID | Unique id of the persistent disk resource. | string | true | |
| fsType | Filesystem type to mount. | string | true | ext4 |
| partition | Partition on the disk to mount. | int | false | |
| readOnly | ReadOnly here will force the ReadOnly setting in VolumeMounts. | bool | true | false |
+-----------+----------------------------------------------------------------+--------+----------+---------+
```
Then we can define an Application using aws-ebs volumes.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: app-worker
spec:
components:
- name: myworker
type: worker
properties:
image: "busybox"
cmd:
- sleep
- "1000"
traits:
- type: aws-ebs-volume
properties:
name: "my-ebs"
mountPath: "/myebs"
volumeID: "my-ebs-id"
```

View File

@@ -7,13 +7,13 @@ import TabItem from '@theme/TabItem';
> For upgrading existing KubeVela, please read the [upgrade guide](./advanced-install#upgrade).
## 1. Choose Kubernetes Cluster
## 1. Choose Control Plane Cluster
Requirements:
- Kubernetes cluster >= v1.15.0
- `kubectl` installed and configured
KubeVela is a simple custom controller that can be installed on any Kubernetes cluster including managed offerings or your own clusters. The only requirement is to ensure [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) is installed and enabled.
KubeVela relies on Kubernetes as its control plane. The control plane could be any managed Kubernetes offering or your own cluster. The only requirement is to ensure [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) is installed and enabled.
For local deployment and test, you could use `minikube` or `kind`.
@@ -78,7 +78,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/mast
</Tabs>
## 2. Install KubeVela Controller
## 2. Install KubeVela
1. Add helm chart repo for KubeVela
```shell script
@@ -130,9 +130,9 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/mast
## 3. Get KubeVela CLI
Using the KubeVela CLI gives you a simplified workflow with optimized output compared to using `kubectl`. It is not mandatory though.
KubeVela CLI gives you a simplified workflow to manage applications with optimized output. It is not mandatory though.
Here are three ways to get the KubeVela CLI:
The KubeVela CLI can be [installed as a kubectl plugin](./kubectl-plugin.mdx) or as a standalone binary.
<Tabs
className="unique-tabs"
@@ -188,7 +188,6 @@ sudo mv ./vela /usr/local/bin/vela
</TabItem>
</Tabs>
## 4. Enable Helm Support
KubeVela leverages Helm controller from [Flux v2](https://github.com/fluxcd/flux2) to deploy [Helm](https://helm.sh/) based components.

View File

@@ -8,64 +8,77 @@ slug: /
## Motivation
The trend of cloud-native technology is moving towards pursuing consistent application delivery across clouds and on-premises infrastructures using Kubernetes as the common abstraction layer. Kubernetes, although excellent in abstracting low-level infrastructure details, does introduce extra complexity to application developers, namely understanding the concepts of pods, port exposing, privilege escalation, resource claims, CRD, and so on. We've seen how the nontrivial learning curve and the lack of developer-facing abstractions have impacted user experiences, slowed down productivity, and led to unexpected errors or misconfigurations in production. People start to question the value of this revolution: "why am I bothered with all these details?".
The trend of cloud-native technology is moving towards pursuing consistent application delivery across clouds and on-premises infrastructures using Kubernetes as the common layer. Kubernetes, although excellent in abstracting low-level infrastructure details, does not introduce abstractions to model software deployment on top of hybrid environments. We've seen how the lack of such application-level context has impacted user experiences, slowed down productivity, and led to unexpected errors or misconfigurations in production.
On the other hand, abstracting Kubernetes to serve developers' requirements is a highly opinionated process, and the resultant abstractions would only make sense had the decision makers been the platform builders. Unfortunately, the platform builders today face the following dilemma:
*There is no tool or framework for them to easily build user friendly yet highly extensible abstractions*.
Thus, many platforms today are essentially restricted abstractions with in-house add-on mechanisms despite the extensibility of Kubernetes. This makes extending such platforms for developers' requirements or to wider scenarios almost impossible, not to mention taking the full advantage of the rich Kubernetes ecosystems.
In the end, developers complain those platforms are too rigid and slow in response to feature requests or improvements. The platform builders do want to help but the engineering effort is daunting: any simple API change in the platform could easily become a marathon negotiation around the opinionated abstraction design.
On the other hand, modeling the deployment of a modern microservice application is a highly opinionated and fragmented process today. Thus, most solutions that aim to solve the above problem (although built with Kubernetes) are essentially restricted systems and barely extensible. As the needs of your application grow, they are almost certain to outgrow the capabilities of such systems. Application teams complain they are too rigid and slow in response to feature requests or improvements. The platform team does want to help but the engineering effort is daunting: any simple API change in the platform could easily become a marathon negotiation around the opinionated abstraction design.
## What is KubeVela?
For platform builders, KubeVela serves as a framework that relieves the pains of building developer-focused platforms by doing the following:
KubeVela is a modern application platform that aims to make deploying and managing applications across hybrid, multi-cloud environments easier and faster by doing the following:
- Developer Centric. KubeVela introduces the Application as its main API to capture a full deployment of microservices, and builds features around the application's needs only. Progressive rollout and multi-cluster deployment are provided out of the box. No infrastructure-level concerns, simply deploy.
- Extending Natively. In KubeVela, all platform features (such as workloads, operational behaviors, and cloud services) are defined as reusable [CUE](https://github.com/cuelang/cue) and/or [Helm](https://helm.sh) components, per the needs of the application deployment. And as the application's needs grow, your platform capabilities expand naturally in a programmable approach.
**Application Centric** - KubeVela introduces a consistent yet application-centric API to capture a full deployment of microservices on top of hybrid environments. No infrastructure-level concerns, simply deploy.
- Simple yet Reliable. Though perfectly flexible, X-as-Code may lead to configuration drift (i.e. the running instances are not in line with the expected configuration). KubeVela solves this by modeling its capabilities as code but enforcing them via the Kubernetes control loop, which never leaves inconsistency in your clusters. This also makes KubeVela work with any CI/CD or GitOps tools via its declarative API without integration burden.
**Natively Extensible** - KubeVela uses CUE to glue capabilities provided by the runtime infrastructure and expose them to users via self-service APIs. When users' needs grow, these APIs can naturally expand in a programmable approach.
With KubeVela, the platform builders finally have the tooling support to design easy-to-use abstractions and ship them to end users with high confidence and low turnaround time.
**Runtime Agnostic** - KubeVela is built with Kubernetes as its control plane but is adaptable to any runtime as the data plane. It can deploy (and manage) diverse workload types such as containers, cloud functions, databases, or even EC2 instances across hybrid environments.
For end users (e.g. app developers and operators), these abstractions enable them to design and ship applications to Kubernetes clusters with minimal effort: instead of managing a handful of infrastructure details, a simple application definition that can be easily integrated with any CI/CD pipeline is all they need.
## Architecture
The overall architecture of KubeVela is shown below:
![alt](resources/arch.png)
### Control Plane
The control plane is where KubeVela itself lives. As the project's name implies, KubeVela by design leverages Kubernetes as its control plane. This is the key to how KubeVela brings full *automation* and strong *determinism* to application delivery at scale. Users interact with KubeVela via the application-centric API to model the application deployment, and KubeVela distributes it to the target *runtime infrastructure* per the policies and rules declared by users.
### Runtime Infrastructures
Runtime infrastructures are where the applications actually run. KubeVela allows you to model and deploy applications to any Kubernetes based infrastructure (either local, managed offerings, or IoT/Edge/On-Premise ones), or to public cloud platforms.
## Comparisons
### KubeVela vs. Platform-as-a-Service (PaaS)
The typical examples are Heroku and Cloud Foundry. They provide full application management capabilities and aim to improve developer experience and efficiency. In this context, KubeVela shares the same goal.
The typical examples are Heroku and Cloud Foundry. They provide full application deployment and management capabilities and aim to improve developer experience and efficiency. In this context, KubeVela shares the same goal.
The biggest difference, though, lies in **flexibility**.
KubeVela is a Kubernetes add-on that enables you to serve end users with programmable building blocks which are fully flexible and coded by yourself. Compared to this mechanism, traditional PaaS systems are highly restricted, i.e. they have to enforce constraints on the type of supported applications and capabilities, and as application needs grow, you always outgrow the capabilities of the PaaS system - this will never happen in a KubeVela platform.
KubeVela enables you to serve end users with programmable building blocks (based on [CUE](https://cuelang.org/)) which are fully flexible and can be extended at any time. Compared to this mechanism, traditional PaaS systems are highly restricted, i.e. they have to enforce constraints on the type of supported applications and capabilities, and as application needs grow, you always outgrow the capabilities of the PaaS system - this will never happen in a KubeVela platform.
So think of KubeVela as a Heroku that is fully extensible to serve your needs as you grow.
So think of KubeVela as a Heroku that stays fully extensible as your needs grow.
### KubeVela vs. Serverless
Serverless platforms such as AWS Lambda provide an extraordinary user experience and agility for deploying serverless applications. However, those platforms impose even more constraints on extensibility. They are arguably "hard-coded" PaaS.
Kubernetes based serverless platforms such as Knative and OpenFaaS can be easily integrated with KubeVela by registering themselves as new workload types and traits. Even for AWS Lambda, there is a success story of integrating it with KubeVela using the tools developed by Crossplane.
KubeVela can easily deploy both Kubernetes based serverless workloads such as Knative/OpenFaaS, or cloud functions such as AWS Lambda. Simply register what you want to deploy as a "component".
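To make that concrete, here is a hedged sketch of wrapping a Knative Service into a component (the definition name `knative-service` and its single `image` parameter are illustrative, not a built-in KubeVela capability):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: knative-service            # hypothetical name for this sketch
  namespace: vela-system
  annotations:
    definition.oam.dev/description: "Knative Service as a serverless workload"
spec:
  workload:
    definition:
      apiVersion: serving.knative.dev/v1
      kind: Service
  schematic:
    cue:
      template: |
        output: {
          apiVersion: "serving.knative.dev/v1"
          kind:       "Service"
          spec: template: spec: containers: [{
            image: parameter.image
          }]
        }
        parameter: {
          // +usage=Container image of the serverless workload
          image: string
        }
```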
### KubeVela vs. Platform agnostic developer tools
The typical example is HashiCorp's Waypoint. Waypoint is a developer facing tool which introduces a consistent workflow (i.e., build, deploy, release) to ship applications on top of different platforms.
KubeVela can be integrated into such tools as an application platform. In this case, developers could use the Waypoint workflow to manage applications by leveraging the abstractions (e.g. application, rollout, ingress, autoscaling, etc.) you built via KubeVela.
KubeVela can be integrated with such tools seamlessly. In this case, developers would use the Waypoint workflow as the UI to deploy and manage applications across hybrid environments with KubeVela's abstractions (e.g. applications, components, traits, etc.).
### KubeVela vs. Helm
Helm is a package manager for Kubernetes that packages, installs, and upgrades a set of YAML files for Kubernetes as a unit. KubeVela can patch, deploy, and roll out Helm packaged application components, and it also leverages Helm to manage the capability dependencies at the system level.
Helm is a package manager for Kubernetes that packages, installs, and upgrades a set of YAML files for Kubernetes as a unit.
Though KubeVela itself is not a package manager, it's a core engine for platform builders to create developer-centric deployment systems in an easy and repeatable approach.
KubeVela, as a modern deployment system, can naturally deploy Helm charts across hybrid environments. For example, you could easily use KubeVela to declare and deploy an application composed of a WordPress Helm chart and an AWS RDS instance defined by Terraform, or distribute the Helm chart to multiple clusters.
KubeVela also leverages Helm to manage the capability addons in runtime clusters.
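As a hedged sketch of that composition (the component types `my-wordpress-chart` and `aws-rds` are hypothetical names for a Helm based and a Terraform based `ComponentDefinition` registered by the platform team):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: wordpress-with-rds
spec:
  components:
    - name: wordpress
      type: my-wordpress-chart      # renders the WordPress Helm chart
      properties:
        image:
          tag: "5.7"
    - name: database
      type: aws-rds                 # provisions an RDS instance via Terraform
      properties:
        instanceClass: db.t3.micro
        secretName: rds-conn        # connection info written to this secret
```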
### KubeVela vs. Kubernetes
KubeVela is a Kubernetes add-on for building developer-centric deployment system. It leverages [Open Application Model](https://github.com/oam-dev/spec) and the native Kubernetes extensibility to resolve a hard problem - making shipping applications enjoyable on Kubernetes.
## Getting Started
Now let's [get started](./quick-start) with KubeVela!
## What's Next
Here are some recommended next steps:
- [Get started](./quick-start) with KubeVela.
- Learn KubeVela's [core concepts](./concepts).
- Learn how to [deploy an application](end-user/application) in detail and understand how it works.
- Join `#kubevela` channel in CNCF [Slack](https://cloud-native.slack.com) and/or [Gitter](https://gitter.im/oam-dev/community)
Welcome onboard and sail Vela!


@@ -1,355 +1,22 @@
---
title: Crossplane
title: Overview
---
Cloud services are also part of your application deployment.
Cloud services are important components of your application, and KubeVela allows you to provision and consume them with a consistent experience.
## Should a Cloud Service be a Component or Trait?
## How Does KubeVela Manage Cloud Services?
The following practice could be considered:
- Use `ComponentDefinition` if:
- you want to allow your end users to explicitly claim an "instance" of the cloud service and consume it, and to release the "instance" when the application is deleted.
- Use `TraitDefinition` if:
- you don't want to give your end users any control over claiming or releasing the cloud service; you only want to give them a way to consume a cloud service, which could even be managed by some other system. A `Service Binding` trait is widely used in this case.
In this documentation, we will define an Alibaba Cloud RDS (Relational Database Service) and an Alibaba Cloud OSS (Object Storage System) as examples. This mechanism works the same with other cloud providers.
In a single application, they take the form of Traits; across multiple applications, they take the form of Components.
In KubeVela, the needed cloud services are claimed as *components* in an application, and consumed via *Service Binding Trait* by other components.
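For example, a minimal sketch of this pattern could look like the following (it assumes the `alibaba-rds` component and `service-binding` trait defined later in this document are already registered; names are illustrative):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: webapp-with-db
spec:
  components:
    # the cloud service is claimed as a component
    - name: sample-db
      type: alibaba-rds
      properties:
        engine: mysql
        engineVersion: "8.0"
        instanceClass: rds.mysql.c1.large
        username: oamtest
        secretName: db-conn          # connection info is written to this secret
    # another component consumes it via the service-binding trait
    - name: express-server
      type: webservice
      properties:
        image: crccheck/hello-world
        port: 8000
      traits:
        - type: service-binding
          properties:
            envMappings:
              DB_PASSWORD:
                secret: db-conn
                key: password
```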
## Install and Configure Crossplane
## Does KubeVela Talk to the Clouds?
This guide will use [Crossplane](https://crossplane.io/) as the cloud service provider. Please refer to [Installation](https://github.com/crossplane/provider-alibaba/releases/tag/v0.5.0)
to install the Crossplane Alibaba provider v0.5.0.
KubeVela relies on [Terraform Controller](https://github.com/oam-dev/terraform-controller) or [Crossplane](http://crossplane.io/) as providers to talk to the clouds. Please check the documentation below for detailed steps.
If you'd like to configure any other Crossplane providers, please refer to [Crossplane Select a Getting Started Configuration](https://crossplane.io/docs/v1.1/getting-started/install-configure.html#select-a-getting-started-configuration).
- [Terraform](./terraform.md)
- [Crossplane](./crossplane.md)
```
$ kubectl crossplane install provider crossplane/provider-alibaba:v0.5.0
## Can an Instance of a Cloud Service be Shared by Multiple Applications?
# Note: the xxx and yyy here are your own AccessKey and SecretKey to the cloud resources.
$ kubectl create secret generic alibaba-account-creds -n crossplane-system --from-literal=accessKeyId=xxx --from-literal=accessKeySecret=yyy
Yes, though we currently defer this to the providers, so by default cloud service instances are not shared and are dedicated per `Application`. A workaround for now is to use a separate `Application` to declare the cloud service only; other `Application`s can then consume it via the service binding trait in a shared approach.
$ kubectl apply -f provider.yaml
```
`provider.yaml` is as below.
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: crossplane-system
---
apiVersion: alibaba.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
name: default
spec:
credentials:
source: Secret
secretRef:
namespace: crossplane-system
name: alibaba-account-creds
key: credentials
region: cn-beijing
```
Note: we currently just use the Crossplane Alibaba provider, but we plan to adopt [Crossplane](https://crossplane.io/) as the
general cloud resource operator for Kubernetes in the near future.
## Register ComponentDefinition and TraitDefinition
### Register ComponentDefinition `alibaba-rds` as RDS cloud resource producer
Register the `alibaba-rds` workload type to KubeVela.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: alibaba-rds
namespace: vela-system
annotations:
definition.oam.dev/description: "Alibaba Cloud RDS Resource"
spec:
workload:
definition:
apiVersion: database.alibaba.crossplane.io/v1alpha1
kind: RDSInstance
schematic:
cue:
template: |
output: {
apiVersion: "database.alibaba.crossplane.io/v1alpha1"
kind: "RDSInstance"
spec: {
forProvider: {
engine: parameter.engine
engineVersion: parameter.engineVersion
dbInstanceClass: parameter.instanceClass
dbInstanceStorageInGB: 20
securityIPList: "0.0.0.0/0"
masterUsername: parameter.username
}
writeConnectionSecretToRef: {
namespace: context.namespace
name: parameter.secretName
}
providerConfigRef: {
name: "default"
}
deletionPolicy: "Delete"
}
}
parameter: {
// +usage=RDS engine
engine: *"mysql" | string
// +usage=The version of RDS engine
engineVersion: *"8.0" | string
// +usage=The instance class for the RDS
instanceClass: *"rds.mysql.c1.large" | string
// +usage=RDS username
username: string
// +usage=Secret name which RDS connection will write to
secretName: string
}
```
### Register ComponentDefinition `alibaba-oss` as OSS cloud resource producer
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: alibaba-oss
namespace: vela-system
annotations:
definition.oam.dev/description: "Alibaba Cloud OSS Resource"
spec:
workload:
definition:
apiVersion: oss.alibaba.crossplane.io/v1alpha1
kind: Bucket
schematic:
cue:
template: |
output: {
apiVersion: "oss.alibaba.crossplane.io/v1alpha1"
kind: "Bucket"
spec: {
name: parameter.name
acl: parameter.acl
storageClass: parameter.storageClass
dataRedundancyType: parameter.dataRedundancyType
writeConnectionSecretToRef: {
namespace: context.namespace
name: parameter.secretName
}
providerConfigRef: {
name: "default"
}
deletionPolicy: "Delete"
}
}
parameter: {
// +usage=OSS bucket name
name: string
// +usage=The access control list of the OSS bucket
acl: *"private" | string
// +usage=The storage type of OSS bucket
storageClass: *"Standard" | string
// +usage=The data Redundancy type of OSS bucket
dataRedundancyType: *"LRS" | string
// +usage=Secret name which OSS connection will write to
secretName: string
}
```
### Register ComponentDefinition `webconsumer` with Secret Reference
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webconsumer
annotations:
definition.oam.dev/description: A Deployment provides declarative updates for Pods and ReplicaSets
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
if parameter["dbSecret"] != _|_ {
env: [
{
name: "username"
value: dbConn.username
},
{
name: "endpoint"
value: dbConn.endpoint
},
{
name: "DB_PASSWORD"
value: dbConn.password
},
]
}
ports: [{
containerPort: parameter.port
}]
if parameter["cpu"] != _|_ {
resources: {
limits:
cpu: parameter.cpu
requests:
cpu: parameter.cpu
}
}
}]
}
}
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Commands to run in the container
cmd?: [...string]
// +usage=Which port do you want customer traffic sent to
// +short=p
port: *80 | int
// +usage=Referred db secret
// +insertSecretTo=dbConn
dbSecret?: string
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
cpu?: string
}
dbConn: {
username: string
endpoint: string
password: string
}
```
The key point is the annotation `// +insertSecretTo=dbConn`. With it, KubeVela knows the parameter is a K8s secret; it will parse
the secret and bind the data into the CUE struct `dbConn`.
Then the `output` can reference the `dbConn` struct for the data values. The name `dbConn` can be any name;
it's just an example in this case. The `+insertSecretTo` keyword is what defines the data binding mechanism.
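For instance, an application could feed the RDS connection secret to the `webconsumer` component through its `dbSecret` parameter, as in this sketch (application and component names are illustrative):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: webapp
spec:
  components:
    - name: sample-db
      type: alibaba-rds
      properties:
        engine: mysql
        username: oamtest
        secretName: db-conn          # RDS connection info lands in this secret
    - name: web
      type: webconsumer
      properties:
        image: nginx:1.9.4
        port: 80
        dbSecret: db-conn            # parsed and bound into the dbConn struct
```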
### Prepare TraitDefinition `service-binding` to do env-secret mapping
For data binding in an Application, KubeVela recommends defining a trait to finish the job. We have prepared a common
trait for convenience. This trait works well for binding a resource's info into the pod spec's env.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
annotations:
definition.oam.dev/description: "binding cloud resource secrets to pod env"
name: service-binding
spec:
appliesToWorkloads:
- webservice
- worker
schematic:
cue:
template: |
patch: {
spec: template: spec: {
// +patchKey=name
containers: [{
name: context.name
// +patchKey=name
env: [
for envName, v in parameter.envMappings {
name: envName
valueFrom: {
secretKeyRef: {
name: v.secret
if v["key"] != _|_ {
key: v.key
}
if v["key"] == _|_ {
key: envName
}
}
}
},
]
}]
}
}
parameter: {
// +usage=The mapping of environment variables to secret
envMappings: [string]: [string]: string
}
```
With the help of this `service-binding` trait, developers can explicitly set the parameter `envMappings` to map
environment variable names to secret keys. Here is an example.
```yaml
...
traits:
- type: service-binding
properties:
envMappings:
# environments refer to db-conn secret
DB_PASSWORD:
secret: db-conn
key: password # 1) If the env name is different from secret key, secret key has to be set.
endpoint:
secret: db-conn # 2) If the env name is the same as the secret key, secret key can be omitted.
username:
secret: db-conn
# environments refer to oss-conn secret
BUCKET_NAME:
secret: oss-conn
key: Bucket
...
```
You can see [the end user usage workflow](../end-user/cloud-resources) to learn how it is used.
In the future, we are considering making this a standard feature of KubeVela, so you could declare whether a given cloud service component should be shared or not.
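As a sketch of that workaround (reusing the `alibaba-rds` component and `service-binding` trait from this document; the application names are illustrative):
```yaml
# Application 1: declares the cloud service only
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: shared-db
spec:
  components:
    - name: sample-db
      type: alibaba-rds
      properties:
        engine: mysql
        username: oamtest
        secretName: db-conn          # consumers read this secret
---
# Application 2: consumes the shared instance via service binding
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: consumer-app
spec:
  components:
    - name: web
      type: webservice
      properties:
        image: crccheck/hello-world
      traits:
        - type: service-binding
          properties:
            envMappings:
              DB_PASSWORD:
                secret: db-conn
                key: password
```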


@@ -0,0 +1,155 @@
---
title: Crossplane
---
In this documentation, we will use Alibaba Cloud's RDS (Relational Database Service), and Alibaba Cloud's OSS (Object Storage System) as examples to show how to enable cloud services as part of the application deployment.
These cloud services are provided by Crossplane.
## Prepare Crossplane
<details>
Please refer to [Installation](https://github.com/crossplane/provider-alibaba/releases/tag/v0.5.0)
to install the Crossplane Alibaba provider v0.5.0.
> If you'd like to configure any other Crossplane providers, please refer to [Crossplane Select a Getting Started Configuration](https://crossplane.io/docs/v1.1/getting-started/install-configure.html#select-a-getting-started-configuration).
```
$ kubectl crossplane install provider crossplane/provider-alibaba:v0.5.0
# Note: the xxx and yyy here are your own AccessKey and SecretKey to the cloud resources.
$ kubectl create secret generic alibaba-account-creds -n crossplane-system --from-literal=accessKeyId=xxx --from-literal=accessKeySecret=yyy
$ kubectl apply -f provider.yaml
```
`provider.yaml` is as below.
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: crossplane-system
---
apiVersion: alibaba.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
name: default
spec:
credentials:
source: Secret
secretRef:
namespace: crossplane-system
name: alibaba-account-creds
key: credentials
region: cn-beijing
```
</details>
## Register `alibaba-rds` Component
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: alibaba-rds
namespace: vela-system
annotations:
definition.oam.dev/description: "Alibaba Cloud RDS Resource"
spec:
workload:
definition:
apiVersion: database.alibaba.crossplane.io/v1alpha1
kind: RDSInstance
schematic:
cue:
template: |
output: {
apiVersion: "database.alibaba.crossplane.io/v1alpha1"
kind: "RDSInstance"
spec: {
forProvider: {
engine: parameter.engine
engineVersion: parameter.engineVersion
dbInstanceClass: parameter.instanceClass
dbInstanceStorageInGB: 20
securityIPList: "0.0.0.0/0"
masterUsername: parameter.username
}
writeConnectionSecretToRef: {
namespace: context.namespace
name: parameter.secretName
}
providerConfigRef: {
name: "default"
}
deletionPolicy: "Delete"
}
}
parameter: {
// +usage=RDS engine
engine: *"mysql" | string
// +usage=The version of RDS engine
engineVersion: *"8.0" | string
// +usage=The instance class for the RDS
instanceClass: *"rds.mysql.c1.large" | string
// +usage=RDS username
username: string
// +usage=Secret name which RDS connection will write to
secretName: string
}
```
## Register `alibaba-oss` Component
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: alibaba-oss
namespace: vela-system
annotations:
definition.oam.dev/description: "Alibaba Cloud RDS Resource"
spec:
workload:
definition:
apiVersion: oss.alibaba.crossplane.io/v1alpha1
kind: Bucket
schematic:
cue:
template: |
output: {
apiVersion: "oss.alibaba.crossplane.io/v1alpha1"
kind: "Bucket"
spec: {
name: parameter.name
acl: parameter.acl
storageClass: parameter.storageClass
dataRedundancyType: parameter.dataRedundancyType
writeConnectionSecretToRef: {
namespace: context.namespace
name: parameter.secretName
}
providerConfigRef: {
name: "default"
}
deletionPolicy: "Delete"
}
}
parameter: {
// +usage=OSS bucket name
name: string
// +usage=The access control list of the OSS bucket
acl: *"private" | string
// +usage=The storage type of OSS bucket
storageClass: *"Standard" | string
// +usage=The data Redundancy type of OSS bucket
dataRedundancyType: *"LRS" | string
// +usage=Secret name which OSS connection will write to
secretName: string
}
```


@@ -22,7 +22,7 @@ The reasons for KubeVela supports CUE as a first-class solution to design abstra
Please make sure the following CLIs are present in your environment:
* [`cue` >=v0.2.2](https://cuelang.org/docs/install/)
* [`vela` (>v1.0.0)](../install#4-optional-get-kubevela-cli)
* [`vela` (>v1.0.0)](../../install#3-get-kubevela-cli)
## CUE CLI Basic


@@ -4,7 +4,7 @@ title: How-to
In this section, we will introduce how to use [CUE](https://cuelang.org/) to declare app components via `ComponentDefinition`.
> Before reading this part, please make sure you've learned the [Definition CRD](../platform-engineers/definition-and-templates) in KubeVela.
> Before reading this part, please make sure you've learned the [Definition CRD](../definition-and-templates.md) in KubeVela.
## Declare `ComponentDefinition`
@@ -221,7 +221,7 @@ output: {
It's common that a component definition is composed of multiple API resources, for example, a `webserver` component that is composed of a Deployment and a Service. CUE is a great solution for achieving this with simplified primitives.
> Another approach to do composition in KubeVela of course is [using Helm](/docs/helm/component).
> Another approach to do composition in KubeVela of course is [using Helm](../helm/component.md).
## How-to


@@ -1,10 +1,10 @@
---
title: Define resources located in defferent namespace with application
title: Render Resource to Other Namespaces
---
In this section, we will introduce how to use a CUE template to create resources (workload/trait) in a different namespace from the application.
In this section, we will introduce how to use a CUE template to create resources in a different namespace from the application.
By default, the `metadata.namespace` of K8s resource in CuE template is automatically filled with the same namespace of the application.
By default, the `metadata.namespace` of K8s resource in CUE template is automatically filled with the same namespace of the application.
If you want to create K8s resources running in a specific namespace which is different from the application's, you can set the `metadata.namespace` field.
KubeVela will create the resources in the specified namespace, and create a resourceTracker object as the owner of those resources.
@@ -26,7 +26,7 @@ spec:
parameter: {
name: string
image: string
namespace: string # make this parameter `namespace` as keyword which represents the resource maybe located in defferent namespace with application
namespace: string # make this parameter `namespace` a keyword, representing that the resource may be located in a different namespace from the application
}
output: {
apiVersion: "apps/v1"


@@ -160,7 +160,7 @@ outputs.service.spec.selector.app: reference "context" not found:
./def.cue:70:11
```
The `reference "context" not found` is a common error in this step as [`context`](/docs/cue/component?id=cue-context) is a runtime information that only exist in KubeVela controllers. In order to validate the CUE template end-to-end, we can add a mock `context` in `def.cue`.
The `reference "context" not found` is a common error in this step as [`context`](cue/component?id=cue-context) is a runtime information that only exist in KubeVela controllers. In order to validate the CUE template end-to-end, we can add a mock `context` in `def.cue`.
> Note that you need to remove all mock data when you finished the validation.


@@ -1,5 +1,5 @@
---
title: Programmable Building Blocks
title: Definition Objects
---
This documentation explains `ComponentDefinition` and `TraitDefinition` in detail.
@@ -233,4 +233,87 @@ spec:
The specification of `schematic` is explained in following CUE and Helm specific documentations.
Also, the `schematic` field enables you to render UI forms directly based on them. Please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) section to learn how.
Also, the `schematic` field enables you to render UI forms directly based on them. Please check the [Generate Forms from Definitions](openapi-v3-json-schema) section to learn how.
## Definition Revisions
In KubeVela, definition entities are mutable. Each time a `ComponentDefinition` or `TraitDefinition` is updated, a corresponding `DefinitionRevision` will be generated to snapshot this change. Hence, KubeVela allows users to reference a specific revision of a definition when declaring an application.
For example, we can design a new parameter named `args` for the `webservice` component definition by applying a new definition with the same name, as below.
```shell
$ kubectl vela show webservice
# Properties
+-------+----------------------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-------+----------------------------------------------------+----------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
... // skip
```
```shell
kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/docs/examples/definition-revision/webservice-v2.yaml
```
The change will take effect immediately.
```shell
$ kubectl vela show webservice
# Properties
+-------+----------------------------------------------------+----------+----------+---------+
| NAME | DESCRIPTION | TYPE | REQUIRED | DEFAULT |
+-------+----------------------------------------------------+----------+----------+---------+
| cmd | Commands to run in the container | []string | false | |
| args | Arguments to the cmd | []string | false | |
... // skip
```
We will see that a new definition revision is automatically generated: `v2` is the latest version, and `v1` is the previous one.
```shell
$ kubectl get definitionrevision -l="componentdefinition.oam.dev/name=webservice" -n vela-system
NAME REVISION HASH TYPE
webservice-v1 1 3f6886d9832021ba Component
webservice-v2 2 b3b9978e7164d973 Component
```
### Specify Definition Revision in Application
Users can specify the revision with the `@version` approach. For example, if a user wants to stick to using the `v1` revision of the `webservice` component:
```yaml
# testapp.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: server
type: webservice@v1
properties:
image: foo
cmd:
- sleep
- '1000'
```
If no revision is specified, KubeVela will always use the latest revision for a given component definition.
```yaml
# testapp.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: testapp
spec:
components:
- name: server
type: webservice # type: webservice@v2
properties:
image: foo
cmd:
- sleep
- '1000'
args:
- wait
```


@@ -2,13 +2,13 @@
title: How-to
---
In this section, we will introduce how to declare Helm charts as app components via `ComponentDefinition`.
In this section, we will introduce how to declare Helm charts as components via `ComponentDefinition`.
> Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates).
> Before reading this part, please make sure you've learned [the definition and template concepts](../definition-and-templates.md).
## Prerequisite
* Make sure you have enabled Helm support in the [installation guide](/docs/install).
* Make sure you have enabled Helm support in the [installation guide](../../install#4-enable-helm-support).
## Declare `ComponentDefinition`
@@ -38,9 +38,9 @@ spec:
```
In detail:
- `.spec.workload` is required to indicate the workload type of this Helm based component. Please also check for [Known Limitations](/docs/helm/known-issues?id=workload-type-indicator) if you have multiple workloads packaged in one chart.
- `.spec.workload` is required to indicate the workload type of this Helm based component. Please also check for [known limitations](known-issues?id=workload-type-indicator) if you have multiple workloads packaged in one chart.
- `.spec.schematic.helm` contains information of the Helm `release` and `repository`, which leverages `fluxcd/flux2` (see the sketch after this list).
- i.e. the pec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
- i.e. the spec of `release` aligns with [`HelmReleaseSpec`](https://github.com/fluxcd/helm-controller/blob/main/docs/api/helmrelease.md) and spec of `repository` aligns with [`HelmRepositorySpec`](https://github.com/fluxcd/source-controller/blob/main/docs/api/source.md#source.toolkit.fluxcd.io/v1beta1.HelmRepository).
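For reference, a minimal `.spec.schematic.helm` section could look like the sketch below (chart name, version, and repository URL are illustrative, borrowed from the podinfo example used later in this guide):
```yaml
schematic:
  helm:
    release:
      chart:
        spec:
          chart: "podinfo"
          version: "5.1.4"
    repository:
      url: "https://stefanprodan.github.io/podinfo"
```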
## Declare an `Application`
@@ -80,10 +80,3 @@ Check the values (`image.tag = 5.1.2`) from application's `properties` are assig
$ kubectl get deployment myapp-demo-podinfo -o json | jq '.spec.template.spec.containers[0].image'
"ghcr.io/stefanprodan/podinfo:5.1.2"
```
### Generate Form from Helm Based Components
KubeVela will automatically generate an OpenAPI v3 JSON schema based on [`values.schema.json`](https://helm.sh/docs/topics/charts/#schema-files) in the Helm chart, and store it in a `ConfigMap` in the same `namespace` as the definition object. If `values.schema.json` is not provided by the chart author, KubeVela will generate the schema based on the `values.yaml` file automatically.
Please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) guide for more details on using this schema to render GUI forms.


@@ -75,7 +75,7 @@ The known issues will be fixed in following releases.
### Rollout Strategy
For now, Helm based components cannot benefit from the [application level rollout strategy](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow). As shown in [this sample](./trait#update-an-applicatiion), if the application is updated, it can only be rolled out directly, without a canary or blue-green approach.
For now, Helm based components cannot benefit from the [rolling update API](https://github.com/oam-dev/kubevela/blob/master/design/vela-core/rollout-design.md#applicationdeployment-workflow). As shown in [this sample](./trait#update-an-applicatiion), if the application is updated, it can only be rolled out directly, without a canary or blue-green approach.
### Updating Traits Properties may Also Lead to Pods Restart


@@ -2,7 +2,7 @@
title: KEDA as Autoscaling Trait
---
> Before continuing, make sure you have learned the concepts in the [Definition Objects](definition-and-templates) and [Defining Traits with CUE](/docs/cue/trait) sections.
> Before continuing, make sure you have learned the concepts in the [Definition Objects](definition-and-templates) and [Defining Traits with CUE](cue/trait) sections.
In the following tutorial, you will learn to add [KEDA](https://keda.sh/) as a new autoscaling trait to your KubeVela based platform.
@@ -98,7 +98,7 @@ schematic:
This is a CUE based template which only exposes `type` and `value` as trait properties for users to set.
> Please check the [Defining Trait with CUE](../cue/trait) section for more details regarding CUE templating.
> Please check the [Defining Trait with CUE](cue/trait) section for more details regarding CUE templating.
## Step 2: Register New Trait to KubeVela


@@ -2,13 +2,13 @@
title: How-to
---
In this section, we will introduce how to use raw K8s objects to declare app components via `ComponentDefinition`.
In this section, we will introduce how to use a simple template to declare a Kubernetes API resource as a component.
> Before reading this part, please make sure you've learned [the definition and template concepts](../platform-engineers/definition-and-templates).
> Before reading this part, please make sure you've learned [the definition and template concepts](../definition-and-templates).
## Declare `ComponentDefinition`
Here is a raw template based `ComponentDefinition` example which provides an abstraction for the worker workload type:
Here is a simple template based `ComponentDefinition` example which provides an abstraction for the worker workload type:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -49,10 +49,10 @@ spec:
In detail, the `.spec.schematic.kube` contains the template of a workload resource and
configurable parameters.
- `.spec.schematic.kube.template` is the raw template in YAML format.
- `.spec.schematic.kube.template` is the simple template in YAML format.
- `.spec.schematic.kube.parameters` contains a set of configurable parameters. The `name`, `type`, and `fieldPaths` are required fields, `description` and `required` are optional fields.
- The parameter `name` must be unique in a `ComponentDefinition`.
- `type` indicates the data type of the value set to the field. This is a required field which will help KubeVela to generate an OpenAPI JSON schema for the parameters automatically. In raw templates, only basic data types are allowed, including `string`, `number`, and `boolean`, while `array` and `object` are not.
- `type` indicates the data type of the value set to the field. This is a required field which will help KubeVela to generate an OpenAPI JSON schema for the parameters automatically. In simple templates, only basic data types are allowed, including `string`, `number`, and `boolean`, while `array` and `object` are not.
- `fieldPaths` in the parameter specifies an array of fields within the template that will be overwritten by the value of this parameter. Fields are specified as JSON field paths without a leading dot, for example
`spec.replicas`, `spec.containers[0].image`. See the sketch after this list.
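A minimal sketch of such a `.spec.schematic.kube` section (the parameter name and field path are illustrative):
```yaml
schematic:
  kube:
    template:
      apiVersion: apps/v1
      kind: Deployment
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: sample
        template:
          metadata:
            labels:
              app: sample
          spec:
            containers:
              - name: main
                image: nginx          # overwritten by the "image" parameter
    parameters:
      - name: image
        type: string
        required: true
        description: container image to run
        fieldPaths:
          - "spec.template.spec.containers[0].image"
```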


@@ -2,7 +2,7 @@
title: Attach Traits
---
All traits in the KubeVela system work well with the Raw K8s Object Template based Component.
All traits in the KubeVela system work well with the simple template based Component.
In this sample, we will attach two traits,
[scaler](https://github.com/oam-dev/kubevela/blob/master/charts/vela-core/templates/defwithtemplate/manualscale.yaml)


@@ -51,8 +51,8 @@ data:
Specifically, this schema is generated based on the `parameter` section in the capability definition:
* For CUE based definition: the [`parameter`](../cue/component#Write-ComponentDefinition) is a keyword in CUE template.
* For Helm based definition: the [`parameter`](../helm/component#Write-ComponentDefinition) is generated from `values.yaml` in Helm chart.
* For CUE based definition: the `parameter` is a keyword in CUE template.
* For Helm based definition: the `parameter` is generated from `values.yaml` in Helm chart.
## Render Form
@@ -62,6 +62,10 @@ Below is a form rendered with `form-render`:
![](../resources/json-schema-render-example.jpg)
### Helm Based Components
If a Helm based component definition is installed in KubeVela, it will also generate an OpenAPI v3 JSON schema based on the [`values.schema.json`](https://helm.sh/docs/topics/charts/#schema-files) in the Helm chart, and store it in the `ConfigMap` following the convention above. If `values.schema.json` is not provided by the chart author, KubeVela will generate the OpenAPI v3 JSON schema based on its `values.yaml` file automatically.
# What's Next
It's by design that KubeVela supports multiple ways to define the schematic. Hence, we will explain the `.schematic` field in detail in the following guides.


@@ -6,7 +6,11 @@ This documentation will explain the core resource model of KubeVela which is ful
## Application
KubeVela introduces an `Application` CRD as its main API that captures a full application deployment. Every application is composed of multiple components with attachable operational behaviors (traits). For example:
The *Application* is the core API of KubeVela. It allows the application team to work with a single artifact to capture the complete application deployment with simplified primitives.
This provides a simpler path for on-boarding the application team to the platform without leaking low-level details of the runtime infrastructure. For example, they will be able to declare a "web service" without defining a detailed Kubernetes Deployment + Service combo each time, or claim auto-scaling requirements without referring to the underlying KEDA ScaledObject. They can also declare a cloud database with the same API if they want.
Every application is composed of multiple components with attachable operational behaviors (traits). For example:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -35,17 +39,18 @@ spec:
bucket: "my-bucket"
```
The `Application` resource in KubeVela is a LEGO-style entity and does not even have a fixed schema. Instead, it is assembled from the building block entities below, which are maintained by the platform team.
Though the application object doesn't have a fixed schema, it is a composition object assembled from several *programmable building blocks*, as shown below.
## Component
The component model in KubeVela is designed to allow *component providers* to encapsulate deployable/provisionable entities by leveraging widely adopted tools such as CUE and Helm, and to give developers an easier path to deploy complicated microservices.
Template based encapsulation is probably the most widely used approach to enable efficient application deployment and expose easier interfaces to end users. For example, many tools today encapsulate a Kubernetes *Deployment* and *Service* into a *Web Service* module, and then instantiate this module by simply providing parameters such as *image=foo* and *ports=80*. This pattern can be found in cdk8s (e.g. [`web-service.ts`](https://github.com/awslabs/cdk8s/blob/master/examples/typescript/web-service/web-service.ts)), CUE (e.g. [`kube.cue`](https://github.com/cuelang/cue/blob/b8b489251a3f9ea318830788794c1b4a753031c0/doc/tutorial/kubernetes/quick/services/kube.cue#L70)), and many widely used Helm charts (e.g. [Web Service](https://docs.bitnami.com/tutorials/create-your-first-helm-chart/)).
The component model (`ComponentDefinition` API) is designed to allow *component providers* to encapsulate deployable/provisionable entities with a wide range of tools, and hence give the application team an easier path to deploy complicated microservices across hybrid environments. A component normally carries its workload type description (i.e. `WorkloadDefinition`) and an encapsulation module with a parameter list.
> Hence, a component provider could be anyone who packages software components in the form of Helm charts or CUE modules. Think of a 3rd-party software distributor, a DevOps team, or even your CI pipeline.
In the above example, the application is composed of a Kubernetes stateless workload (component `foo`) and an Alibaba Cloud OSS bucket (component `bar`) alongside.
Components are shareable and reusable. For example, by referencing the same *Alibaba Cloud RDS* component and setting different parameter values, the application team could easily provision Alibaba Cloud RDS instances of different sizes in different availability zones.
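A sketch of that reuse, based on the `alibaba-rds` component defined earlier in this document (the application and component names, and the larger instance class, are illustrative):
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: two-databases
spec:
  components:
    - name: small-db
      type: alibaba-rds
      properties:
        instanceClass: rds.mysql.c1.large    # small instance
        username: oamtest
        secretName: small-db-conn
    - name: large-db
      type: alibaba-rds
      properties:
        instanceClass: rds.mysql.c1.xlarge   # larger instance (hypothetical class)
        username: oamtest
        secretName: large-db-conn
```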
The application team will use the `Application` entity to declare how they want to instantiate and deploy a group of certain components. In the above example, it describes an application composed of a Kubernetes stateless workload (component `foo`) and an Alibaba Cloud OSS bucket (component `bar`) alongside.
### How it Works?
@@ -99,11 +104,13 @@ spec:
Hence, the `properties` section of `backend` only exposes two parameters to fill: `image` and `cmd`; this is enforced by the `parameter` list of the `.spec.template` field of the definition.
## Traits
Traits are operational features that can be attached to a component per needs. Traits are normally considered platform features and maintained by the platform team. In the above example, `type: autoscaler` in `frontend` means the specification (i.e. `properties` section)
of this trait will be enforced by a `TraitDefinition` object named `autoscaler` as below:
Traits (`TraitDefinition` API) are operational features provided by the platform. A trait augments the component instance with operational behaviors such as load balancing policies, network ingress routing, auto-scaling policies, or upgrade strategies.
To attach a trait to a component instance, the user declares the `.type` field to reference the specific `TraitDefinition`, and the `.properties` field to set the property values of the given trait. Similarly, `TraitDefinition` also allows you to define a *template* for operational features.
In the above example, `type: autoscaler` in `frontend` means the specification (i.e. `properties` section) of this trait will be enforced by a `TraitDefinition` object named `autoscaler` as below:
```yaml
apiVersion: core.oam.dev/v1beta1
@@ -177,7 +184,7 @@ spec:
}
```
Please note that the end users of KubeVela do NOT need to know about definition objects; they learn how to use a given capability with visualized forms (or the JSON schema of parameters if they prefer). Please check the [Generate Forms from Definitions](/docs/platform-engineers/openapi-v3-json-schema) section about how this is achieved.
Please note that the application team does NOT need to know about definition objects; they learn how to use a given capability with visualized forms (or the JSON schema of parameters if they prefer). Please check the [Generate Forms from Definitions](openapi-v3-json-schema) section about how this is achieved.
## Standard Contract Behind The Abstractions


@@ -2,12 +2,13 @@
title: Terraform
---
In addition to provisioning and consuming cloud resources via [Crossplane](./cloud-services), we can also use Terraform,
which is one of the four ComponentDefinition schematic types: `cue`, `kube`, `helm`, and `terraform`.
In this documentation, we will use Alibaba Cloud's RDS (Relational Database Service), and Alibaba Cloud's OSS (Object Storage System) as examples to show how to enable cloud services as part of the application deployment.
To enable end users to create applications with Terraform, please follow these steps.
These cloud services are provided by Terraform.
## Install Terraform Controller
## Prepare Terraform Controller
<details>
Download the latest chart, like `terraform-controller-chart-0.1.4.tgz`, from the latest [releases list](https://github.com/oam-dev/terraform-controller/releases) and install it.
@@ -21,18 +22,17 @@ REVISION: 1
TEST SUITE: None
```
## Apply Provider credentials
### Apply Provider Credentials
By applying Terraform Provider credentials, the Terraform controller can be authenticated to deploy and manage cloud resources.
Please refer to [Terraform controller getting started](https://github.com/oam-dev/terraform-controller/blob/master/getting-started.md) on how to apply Provider for Alibaba Cloud or AWS.
</details>
## Register ComponentDefinition and TraitDefinition
## Register `alibaba-rds` Component
### Register ComponentDefinition `alibaba-rds` as RDS cloud resource producer
Register [alibaba-rds](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/ComponentDefinition-alibaba-rds.yaml) Component type to KubeVela.
Register [alibaba-rds](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/ComponentDefinition-alibaba-rds.yaml) to KubeVela.
```yaml
apiVersion: core.oam.dev/v1alpha2
@@ -97,9 +97,9 @@ spec:
```
### Register ComponentDefinition `alibaba-oss` as OSS cloud resource producer
### Register `alibaba-oss` Component
Register [alibaba-oss](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/ComponentDefinition-alibaba-oss.yaml) Component type to KubeVela.
Register [alibaba-oss](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/ComponentDefinition-alibaba-oss.yaml) to KubeVela.
```yaml
@@ -140,14 +140,4 @@ spec:
}
```
### Prepare TraitDefinition `service-binding` to do env-secret mapping
Apply [service-binding](https://github.com/oam-dev/kubevela/tree/master/docs/examples/terraform/cloud-resource-provision-and-consume/TraitDefinition-service-binding.yaml) to enable the service binding trait.
For a more detailed introduction, please refer to [Crossplane](https://kubevela.io/docs/platform-engineers/cloud-services#prepare-traitdefinition-service-binding-to-do-env-secret-mapping).
## Next
Now you can refer to [Terraform for end users](../end-user/terraform) to provision and consume cloud resources with Terraform.
```


@@ -15,30 +15,15 @@ $ kubectl apply -f https://raw.githubusercontent.com/oam-dev/kubevela/master/doc
application.core.oam.dev/first-vela-app created
```
The above command will apply an application to KubeVela and let it distribute the application to the proper runtime infrastructure.
Check the status until we see `status` is `running` and services are `healthy`:
```bash
$ kubectl get application first-vela-app -o yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
generation: 1
name: first-vela-app
...
namespace: default
spec:
components:
- name: express-server
type: webservice
properties:
image: crccheck/hello-world
port: 8000
traits:
- type: ingress
properties:
domain: testsvc.example.com
http:
/: 8000
...
status:
...
services:
@@ -51,22 +36,7 @@ status:
status: running
```
Underneath, the K8s resources were created:
```bash
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
express-server-v1 1/1 1 1 8m
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
express-server ClusterIP 172.21.11.152 <none> 8000/TCP 7m43s
kubernetes ClusterIP 172.21.0.1 <none> 443/TCP 116d
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
express-server <none> testsvc.example.com <your ip address> 80 7m47s
```
If your cluster has a working ingress, you can visit the service.
You can now directly visit the application (regardless of where it is running).
```
$ curl -H "Host:testsvc.example.com" http://<your ip address>/
@@ -90,8 +60,6 @@ Hello World
Here are some recommended next steps:
- Learn KubeVela starting from its [core concepts](./concepts)
- Learn more details about [`Application`](./application) and understand how it works.
- Join `#kubevela` channel in CNCF [Slack](https://cloud-native.slack.com) and/or [Gitter](https://gitter.im/oam-dev/community)
Welcome onboard and sail Vela!
- Learn KubeVela's [core concepts](./concepts)
- Learn more details about [`Application`](end-user/application) and what it can do for you.
- Learn how to attach [rollout plan](end-user/scopes/rollout-plan) to this application, or [place it to multiple runtime clusters](end-user/scopes/appdeploy).

(Three binary image files changed; contents not shown.)


@@ -1,236 +0,0 @@
---
title: Progressive Rollout
---
`Rollout` or `Upgrade` is one of the most essential "day 2" operations on any application. KubeVela, as an application centric platform, definitely needs to provide a customized solution
to alleviate the burden on application operators.
## Overview
There are several attempts at solving this problem in the cloud native community. However, none
of them provide a true rolling style upgrade. For example, Flagger supports Blue/Green, Canary
and A/B testing. Therefore, we decided to add support for batch based rolling upgrade as
the first style to support in KubeVela.
### Design Principles and Goals
We designed the KubeVela rollout solution with the following principles in mind:
- First, we want all flavors of rollout controllers to share the same core rollout
related logic. The trait and application related logic can be easily encapsulated into its own
package.
- Second, the core rollout related logic is easily extensible to support different types of
workloads, i.e. Deployment, CloneSet, StatefulSet, DaemonSet, or even customized workloads.
- Thirdly, the core rollout related logic has a well documented state machine that
does state transitions explicitly.
- Finally, the controllers can support all the rollout/upgrade needs of an application running
in a production environment, including Blue/Green, Canary and A/B testing.
## AppRollout
Here is a simple `AppRollout` that upgrades an application from v1 to v2 in three batches. The
first batch contains only 1 pod, while the remaining batches split the rest.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v2
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 50%
- replicas: 50%
batchPartition: 1
```
## User Workflow
Here is the end-to-end user experience based on Deployment:
1. Deploy application to the cluster
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:4.0.6
port: 8080
replicas: 5
```
Verify that AppRevision `test-rolling-v1` has been generated
```shell
$ kubectl get apprev test-rolling-v1
NAME AGE
test-rolling-v1 9s
```
2. Attach the following rollout plan to upgrade the application to v1
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
targetAppRevisionName: test-rolling-v1
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 10%
- replicas: 40%
- replicas: 50%
targetSize: 5
```
Users can check the status of the AppRollout and wait for the rollout to complete.
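For example (a sketch; the resource name and status field assume the `AppRollout` API shown above):
```shell
$ kubectl get approllout rolling-example -o jsonpath='{.status.rollingState}'
rolloutSucceed
```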
3. Users can continue to modify the application image tag and apply it. This will generate a new AppRevision `test-rolling-v2`
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:5.0.2
port: 8080
replicas: 5
```
Verify that AppRevision `test-rolling-v2` has been generated
```shell
$ kubectl get apprev test-rolling-v2
NAME AGE
test-rolling-v2 7s
```
4. Apply the application rollout that upgrades the application from v1 to v2
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v2
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
Users can check the status of the AppRollout and see the rollout complete, with the
AppRollout's "Rolling State" becoming `rolloutSucceed`
### Revert
5. Apply the application rollout that reverts the application from v2 to v1
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v2
targetAppRevisionName: test-rolling-v1
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
### Skip revision rollout
6. Users can apply this YAML to continue modifying the application image tag. This will generate a new AppRevision `test-rolling-v3`
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
name: test-rolling
annotations:
"app.oam.dev/rolling-components": "metrics-provider"
"app.oam.dev/rollout-template": "true"
spec:
components:
- name: metrics-provider
type: worker
properties:
cmd:
- ./podinfo
- stress-cpu=1
image: stefanprodan/podinfo:5.2.0
port: 8080
replicas: 5
```
Verify that AppRevision `test-rolling-v3` has been generated
```shell
$ kubectl get apprev test-rolling-v3
NAME AGE
test-rolling-v3 7s
```
7. Apply the application rollout that rolls out the application from v1 to v3
```yaml
apiVersion: core.oam.dev/v1beta1
kind: AppRollout
metadata:
name: rolling-example
spec:
# application (revision) reference
sourceAppRevisionName: test-rolling-v1
targetAppRevisionName: test-rolling-v3
componentList:
- metrics-provider
rolloutPlan:
rolloutStrategy: "IncreaseFirst"
rolloutBatches:
- replicas: 1
- replicas: 2
- replicas: 2
```
## State Transition
Here is the high-level state transition graph:
![](../resources/approllout-status-transition.jpg)
## Roadmap
Our recent roadmap for progressive rollout is [here](./roadmap).


@@ -0,0 +1,181 @@
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
name: webservice
namespace: vela-system
annotations:
definition.oam.dev/description: "Describes long-running, scalable, containerized services that have a stable network endpoint to receive external network traffic from customers."
spec:
workload:
definition:
apiVersion: apps/v1
kind: Deployment
schematic:
cue:
template: |
output: {
apiVersion: "apps/v1"
kind: "Deployment"
spec: {
selector: matchLabels: {
"app.oam.dev/component": context.name
if parameter.addRevisionLabel {
"app.oam.dev/appRevision": context.appRevision
}
}
template: {
metadata: labels: {
"app.oam.dev/component": context.name
if parameter.addRevisionLabel {
"app.oam.dev/appRevision": context.appRevision
}
}
spec: {
containers: [{
name: context.name
image: parameter.image
if parameter["cmd"] != _|_ {
command: parameter.cmd
}
if parameter["args"] != _|_ {
args: parameter.args
}
if parameter["env"] != _|_ {
env: parameter.env
}
if context["config"] != _|_ {
env: context.config
}
ports: [{
containerPort: parameter.port
}]
if parameter["cpu"] != _|_ {
resources: {
limits:
cpu: parameter.cpu
requests:
cpu: parameter.cpu
}
}
if parameter["volumes"] != _|_ {
volumeMounts: [ for v in parameter.volumes {
{
mountPath: v.mountPath
name: v.name
}}]
}
}]
if parameter["volumes"] != _|_ {
volumes: [ for v in parameter.volumes {
{
name: v.name
if v.type == "pvc" {
persistentVolumeClaim: {
claimName: v.claimName
}
}
if v.type == "configMap" {
configMap: {
defaultMode: v.defaultMode
name: v.cmName
if v.items != _|_ {
items: v.items
}
}
}
if v.type == "secret" {
secret: {
defaultMode: v.defaultMode
secretName: v.secretName
if v.items != _|_ {
items: v.items
}
}
}
if v.type == "emptyDir" {
emptyDir: {
medium: v.medium
}
}
}}]
}
}
}
}
}
parameter: {
// +usage=Which image would you like to use for your service
// +short=i
image: string
// +usage=Commands to run in the container
cmd?: [...string]
// +usage=Arguments to the cmd
args?: [...string]
// +usage=Which port do you want customer traffic sent to
// +short=p
port: *80 | int
// +usage=Define arguments by using environment variables
env?: [...{
// +usage=Environment variable name
name: string
// +usage=The value of the environment variable
value?: string
// +usage=Specifies a source the value of this var should come from
valueFrom?: {
// +usage=Selects a key of a secret in the pod's namespace
secretKeyRef: {
// +usage=The name of the secret in the pod's namespace to select from
name: string
// +usage=The key of the secret to select from. Must be a valid secret key
key: string
}
}
}]
// +usage=Number of CPU units for the service, like `0.5` (0.5 CPU core), `1` (1 CPU core)
cpu?: string
// +usage=If addRevisionLabel is true, the appRevision label will be added to the underlying pods
addRevisionLabel: *false | bool
// +usage=Declare volumes and volumeMounts
volumes?: [...{
name: string
mountPath: string
// +usage=Specify volume type, options: "pvc","configMap","secret","emptyDir"
type: "pvc" | "configMap" | "secret" | "emptyDir"
if type == "pvc" {
claimName: string
}
if type == "configMap" {
defaultMode: *420 | int
cmName: string
items?: [...{
key: string
path: string
mode: *511 | int
}]
}
if type == "secret" {
defaultMode: *420 | int
secretName: string
items?: [...{
key: string
path: string
mode: *511 | int
}]
}
if type == "emptyDir" {
medium: *"" | "Memory"
}
}]
}
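For orientation, here is a minimal Application sketch that consumes this definition; the application and component names and the image below are illustrative, not part of the diff:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: website            # illustrative name
spec:
  components:
    - name: frontend       # illustrative component name
      type: webservice     # the ComponentDefinition above
      properties:
        image: nginx:1.20  # fills parameter.image
        port: 80           # fills parameter.port (defaults to 80)
        cpu: "0.5"         # sets both the CPU request and limit
```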

View File

@@ -20,34 +20,47 @@ module.exports = {
 },
 {
   type: 'category',
-  label: 'Application Deployment',
+  label: 'Application Team Guide',
   collapsed: false,
   items:[
-    'application',
-    "rollout/rollout",
-    'rollout/appdeploy',
+    'end-user/application',
     {
-      'More Operations': [
-        'end-user/kubectlplugin',
-        'end-user/explore',
-        'end-user/diagnose',
-        'end-user/expose',
-        'end-user/scale',
-        'end-user/labels',
-        'end-user/sidecar',
-        'end-user/cloud-resources',
-        'end-user/terraform',
-        'end-user/volumes',
-        'end-user/monitoring',
-        'end-user/health',
-        'end-user/rollout-app'
+      'Components': [
+        'end-user/components/webservice',
+        'end-user/components/task',
+        'end-user/components/worker',
+        'end-user/components/cloud-services',
+        'end-user/components/more',
       ]
     },
+    {
+      'Traits': [
+        'end-user/traits/ingress',
+        'end-user/traits/scaler',
+        'end-user/traits/annotations-and-labels',
+        'end-user/traits/sidecar',
+        'end-user/traits/volumes',
+        'end-user/traits/service-binding',
+        'end-user/traits/more',
+      ]
+    },
+    'end-user/scopes/appdeploy',
+    'end-user/scopes/rollout-plan',
+    {
+      'Observability': [
+        'end-user/scopes/health',
+      ]
+    },
+    {
+      'Debugging': [
+        'end-user/debug',
+      ]
+    },
   ]
 },
 {
   type: 'category',
-  label: 'Platform Operation Guide',
+  label: 'Platform Team Guide',
   collapsed: false,
   items: [
     'platform-engineers/overview',
@@ -59,29 +72,30 @@ module.exports = {
       items: [
         {
           'CUE': [
-            'cue/component',
-            'cue/basic',
+            'platform-engineers/cue/component',
+            'platform-engineers/cue/basic',
           ]
         },
         {
           'Helm': [
-            'helm/component',
-            'helm/trait',
-            'helm/known-issues'
+            'platform-engineers/helm/component',
+            'platform-engineers/helm/trait',
+            'platform-engineers/helm/known-issues'
           ]
         },
         {
-          'Raw Template': [
-            'kube/component',
-            'kube/trait',
+          'Simple Template': [
+            'platform-engineers/kube/component',
+            'platform-engineers/kube/trait',
           ]
         },
         {
           type: 'category',
-          label: 'Defining Cloud Service',
+          label: 'Cloud Services',
           items: [
             'platform-engineers/cloud-services',
             'platform-engineers/terraform',
+            'platform-engineers/crossplane',
           ]
         },
       ]
@@ -90,10 +104,10 @@ module.exports = {
     type: 'category',
     label: 'Defining Traits',
     items: [
-      'cue/trait',
-      'cue/patch-trait',
-      'cue/status',
-      'cue/advanced',
+      'platform-engineers/cue/trait',
+      'platform-engineers/cue/patch-trait',
+      'platform-engineers/cue/status',
+      'platform-engineers/cue/advanced',
     ]
   },
   {
@@ -155,20 +169,6 @@ module.exports = {
       'cli/vela_cap',
     ],
   },
-  {
-    type: 'category',
-    label: 'Capabilities',
-    items: [
-      'developers/references/README',
-      'developers/references/component-types/webservice',
-      'developers/references/component-types/task',
-      'developers/references/component-types/worker',
-      'developers/references/traits/route',
-      'developers/references/traits/metrics',
-      'developers/references/traits/scaler',
-      'developers/references/restful-api/rest',
-    ],
-  },
 ],
 },
 {

View File

@@ -64,7 +64,7 @@ var (
 	svcName: map[string]interface{}{
 		"type":  workloadType,
 		"image": "wordpress:php7.4-apache",
-		"port":  "80",
+		"port":  80,
 		"cpu":   "1",
 	},
 },

View File

@@ -51,7 +51,7 @@ func Exec(cli string) (string, error) {
 	if err != nil {
 		return string(output), err
 	}
-	s := session.Wait(30 * time.Second)
+	s := session.Wait(60 * time.Second)
 	return string(s.Out.Contents()) + string(s.Err.Contents()), nil
 }

View File

@@ -160,7 +160,7 @@ spec:
       properties:
         image: crccheck/hello-world
         port: 5000
-        cpu: 0.5
+        cpu: "0.5"
     traits:
     - type: test-ingress
       properties:
@@ -535,7 +535,7 @@ var livediffResult = `---
 - - name: express-server
 + - name: new-express-server
   properties:
-+   cpu: 0.5
++   cpu: "0.5"
   image: crccheck/hello-world
 - port: 80
 + port: 5000
@@ -671,6 +671,11 @@ var livediffResult = `---
 + - name: new-express-server
 +   ports:
 +   - containerPort: 5000
++   resources:
++     limits:
++       cpu: "0.5"
++     requests:
++       cpu: "0.5"
 + status:
 +   observedGeneration: 0

go.mod
View File

@@ -37,7 +37,7 @@ require (
 	github.com/mholt/archiver/v3 v3.3.0
 	github.com/mitchellh/hashstructure/v2 v2.0.1
 	github.com/oam-dev/terraform-config-inspect v0.0.0-20210418082552-fc72d929aa28
-	github.com/oam-dev/terraform-controller v0.1.4
+	github.com/oam-dev/terraform-controller v0.1.6
 	github.com/olekukonko/tablewriter v0.0.2
 	github.com/onsi/ginkgo v1.13.0
 	github.com/onsi/gomega v1.10.3

go.sum
View File

@@ -946,8 +946,8 @@ github.com/oam-dev/stern v1.13.0-alpha h1:EVjM8Qvh6LssB6t4RZrjf9DtCq1cz+/cy6OF7f
 github.com/oam-dev/stern v1.13.0-alpha/go.mod h1:AOkvfFUv0Arz7GBi0jz7S0Jsu4K/kdvSjNsnRt1+BIg=
 github.com/oam-dev/terraform-config-inspect v0.0.0-20210418082552-fc72d929aa28 h1:tD8HiFKnt0jnwdTWjeqUnfnUYLD/+Nsmj8ZGIxqDWiU=
 github.com/oam-dev/terraform-config-inspect v0.0.0-20210418082552-fc72d929aa28/go.mod h1:Mu8i0/DdplvnjwRbAYPsc8+LRR27n/mp8VWdkN10GzE=
-github.com/oam-dev/terraform-controller v0.1.4 h1:GFd2RqfBZOmCtFzyl/QkB91X1KEfKOs6u6hO+JmlOtI=
-github.com/oam-dev/terraform-controller v0.1.4/go.mod h1:8plSKkwgTSWnPDCEaQm/PAwPKed/kVo94Z2wwmJSL6k=
+github.com/oam-dev/terraform-controller v0.1.6 h1:Uhd8iMibQ6SNeF5jQI5Z9w7/H7U7XaDwKtNtmbaHTeI=
+github.com/oam-dev/terraform-controller v0.1.6/go.mod h1:8plSKkwgTSWnPDCEaQm/PAwPKed/kVo94Z2wwmJSL6k=
 github.com/oklog/oklog v0.3.2/go.mod h1:FCV+B7mhrz4o+ueLpx+KqkyXRGMWOYEvfiXtdGtbWGs=
 github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
 github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU=

View File

@@ -1,4 +1,4 @@
-outputs: cpuhpa: {
+outputs: cpuscaler: {
 	apiVersion: "autoscaling/v2beta2"
 	kind:       "HorizontalPodAutoscaler"
 	metadata: name: context.name

View File

@@ -2,7 +2,7 @@ apiVersion: core.oam.dev/v1beta1
 kind: TraitDefinition
 metadata:
   annotations:
-    definition.oam.dev/description: "configure k8s HPA with CPU metrics for Deployment"
+    definition.oam.dev/description: "Automatically scale the component based on CPU usage."
   name: cpuscaler
   namespace: {{.Values.systemDefinitionNamespace}}
 spec:
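For context, a component attaches this trait roughly as below; the property names (`min`, `max`, `cpuUtil`) are taken from the cpuscaler capability of this release and should be read as assumptions here:

```yaml
traits:
  - type: cpuscaler
    properties:
      min: 1        # assumed: minimum replica count
      max: 10       # assumed: maximum replica count
      cpuUtil: 50   # assumed: target CPU utilization, in percent
```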

View File

@@ -2,8 +2,7 @@ apiVersion: core.oam.dev/v1beta1
 kind: TraitDefinition
 metadata:
   annotations:
-    definition.oam.dev/description: "Configures K8s ingress and service to enable web traffic for your service.
-      Please use route trait in cap center for advanced usage."
+    definition.oam.dev/description: "Enable public web traffic for the component."
   name: ingress
   namespace: {{.Values.systemDefinitionNamespace}}
 spec:
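As a usage sketch for the reworded description, the ingress trait exposes a component on a domain and maps URL paths to container ports; the domain below is a placeholder:

```yaml
traits:
  - type: ingress
    properties:
      domain: testsvc.example.com   # placeholder domain
      http:
        "/": 8000                   # route path "/" to container port 8000
```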

View File

@@ -2,7 +2,7 @@ apiVersion: core.oam.dev/v1beta1
 kind: TraitDefinition
 metadata:
   annotations:
-    definition.oam.dev/description: "Configures replicas for your service by patch replicas field."
+    definition.oam.dev/description: "Manually scale the component."
   name: scaler
   namespace: {{.Values.systemDefinitionNamespace}}
 spec:
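A minimal usage sketch for manual scaling (the replica count is arbitrary):

```yaml
traits:
  - type: scaler
    properties:
      replicas: 2   # desired number of replicas
```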

View File

@@ -2,7 +2,7 @@ apiVersion: core.oam.dev/v1beta1
 kind: TraitDefinition
 metadata:
   annotations:
-    definition.oam.dev/description: "inject a sidecar container into your app"
+    definition.oam.dev/description: "Inject a sidecar container to the component."
   name: sidecar
   namespace: {{.Values.systemDefinitionNamespace}}
 spec:
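And a usage sketch for the sidecar trait; the container name and image are illustrative:

```yaml
traits:
  - type: sidecar
    properties:
      name: logging-sidecar   # illustrative sidecar name
      image: fluentd          # illustrative image
```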

hack/website/clean-md.py
View File

@@ -0,0 +1,37 @@
import re
import os
import sys


def convert_link(md):
    data = ""
    with open(md, "r") as f:
        data = f.read()
    # find every markdown link of the form [text](target)
    url_arr = re.findall(r'(\[(.*?)\]\((.*?)\))', data)
    for url in url_arr:
        # leave absolute http(s) links untouched
        if url[2].startswith("http") or url[2].startswith("https"):
            continue
        if ".md" in url[2] or ".mdx" in url[2]:
            # strip the .md/.mdx extension from relative links
            new_path = url[2].replace(".md", "")
            new_path = new_path.replace(".mdx", "")
            new_url = "[{}]({})".format(url[1], new_path)
            data = data.replace(url[0], new_url)
            print(f"convert {url[0]} to {new_url}")
    with open(md, "w") as f:
        f.write(data)


def main(path):
    files = os.walk(path)
    for path, dir_list, file_list in files:
        for file_name in file_list:
            file_path = os.path.join(path, file_name)
            # endswith handles both extensions correctly; a fixed-width
            # slice comparison like file_path[-3:] == ".mdx" can never match
            if file_path.endswith(".md") or file_path.endswith(".mdx"):
                convert_link(file_path)


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit(1)
    main(sys.argv[1])
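Assuming the script is invoked from the repository root with a docs directory as its single argument, e.g. `python3 hack/website/clean-md.py docs`, it rewrites every relative `.md`/`.mdx` link in place and exits non-zero when the argument is missing.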

Some files were not shown because too many files have changed in this diff.