kubevela/docs/en/install.md
2020-11-11 17:13:24 -08:00

Install KubeVela

1. Setup Kubernetes cluster

Requirements:

  • Kubernetes cluster >= v1.15.0
  • kubectl installed and configured
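As a quick sanity check of both requirements (assuming kubectl's current context already points at the cluster you intend to use), you can run:

```shell
# Verify the client/server versions and that kubectl can reach the cluster.
$ kubectl version --short
$ kubectl cluster-info
```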

You may pick either Minikube or KinD as a local cluster for testing.

NOTE: If you are not using minikube or kind, please make sure to install or enable ingress-nginx yourself.

Minikube

Follow the minikube installation guide.
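On macOS with Homebrew, for example, the guide's steps boil down to something like the following (see the guide itself for other platforms):

```shell
# One way to install minikube on macOS; other platforms differ.
$ brew install minikube
```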

Once minikube is installed, create a cluster:

$ minikube start

Install ingress:

$ minikube addons enable ingress

KinD

Follow this guide to install kind.
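For reference, two common ways to install kind at the time of writing (kind v0.9.x) look like this; consult the guide for other options:

```shell
# Via a package manager on macOS:
$ brew install kind

# Or build from source with Go:
$ GO111MODULE="on" go get sigs.k8s.io/kind@v0.9.0
```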

Then spin up a kind cluster:

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF

Then install ingress for kind:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
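Before moving on, you can wait for the ingress controller to become ready; this `kubectl wait` invocation comes from the kind ingress guide and assumes the default `ingress-nginx` namespace and controller labels:

```shell
# Block until the ingress-nginx controller pod reports Ready (up to 90s).
$ kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```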

2. Get KubeVela

  1. Download the latest vela binary from the releases page.
  2. Unpack the vela binary and add it to $PATH to get started.

$ sudo mv ./vela /usr/local/bin/vela
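To confirm the binary is on your PATH, print its version (the exact output format may differ between releases):

```shell
# A successful run confirms the vela CLI is installed and executable.
$ vela version
```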

3. Initialize KubeVela

Run:

$ vela install

This will install the KubeVela server component and its dependency components.
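If you want to watch the server component come up (assuming the default vela-system namespace), you can run:

```shell
# Watch the KubeVela controller pods until they reach Running.
$ kubectl get pods -n vela-system -w
```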

(Advanced) Verify Installation Manually

Check Vela Helm Chart has been installed:

$ helm list -n vela-system
NAME      NAMESPACE   REVISION  ...
kubevela  vela-system 1         ...

Later on, check that all dependency components have been installed (they may take 5-10 minutes to show up):

$ helm list --all-namespaces
NAME                  NAMESPACE   REVISION  UPDATED                               STATUS    CHART                       APP VERSION
flagger               vela-system 1         2020-11-10 18:47:14.0829416 +0000 UTC deployed  flagger-1.1.0               1.1.0
keda                  keda        1         2020-11-10 18:45:15.6981827 +0000 UTC deployed  keda-2.0.0-rc3              2.0.0-rc2
kube-prometheus-stack monitoring  1         2020-11-10 18:45:37.9608079 +0000 UTC deployed  kube-prometheus-stack-9.4.4 0.38.1
kubevela              vela-system 1         2020-11-10 10:44:20.663582 -0800 PST  deployed

We will introduce a vela system health command to check the dependencies in the future.

(Advanced) Customize Your Installation

KubeVela installs a set of dependency components along with the Vela server component.

The config is saved in the vela-config ConfigMap in the vela-system namespace:

$ kubectl -n vela-system get cm vela-config -o yaml
apiVersion: v1
data:
  certificates.cert-manager.io: |
    {
      "repo": "jetstack",
      "url": "https://charts.jetstack.io",
      "name": "cert-manager",
      "namespace": "cert-manager",
      "version": "1.0.3"
    }
  flagger.app: |
  ...
kind: ConfigMap

Users can specify their own dependencies by editing the vela-config ConfigMap. Currently, adding a new chart or updating an existing chart requires redeploying Vela:

$ kubectl -n vela-system edit cm vela-config
...

$ helm uninstall -n vela-system kubevela
$ helm install -n vela-system kubevela <kubevela-chart>   # supply the chart you originally installed from

4. (Optional) Clean Up

Run:

$ helm uninstall -n vela-system kubevela
$ rm -r ~/.vela

This will uninstall the KubeVela server component and its dependency components. It also cleans up the local CLI cache.

Then clean up CRDs (CRDs are not removed via helm by default):

$ kubectl delete crd \
  applicationconfigurations.core.oam.dev \
  applicationdeployments.core.oam.dev \
  autoscalers.standard.oam.dev \
  certificaterequests.cert-manager.io \
  certificates.cert-manager.io \
  challenges.acme.cert-manager.io \
  clusterissuers.cert-manager.io \
  components.core.oam.dev \
  containerizedworkloads.core.oam.dev \
  healthscopes.core.oam.dev \
  issuers.cert-manager.io \
  manualscalertraits.core.oam.dev \
  metricstraits.standard.oam.dev \
  orders.acme.cert-manager.io \
  podspecworkloads.standard.oam.dev \
  routes.standard.oam.dev \
  scopedefinitions.core.oam.dev \
  servicemonitors.monitoring.coreos.com \
  traitdefinitions.core.oam.dev \
  workloaddefinitions.core.oam.dev