
Install KubeVela

1. Setup Kubernetes cluster

Requirements:

  • Kubernetes cluster >= v1.15.0
  • kubectl installed and configured
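You can check both requirements at once: `kubectl version` prints the client version and, if a cluster is reachable, the server version as well.

```shell
# Prints the kubectl client version and, when a cluster is reachable,
# the API server version (which must be v1.15.0 or newer).
kubectl version
```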

If you don't have a Kubernetes cluster from a cloud provider, you can use either Minikube or KinD as a local testing cluster.

NOTE: If you are not using minikube or kind, make sure to install or enable ingress-nginx yourself.

Minikube

Follow the minikube installation guide.

Once minikube is installed, create a cluster:

$ minikube start

Install ingress:

$ minikube addons enable ingress

KinD

Follow this guide to install kind.

Then spin up a kind cluster:

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF

Then install ingress for kind:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
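Before moving on, you can wait for the ingress controller to become ready. The sketch below uses the label selector from the upstream kind deploy manifest, which may differ in other versions:

```shell
# Block until the ingress-nginx controller pod reports Ready
# (selector taken from the kind ingress-nginx manifest; adjust if yours differs).
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```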

2. Install KubeVela Controller

These steps will install the KubeVela controller and its dependencies.

  1. Add helm chart repo for KubeVela

    helm repo add kubevela https://kubevelacharts.oss-cn-hangzhou.aliyuncs.com/core
    
  2. Update the chart repo

    helm repo update
    
  3. Create Namespace for KubeVela controller

    kubectl create namespace vela-system 
    
  4. Install KubeVela

    helm install -n vela-system kubevela kubevela/vela-core
    

    By default, the webhook is enabled. KubeVela relies on cert-manager to create certificates for the webhook. If cert-manager hasn't been installed, please refer to the cert-manager installation doc.

    If you don't want to rely on cert-manager, append --set useWebhook=false to disable the webhook. This is also a quick way to try KubeVela out:

    helm install -n vela-system kubevela kubevela/vela-core --set useWebhook=false
    

    You can also install cert-manager via kubevela chart by adding the argument --set installCertManager=true.

    helm install -n vela-system kubevela kubevela/vela-core --set installCertManager=true
    

    If you want to try the latest build from the master branch, run:

    helm install -n vela-system kubevela  https://kubevelacharts.oss-cn-hangzhou.aliyuncs.com/core/vela-core-latest.tgz --set useWebhook=false
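Whichever variant you chose, you can confirm the release deployed and the controller pod is running:

```shell
# List the helm release in the vela-system namespace,
# then check that the controller pod has reached Running state.
helm ls -n vela-system
kubectl get pods -n vela-system
```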
    

3. Get KubeVela CLI

Here are three ways to get the KubeVela CLI:

Script

macOS/Linux

$ curl -fsSL https://kubevela.io/install.sh | bash

Windows

$ powershell -Command "iwr -useb https://kubevela.io/install.ps1 | iex"

Homebrew

macOS/Linux

$ brew install kubevela

Download directly from releases

  • Download the latest vela binary from the releases page.
  • Unpack the vela binary and add it to $PATH to get started.
$ sudo mv ./vela /usr/local/bin/vela
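Whichever method you used, you can verify the CLI is on your PATH and working:

```shell
# Prints the installed vela CLI version; fails if vela is not on PATH.
vela version
```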

Known Issue (https://github.com/oam-dev/kubevela/issues/625): If you're using macOS, it may report that “vela” cannot be opened because the developer cannot be verified.

Newer versions of macOS are stricter about running downloaded software that isn't signed with an Apple developer key, and KubeVela doesn't support that yet.
You can open 'System Preferences' -> 'Security & Privacy' -> 'General' and click 'Allow Anyway' to temporarily fix it.
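As an alternative to the System Preferences route, a commonly used workaround (assuming the downloaded binary carries macOS's com.apple.quarantine attribute) is to remove that attribute:

```shell
# Remove the quarantine attribute that macOS adds to files
# downloaded from the internet, so Gatekeeper stops blocking the binary.
xattr -d com.apple.quarantine ./vela
```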

4. (Optional) Clean Up

Run:

$ helm uninstall -n vela-system kubevela
$ rm -r ~/.vela

This uninstalls the KubeVela server component and its dependencies, and cleans up the local CLI cache.

Then clean up CRDs (CRDs are not removed via helm by default):

$ kubectl delete crd \
  applicationconfigurations.core.oam.dev \
  applicationdeployments.core.oam.dev \
  autoscalers.standard.oam.dev \
  components.core.oam.dev \
  containerizedworkloads.core.oam.dev \
  healthscopes.core.oam.dev \
  issuers.cert-manager.io \
  manualscalertraits.core.oam.dev \
  metricstraits.standard.oam.dev \
  podspecworkloads.standard.oam.dev \
  routes.standard.oam.dev \
  scopedefinitions.core.oam.dev \
  traitdefinitions.core.oam.dev \
  workloaddefinitions.core.oam.dev