# Capsule Development

## Prerequisites

Make sure you have these tools installed:

- [Go 1.18+](https://golang.org/dl/)
- [Operator SDK 1.7.2+](https://github.com/operator-framework/operator-sdk), or [Kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
- [KinD](https://github.com/kubernetes-sigs/kind) or [k3d](https://k3d.io/), with `kubectl`
- [ngrok](https://ngrok.com/) (if you want to run locally with remote Kubernetes)
- [golangci-lint](https://github.com/golangci/golangci-lint)
- OpenSSL

## Setup a Kubernetes Cluster

A lightweight Kubernetes cluster on your laptop is very handy for Kubernetes-native development like Capsule.

### By `k3d`

```shell
# Install the k3d CLI with brew on macOS, or in your preferred way
$ brew install k3d

# Export your laptop's IP, e.g. after retrieving it with: ifconfig
# Do change this IP to yours
$ export LAPTOP_HOST_IP=192.168.10.101

# Spin up a bare-minimum cluster
# Refer to https://k3d.io/v4.4.8/usage/commands/k3d_cluster_create/ for more options
$ k3d cluster create k3s-capsule --servers 1 --agents 1 --no-lb --k3s-server-arg --tls-san=${LAPTOP_HOST_IP}

# Get the kubeconfig
$ k3d kubeconfig get k3s-capsule > /tmp/k3s-capsule && export KUBECONFIG="/tmp/k3s-capsule"

# This creates a cluster with 1 server and 1 worker node
$ kubectl get nodes
NAME                       STATUS   ROLES                  AGE     VERSION
k3d-k3s-capsule-server-0   Ready    control-plane,master   2m13s   v1.21.2+k3s1
k3d-k3s-capsule-agent-0    Ready    <none>                 2m3s    v1.21.2+k3s1

# Or 2 Docker containers, if you view it from the Docker perspective
$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                     NAMES
5c26ad840c62   rancher/k3s:v1.21.2-k3s1   "/bin/k3s agent"         53 seconds ago   Up 45 seconds                             k3d-k3s-capsule-agent-0
753998879b28   rancher/k3s:v1.21.2-k3s1   "/bin/k3s server --t…"   53 seconds ago   Up 51 seconds   0.0.0.0:49708->6443/tcp   k3d-k3s-capsule-server-0
```

### By `kind`

```shell
# Install the kind CLI with brew on macOS, or in your preferred way
$ brew install kind

# Prepare a kind config file with the necessary customization
$ cat > kind.yaml
...
kind-capsule-worker          Ready    <none>                 56s   v1.21.1

# Or 2 Docker containers, if you view it from the Docker perspective
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                     NAMES
7b329fd3a838   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute   0.0.0.0:54894->6443/tcp   kind-capsule-control-plane
7d50f1633555   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute                             kind-capsule-worker
```

## Fork, build, and deploy Capsule

The `fork-clone-contribute-pr` flow is common for contributing to OSS projects like Kubernetes and Capsule. Assuming you've forked the repository into your GitHub namespace, say `myuser`, you can clone it over the Git protocol. Do remember to change `myuser` to your own username.

```shell
$ git clone git@github.com:myuser/capsule.git && cd capsule
```

It's good practice to add the upstream repository as a remote too, so you can easily fetch and merge upstream changes into your fork:

```shell
$ git remote add upstream https://github.com/clastix/capsule.git
$ git remote -vv
origin    git@github.com:myuser/capsule.git (fetch)
origin    git@github.com:myuser/capsule.git (push)
upstream  https://github.com/clastix/capsule.git (fetch)
upstream  https://github.com/clastix/capsule.git (push)
```

Build and deploy:

```shell
# Download the project dependencies
$ go mod download

# Build the Capsule image
$ make docker-build

# Retrieve the built image version
$ export CAPSULE_IMAGE_VERSION=`docker images --format '{{.Tag}}' clastix/capsule`

# If k3d, load the image into the cluster with
$ k3d image import --cluster k3s-capsule clastix/capsule:${CAPSULE_IMAGE_VERSION}

# If kind, load the image into the cluster with
$ kind load docker-image --name kind-capsule clastix/capsule:${CAPSULE_IMAGE_VERSION}

# Deploy all the required manifests
# Note: 1) please retry if you see errors; 2) if you want to clean up first, run: make remove
$ make deploy

# Make sure the controller is running
$ kubectl get pod -n capsule-system
NAME                                          READY   STATUS    RESTARTS   AGE
capsule-controller-manager-5c6b8445cf-566dc   1/1     Running   0          23s

# Check the logs if needed
$ kubectl -n capsule-system logs --all-containers -l control-plane=controller-manager

# You can try deploying a Tenant too, to make sure it works end to end
$ kubectl apply -f - <<EOF
...
EOF
```
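The end-to-end check above needs a Tenant manifest to feed into `kubectl apply`, which this guide elides. As a reference only, a minimal Tenant could look like the sketch below; the `capsule.clastix.io/v1beta1` API version, the tenant name `tenant-sample`, and the owner `alice` are illustrative assumptions, so adjust them to the CRD version actually installed by `make deploy`:

```yaml
# Illustrative minimal Tenant: a single owner bound to the "alice" user.
# Field names follow the Capsule v1beta1 Tenant CRD (an assumption here).
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: tenant-sample
spec:
  owners:
    - name: alice
      kind: User
```

After applying it, `kubectl get tenants` should list the new Tenant, and namespaces created by `alice` will be accounted to it.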