update images using parametrized promotions

This commit is contained in:
tomas f
2021-08-03 11:01:24 -03:00
parent 580a6f4678
commit d127917263
14 changed files with 52 additions and 20 deletions


@@ -262,7 +262,7 @@ A good CI/CD workflow takes planning as there are many moving parts: building, t
Our CI/CD workflow begins with the mandatory continuous integration pipeline:
![Continuous Integration Flow](./figures/05-flow-docker-build.png)
![Continuous Integration Flow](./figures/05-flow-docker-build.png){ width=95% }
The CI pipeline performs the following steps:
@@ -286,7 +286,7 @@ A canary deployment is a limited release of a new version. We'll call it _cana
We can do a canary deployment by connecting the canary pods to the same load balancer as the rest of the pods. As a result, a set fraction of user traffic goes to the canary. For example, if we have nine stable pods and one canary pod, 10% of the users would get the canary release.
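Assuming the load balancer spreads requests evenly across all pods behind the service (the pod counts here are illustrative), the canary's share of traffic is simply its fraction of the total pod count:

```bash
# Pure arithmetic, no cluster needed: 9 stable pods plus 1 canary pod
# behind the same load balancer gives the canary 1/10 of the traffic.
STABLE=9
CANARY=1
TOTAL=$((STABLE + CANARY))
echo "canary traffic: $((100 * CANARY / TOTAL))%"   # → canary traffic: 10%
```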
![Canary release flow](./figures/05-flow-canary-deployment.png)
![Canary release flow](./figures/05-flow-canary-deployment.png){ width=95% }
The canary release performs the following steps:
@@ -299,10 +299,10 @@ Let's take a closer look at how the stable release works.
Imagine that this is your initial state: you have three pods running version **v1**.
![Stable release via rolling update](./figures/05-transition-canary.png)
![Stable release via rolling update](./figures/05-transition-canary.png){ width=95% }
When you deploy **v2** as a canary, you scale down the number of **v1** pods to 2, to keep the total number of pods at 3.
Then, you can start a rolling update to version **v2** on the stable deployment. One at a time, all its pods are updated and restarted, until they are all running on **v2** and you can get rid of the canary.
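The one-at-a-time progression can be pictured with a toy simulation, using plain strings for the pod versions rather than real kubectl output:

```bash
# Toy simulation of the rolling update: three pods move from v1 to v2,
# one pod per step (names and versions are illustrative, not kubectl output).
pods="v1 v1 v1"
for i in 1 2 3; do
  pods=$(echo "$pods" | sed 's/v1/v2/')   # update the first remaining v1 pod
  echo "step $i: $pods"
done
# → step 1: v2 v1 v1
# → step 2: v2 v2 v1
# → step 3: v2 v2 v2
```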
![Completing a stable release](./figures/05-transition-stable.png)
![Completing a stable release](./figures/05-transition-stable.png){ width=95% }


@@ -65,7 +65,7 @@ Creating a cluster on AWS is, unequivocally, a complex affair. So complex that t
- Sign up or log in to your AWS account at [aws.amazon.com](https://aws.amazon.com).
- Select one of the available regions.
- Find and go to the *ECR* service. Create a new repository called “semaphore-demo-cicd-kubernetes” and copy its address.
- Find and go to the *ECR* service. Create a new private repository called “semaphore-demo-cicd-kubernetes” and copy its address.
- Install *eksctl* from `eksctl.io` and *awscli* from `aws.amazon.com/cli` on your machine.
- Find the *IAM* console in AWS and create a user with Administrator permissions. Get its *Access Key Id* and *Secret Access Key* values.


@@ -16,12 +16,22 @@ Create a new promotion using the *+Add First Promotion* button. Promotions conne
Check the *Enable automatic promotion* box. Now we can define the following auto-starting conditions for the new pipeline:
```
```text
result = 'passed' and (branch = 'master' or tag =~ '^hotfix*')
```
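The same conditions grammar can express other triggers; a couple of illustrative (untested) variants, following the DSL shown above:

```text
result = 'passed' and branch = 'master'
result = 'passed' and tag =~ '^v[0-9]'
```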
![Automatic promotion](./figures/05-sem-canary-auto-promotion.png){ width=95% }
Below the promotion options, click on *+ Add Environment Variables* to create a parameter for the pipeline. Parametrization lets us set runtime values and reuse a pipeline for similar tasks.
This parameter will let us specify the number of canary pods to deploy. Set the variable name to `CANARY_PODS`, make sure "This is a required parameter" is checked, and type "1" as the default value.
![](./figures/08-param1.png){ width=60% }
Create a second parameter called `STABLE_PODS`. Set the default value to "2".
![](./figures/08-param2.png){ width=60% }
In the new pipeline, click on the first block. Let's call it “Push”. The push block takes the Docker image that we built earlier and uploads it to the private Container Registry. The secrets and the login command will vary depending on the cloud of choice.
Open the *Secrets* section and check the `do-key` secret.
@@ -33,10 +43,10 @@ docker login \
-u $SEMAPHORE_REGISTRY_USERNAME \
-p $SEMAPHORE_REGISTRY_PASSWORD \
$SEMAPHORE_REGISTRY_URL
docker pull \
$SEMAPHORE_REGISTRY_URL/demo:$SEMAPHORE_WORKFLOW_ID
docker tag \
$SEMAPHORE_REGISTRY_URL/demo:$SEMAPHORE_WORKFLOW_ID \
registry.digitalocean.com/$REGISTRY_NAME/demo:$SEMAPHORE_WORKFLOW_ID
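The retag step only rewrites the image name; the image contents are unchanged. With purely illustrative values for the registry variables, the mapping looks like this:

```bash
# Illustrative values only: these are normally injected by Semaphore secrets.
SEMAPHORE_REGISTRY_URL="registry.semaphore.example"
REGISTRY_NAME="my-registry"
SEMAPHORE_WORKFLOW_ID="0f4a"
# Source name (Semaphore's registry) and destination name (DigitalOcean's):
echo "$SEMAPHORE_REGISTRY_URL/demo:$SEMAPHORE_WORKFLOW_ID"
echo "registry.digitalocean.com/$REGISTRY_NAME/demo:$SEMAPHORE_WORKFLOW_ID"
# → registry.semaphore.example/demo:0f4a
# → registry.digitalocean.com/my-registry/demo:0f4a
```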
@@ -67,34 +77,39 @@ Add the following commands to the *job*:
```bash
doctl auth init --access-token $DO_ACCESS_TOKEN
doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
doctl registry kubernetes-manifest | kubectl apply -f -
checkout
kubectl apply -f manifests/service.yml
./apply.sh \
manifests/deployment.yml \
addressbook-canary 1 \
addressbook-canary $CANARY_PODS \
registry.digitalocean.com/$REGISTRY_NAME/demo:$SEMAPHORE_WORKFLOW_ID
if kubectl get deployment addressbook-stable; then \
kubectl scale --replicas=2 deployment/addressbook-stable; \
kubectl scale --replicas=$STABLE_PODS deployment/addressbook-stable; \
fi
```
This is the canary job sequence:
- Initialize the cluster config.
- Generate the Container Service credentials and import them into the cluster.
- Clone the GitHub repository with `checkout`.
- Create a load balancer service with `kubectl apply`.
- Execute `apply.sh`, which creates the canary deployment.
- Reduce the size of the stable deployment with `kubectl scale`.
- Execute `apply.sh`, which creates the canary deployment. The number of pods in the deployment is held in `$CANARY_PODS`.
- Reduce the size of the stable deployment with `kubectl scale` to `$STABLE_PODS`.
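A minimal sketch of what a templating script like `apply.sh` might do (the actual script ships with the demo repository and may differ; the placeholder syntax here is hypothetical):

```bash
# Hypothetical template: the real manifest in manifests/ may use other syntax.
cat > /tmp/deployment.tpl <<'EOF'
name: {{name}}
replicas: {{replicas}}
image: {{image}}
EOF

# render <template> <name> <replicas> <image> — fill the placeholders with sed.
render() {
  sed -e "s|{{name}}|$2|" -e "s|{{replicas}}|$3|" -e "s|{{image}}|$4|" "$1"
}

render /tmp/deployment.tpl addressbook-canary 1 registry.example/demo:0f4a
# → name: addressbook-canary
# → replicas: 1
# → image: registry.example/demo:0f4a
# In the real job, the rendered manifest would be piped to: kubectl apply -f -
```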
![Deploy block](./figures/05-sem-canary-deploy-block.png){ width=95% }
Create a third block called “Functional test and migration” and enable the `do-key` secret. Repeat the environment variables. This is the last block in the pipeline and it runs some automated tests on the canary. By combining `kubectl get pod` and `kubectl exec`, we can run commands inside the pod.
Create a third block called “Functional test” and enable the `do-key` secret. Repeat the environment variables. This is the last block in the pipeline and it runs some automated tests on the canary. By combining `kubectl get pod` and `kubectl exec`, we can run commands inside the pod.
Type the following commands in the job:
```bash
doctl auth init --access-token $DO_ACCESS_TOKEN
doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
doctl registry kubernetes-manifest | kubectl apply -f -
checkout
POD=$(kubectl get pod -l deployment=addressbook-canary -o name | head -n 1)
kubectl exec -it "$POD" -- npm run ping
@@ -115,6 +130,10 @@ Create a new pipeline (using the *Add promotion* button) branching out from the
![Stable promotion](./figures/05-sem-stable-promotion.png){ width=95% }
Add a parameter called `STABLE_PODS` with default value "3".
![](./figures/08-param3.png){ width=60% }
Create the “Deploy to Kubernetes” block with the `do-key` and `db-params` secrets. Also, create the `CLUSTER_NAME` and `REGISTRY_NAME` variables as we did in the previous step.
In the job command box, type the following lines to make the rolling deployment and delete the canary pods:
@@ -122,12 +141,13 @@ In the job command box, type the following lines to make the rolling deployment
```bash
doctl auth init --access-token $DO_ACCESS_TOKEN
doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
doctl registry kubernetes-manifest | kubectl apply -f -
checkout
kubectl apply -f manifests/service.yml
./apply.sh \
manifests/deployment.yml \
addressbook-stable 3 \
addressbook-stable $STABLE_PODS \
registry.digitalocean.com/$REGISTRY_NAME/demo:$SEMAPHORE_WORKFLOW_ID
if kubectl get deployment addressbook-canary; then \
@@ -143,12 +163,17 @@ Good! We're done with the release pipeline.
Here is the moment of truth. Will the canary work? Click on *Run the workflow* and then *Start*.
Wait until the CI pipeline is done and click on *Promote* to start the canary pipeline[^no-autopromotion].
Wait until the CI pipeline is done and click on *Promote* to start the canary pipeline[^no-autopromotion]. As you can see in the screenshot below, manually starting a promotion lets you customize the parameters.
[^no-autopromotion]: You might be wondering why the automatic promotion hasn't kicked in for the canary pipeline. The reason is that we set it to trigger only for the master branch, and the Workflow Builder by default saves all its changes on a separate branch called `setup-semaphore`.
![Canary Promote](./figures/05-sem-promote-canary.png)
![Canary Pipeline](./figures/05-sem-canary-pipeline.png)
![Canary Promote](./figures/08-promote1.png){ width=60% }
Press *Start Promotion* to run the canary pipeline.
![Canary Pipeline](./figures/05-sem-canary-pipeline.png){ width=60% }
Once it completes, we can check how the canary is doing.
@@ -163,11 +188,15 @@ addressbook-canary 1/1 1 1 8m40s
In tandem with the canary deployment, we should have a dashboard to monitor errors, user reports, and performance metrics to compare against the baseline. After some predetermined amount of time, we would reach a go vs. no-go decision: is the canary version good enough to be promoted to stable? If so, the deployment continues. If not, after collecting the necessary error reports and stack traces, we roll back and regroup.
Let's say we decide to go ahead. So go on and hit the *Promote* button next to the stable pipeline.
Let's say we decide to go ahead. Go on and hit the *Promote* button; you can tweak the number of final pods to deploy.
![](./figures/08-promote-stable.png){ width=95% }
The stable pipeline should be done in a few seconds.
![Stable Pipeline](./figures/05-sem-stable-pipeline.png){ width=60% }
While the block runs, you should see both the existing canary and a new “addressbook-stable” deployment:
If you're fast enough, you can see both the existing canary and a new “addressbook-stable” deployment while the block runs.
``` bash
$ kubectl get deployment
@@ -254,11 +283,14 @@ result = 'failed'
![Rollback promotion](./figures/05-sem-rollback-promotion.png){ width=95% }
Create the `STABLE_PODS` parameter with default value "3" to finalize the promotion configuration.
The rollback job collects information to help diagnose the problem. Create a new block called “Rollback Canary”, import the `do-key` secret, and create `CLUSTER_NAME` and `REGISTRY_NAME`. Type these lines in the job:
```bash
doctl auth init --access-token $DO_ACCESS_TOKEN
doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
doctl registry kubernetes-manifest | kubectl apply -f -
kubectl get all -o wide
kubectl get events
kubectl describe deployment addressbook-canary || true
@@ -266,7 +298,7 @@ POD=$(kubectl get pod -l deployment=addressbook-canary -o name | head -n 1)
kubectl logs "$POD" || true
if kubectl get deployment addressbook-stable; then \
kubectl scale --replicas=3 \
kubectl scale --replicas=$STABLE_PODS \
deployment/addressbook-stable; \
fi

BIN figures/08-param1.png (new file, 106 KiB)
BIN figures/08-param2.png (new file, 80 KiB)
BIN figures/08-param3.png (new file, 104 KiB)
BIN figures/08-promote1.png (new file, 146 KiB)