# 4 Implementing a CI/CD Pipeline
Going to a restaurant and looking at the menu with all those delicious dishes is undoubtedly fun. But in the end, we have to pick something and eat it—the whole point of going out is to have a nice meal. So far, this book has been like a menu, showing you all the possibilities and their ingredients. In this chapter, you are ready to order. *Bon appétit*.
Our goal is to get an application running on Kubernetes using CI/CD best practices.
![High Level Flow](./figures/05-high-level-steps.png){ width=95% }
Our process will include the following steps:
- **Build**: package the application into a Docker image.
- **Run end-to-end tests**: run end-to-end tests inside the image.
- **Canary deploy**: deploy the image as a canary to a fraction of the users.
- **Run functional tests**: verify the canary in production to decide if we should go ahead.
- **Deploy**: if the canary passes the test, deploy the image to all users.
- **Rollback**: if it fails, undo all changes, so we can fix a problem and try again later.
## 4.1 Docker and Kubernetes Commands
In previous chapters we've learned most of the Docker and Kubernetes commands that we'll need in this chapter. Here are a few that we haven't seen yet.
### 4.1.1 Docker Commands
A Docker *registry* stores Docker images. Docker CLI provides the following commands for managing images:
- `push` and `pull`: these commands work like in Git. We can use them to transfer images to and from the registry.
- `login`: we need to log in before we can push images. Takes a username, password, and an optional registry URL.
- `build`: creates a custom image from a `Dockerfile`.
- `tag`: renames an image or changes its tag.
- `exec`: starts a process in an already-running container. Compare it with `docker run` which starts a new container instead.
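For orientation, here is how these commands chain together in a typical session; a sketch only, with placeholder registry and image names:
``` bash
# Log in to the registry, then build, tag, and push an image
docker login -u "$USER" -p "$PASSWORD" registry.example.com
docker build -t myapp:v1 .
docker tag myapp:v1 registry.example.com/myapp:v1
docker push registry.example.com/myapp:v1
# Later, run a one-off command inside an already-running container
docker exec -it my-container sh
```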
### 4.1.2 Kubectl Commands
*Kubectl* is the primary admin CLI for Kubernetes. We'll use the following commands during deployments:
- `get service`: in chapter 2, we learned about services in Kubernetes; this shows what services are running in a cluster. For instance, we can check the status and external IP of a load balancer.
- `get events`: shows recent cluster events.
- `describe`: shows detailed information about services, deployments, nodes, and pods.
- `logs`: dumps a container's stdout messages.
- `apply`: starts a declarative deployment. Kubernetes compares the current and target states and takes the necessary steps to reconcile them.
- `rollout status`: shows the deployment progress and waits until the deployment finishes.
- `exec`: works like `docker exec`, executes a command in an already-running pod.
- `delete`: stops and removes pods, deployments, and services.
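As a quick sketch, a typical deployment session chains these commands like so (resource names are placeholders):
``` bash
kubectl apply -f manifests/deployment.yml   # declare the target state
kubectl rollout status deployment/myapp     # wait for the rollout to finish
kubectl get service                         # check the load balancer's external IP
kubectl logs my-pod                         # inspect a container's stdout
kubectl exec -it my-pod -- sh               # open a shell inside a running pod
```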
## 4.2 Setting Up The Demo Project
It's time to put the book down and get our hands busy for a few minutes. In this section, you'll fork a demo repository and install some tools.
### 4.2.1 Install Prerequisites
You'll need the following tools installed on your computer:
- **git** (_[https://git-scm.com](https://git-scm.com)_) to manage the code.
- **docker** (_[https://www.docker.com](https://www.docker.com)_) to run containers.
- **kubectl** (_[https://kubernetes.io/docs/tasks/tools/install-kubectl/](https://kubernetes.io/docs/tasks/tools/install-kubectl/)_) to control the Kubernetes cluster.
- **curl** (_[https://curl.haxx.se](https://curl.haxx.se)_) to test the application.
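You can check that everything is in place with a one-liner:
``` bash
git --version && docker version && kubectl version --client && curl --version
```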
### 4.2.2 Download The Git Repository
We have prepared a demo project on GitHub with everything that you'll need to set up a CI/CD pipeline:
- Visit _[https://github.com/semaphoreci-demos/semaphore-demo-cicd-kubernetes](https://github.com/semaphoreci-demos/semaphore-demo-cicd-kubernetes)_
- Click on the *Fork* button.
- Click on the *Clone or download* button and copy the URL.
- Clone the Git repository to your computer: `git clone YOUR_REPOSITORY_URL`.
The repository contains a microservice called "addressbook" that exposes a few API endpoints. It runs on Node.js and PostgreSQL.
You will see the following directories and files:
Use `docker-compose` to start a development environment:
``` bash
$ docker-compose up --build
```
Docker Compose builds and runs the container image as required. It also downloads and starts a PostgreSQL database for you.
The included `Dockerfile` builds a container image from an official Node.js image:
``` dockerfile
FROM node:12.16.1-alpine3.10
# (middle reconstructed: copy the app and install dependencies)
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "src/app.js"]
```
Based on this configuration, Docker runs the following steps:
- Pull the Node.js image.
- Copy the application files.
- Run `npm` inside the container to install the libraries.
- Set the starting command to serve on port 3000.
To verify that the microservice is running correctly, run the following command to create a new record:
``` bash
$ curl -w "\n" -X PUT -d "firstName=al&lastName=pacino" localhost:3000/person
{"id":1,"firstName":"al","lastName":"pacino", \
"updatedAt":"2020-03-27T10:59:09.987Z", \
"createdAt":"2020-03-27T10:59:09.987Z"}
```
To list all records:
``` bash
$ curl -w "\n" localhost:3000/all
[{"id":1,"firstName":"al","lastName":"pacino", \
"createdAt":"2020-03-27T10:59:09.987Z", \
"updatedAt":"2020-03-27T10:59:09.987Z"}]
```
### 4.2.4 Reviewing Kubernetes Manifests
In chapter 3, we learned that Kubernetes is a declarative system: instead of telling it what to do, we state what we want and trust it knows how to get there.
The `manifests` directory contains all the Kubernetes manifest files.
- `service.yml`: the LoadBalancer service. Forwards traffic from port 80 (HTTP) to port 3000.
``` yaml
apiVersion: v1
kind: Service
# (abridged; see manifests/service.yml)
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
```
- `deployment.yml`: the deployment; the directory also has some AWS-specific manifests.
``` yaml
apiVersion: apps/v1
kind: Deployment
# (abridged; the full manifest in manifests/deployment.yml ends with
# the container environment variables:)
        - name: DB_SSL
          value: "$DB_SSL"
```
The deployment manifest combines many of the Kubernetes concepts we've discussed in chapter 3:
1. A deployment called "addressbook" with rolling updates.
2. Labels on the pods to manage traffic and identify release channels.
3. Environment variables for the containers in the pod.
4. A readiness probe to detect when the pod is ready to accept connections.
Note that we're using dollar ($) variables in the file. This gives us some flexibility to reuse the same manifest for deploying to multiple environments.
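As a minimal sketch of the idea, assuming the variables are exported in the environment, the manifest can be expanded with `envsubst` (part of GNU gettext) before being applied; the demo repository wraps this step in the `apply.sh` helper we'll use later:
``` bash
# Expand the $-variables and hand the result to kubectl (illustrative values)
export DB_HOST=10.0.0.5 DB_PORT=5432 DB_SSL=true
envsubst < manifests/deployment.yml | kubectl apply -f -
```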
## 4.3 Overview of the CI/CD Workflow
A good CI/CD workflow takes planning as there are many moving parts: building, testing, and safely deploying code.
### 4.3.1 CI Pipeline: Building a Docker Image and Running Tests
Our CI/CD workflow begins with the mandatory continuous integration pipeline.
The CI pipeline performs the following steps:
- **Git checkout**: Get the latest source code.
- **Docker pull**: Get the latest available application image, if it exists, from the CI Docker registry. This optional step decreases the build time in the following step.
- **Docker build**: Create a Docker image.
- **Test**: Start the container and run tests inside.
- **Docker push**: If all tests pass, push the accepted image to the production registry.
In this process, we'll use Semaphore's built-in Docker registry. This is faster and cheaper than using a registry from a cloud vendor to work with containers in the CI/CD context.
### 4.3.2 CD Pipelines: Canary and Stable Deployments
In chapter 3, we talked about Continuous Delivery and Continuous Deployment. In chapter 2, we learned about canaries and rolling deployments. Our CI/CD workflow combines these two practices.
A canary deployment is a limited release of a new version. We'll call it the _canary release_, and the previous version, still used by the majority of users, the _stable release_.
We can do a canary deployment by connecting the canary pods to the same load balancer as the rest of the pods. As a result, a set fraction of user traffic goes to the canary. For example, if we have nine stable pods and one canary pod, 10% of the users would get the canary release.
![Canary release flow](./figures/05-flow-canary-deployment.png)
The canary release performs the following steps:
- **Copy** the image from the Semaphore registry to the production registry.
- **Canary deploy** a canary pod.
- **Test** the canary pod to ensure it's working by running automated functional tests. We may optionally also perform manual QA.
- **Stable release**: if tests pass, update the rest of the pods.
Let's take a closer look at how the stable release works.
Imagine that this is your initial state: you have three pods running version **v1**.
![Stable release via rolling update](./figures/05-transition-canary.png)
When you deploy **v2** as a canary, you scale down the number of **v1** pods to 2, to keep the total number of pods at 3.
Then, you can start a rolling update to version **v2** on the stable deployment. One at a time, all its pods are updated and restarted, until they are all running on **v2** and you can get rid of the canary.
![Completing a stable release](./figures/05-transition-stable.png)
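In kubectl terms, the transition looks roughly like this; a sketch assuming the `addressbook-stable` and `addressbook-canary` deployment names used later in this chapter, with `$V2_IMAGE` and the container name as placeholders:
``` bash
kubectl scale --replicas=2 deployment/addressbook-stable   # make room for the canary
# ...deploy the one-pod canary running v2 here...
kubectl set image deployment/addressbook-stable addressbook=$V2_IMAGE  # rolling update
kubectl rollout status deployment/addressbook-stable       # wait until all pods run v2
kubectl delete deployment/addressbook-canary               # the canary is no longer needed
```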
\newpage
## 4.4 Implementing a CI/CD Pipeline With Semaphore
In this section, we'll learn about Semaphore and how to use it to build cloud-based CI/CD pipelines.
### 4.4.1 Introduction to Semaphore
For a long time, engineers looking for a CI/CD tool had to choose between power and ease of use.
On one hand, there was predominantly Jenkins, which can do just about anything, but is difficult to use and requires companies to allocate dedicated ops teams to configure, maintain and scale it — along with the infrastructure on which it runs.
On the other hand, there were several hosted services that let developers just push their code and not worry about the rest of the process. However, these services are usually limited to running simple build and test steps, and often fall short when more elaborate continuous delivery workflows are needed, which is often the case with containers.
Semaphore (_[https://semaphoreci.com](https://semaphoreci.com)_) started as one of the simple hosted CI services, but eventually evolved to support custom continuous delivery pipelines with containers, while remaining easy to use by any developer, not just dedicated ops teams. As such, it removes all technical barriers to adopting continuous delivery at scale:
- It's a cloud-based service: there's no software for you to install and maintain.
- It provides a visual interface to model CI/CD workflows quickly.
- It's the fastest CI/CD service, due to being based on dedicated hardware instead of common cloud computing services.
- It's free for open source and small private projects.
The key benefit of using Semaphore is increased team productivity. Since there is no need to hire supporting staff or expensive infrastructure, and it runs CI/CD workflows faster than any other solution, companies that adopt Semaphore report a very large, 41x ROI compared to their previous solution [^roi].
We'll learn about Semaphore's features as we go hands-on in this chapter.
To get started with Semaphore:
- Go to _[https://semaphoreci.com](https://semaphoreci.com)_ and click to sign up with your GitHub account.
- GitHub will ask you to let Semaphore access your profile information. Allow this so that Semaphore can create an account for you.
- Semaphore will walk you through the process of creating an organization. Since software development is a team sport, all Semaphore projects belong to an organization. Your organization will have its own domain, for example, `awesomecode.semaphoreci.com`.
- Semaphore will ask you to choose between a time-limited free trial with unlimited capacity, a free plan, and an open-source plan. Since we're going to work with an open-source repository, you can choose the open-source option.
- Finally, you'll be greeted with a quick product tour.
### 4.4.2 Creating a Semaphore Project For The Demo Repository
We assume that you have previously forked the demo project from _[https://github.com/semaphoreci-demos/semaphore-demo-cicd-kubernetes](https://github.com/semaphoreci-demos/semaphore-demo-cicd-kubernetes)_ to your GitHub account.
Follow the prompts to create a project. The first time you do this, you will see a screen which asks you to choose between connecting Semaphore to your public repositories only, or to both public and private repositories on GitHub:
![Authorizing Semaphore to access your GitHub repositories](./figures/05-github-repo-auth.png){ width=95% }
To keep things simple, select the "Public repositories" option. If you later decide that you want to use Semaphore with your private projects as well, you can extend the permission at any time.
Next, Semaphore will present you with a list of repositories to choose from as the source of your project:
![Choosing a repository to set up CI/CD for](./figures/05-choose-repo.png){ width=95% }
In the search field, start typing `semaphore-demo-cicd-kubernetes` and choose that repository.
Semaphore will quickly initialize the project. Behind the scenes, it will set up everything needed to learn about every Git push and automatically pull the latest code — without you configuring anything.
The next screen lets you invite collaborators to your project. Semaphore mirrors access permissions of GitHub, so if you add some people to the GitHub repository later, you can "sync" them inside project settings on Semaphore.
![Add collaborators](./figures/05-sem-add-collaborators.png){ width=95% }
Click on *Go to Workflow Builder*. Semaphore will ask you if you want to use the existing pipelines or create one from scratch. At this point, you can choose to use the existing configuration to get directly to the final workflow. In this chapter, however, we want to learn how to create the pipelines, so we'll make a fresh start.
![Start from scratch or use existing pipeline](./figures/05-sem-existing-pipeline.png){ width=95% }
### 4.4.3 The Semaphore Workflow Builder
When choosing to start from scratch, Semaphore shows some starter workflows with popular frameworks and languages. Choose the Build Docker workflow and click on *Run this workflow*.
![Choosing a starter workflow](./figures/05-sem-starter-workflow.png){ width=95% }
Semaphore will immediately start the workflow. Wait a few seconds, and your first Docker image is ready. Congratulations!
![Starter run](./figures/05-sem-starter-run.png){ width=95% }
Of course, since we haven't told Semaphore where to store the image yet, it's lost as soon as the job ends. We'll correct that next.
See the *Edit Workflow* button in the top-right corner? Click it to open the Workflow Builder.
![Workflow builder overview](./figures/05-sem-wb-overview.png){ width=95% }
Now it's a good moment to learn how the Workflow Builder works.
**Pipelines**
Pipelines are represented on the builder as big gray boxes. Pipelines organize the workflow in blocks that are executed from left to right. Each pipeline usually has a specific objective such as test, build, or deploy. Pipelines can be chained together to make complex workflows.
**Agent**
The agent is the combination of hardware and software that powers the pipeline. The *machine type* determines the number of CPUs and the amount of memory allocated to the virtual machine. The operating system is controlled by the *Environment Type* and *OS Image* settings.
The default machine is called `e1-standard-2` and has 2 CPUs, 4 GB RAM, and runs a custom Ubuntu 18.04 image.
**Jobs and Blocks**
Blocks and jobs define what to do at each step. Jobs define the commands that do the work. Blocks contain jobs with a common objective and shared settings.
Jobs inherit their configuration from their parent block. All the jobs in a block run in parallel, each in its isolated environment. If any of the jobs fails, the pipeline stops with an error.
Blocks run sequentially; once all the jobs in a block complete, the next block starts.
### 4.4.4 The Continuous Integration Pipeline
We talked about the benefits of CI/CD in chapter 3. In the previous section, we created our very first pipeline. In this section, we'll extend it with tests and a place to store the images.
At this point, you should be seeing the Workflow Builder with the Docker Build starter workflow. Click on the *Build* block so we can see how it works.
![Build block](./figures/05-sem-build-block.png){ width=95% }
Each line on the job is a command to execute. The first command in the job is `checkout`, which is a built-in script that clones the repository at the correct revision. The next command, `docker build`, builds the image using our `Dockerfile`.
Replace the contents of the job with the following commands:
```bash
checkout
docker login -u $SEMAPHORE_REGISTRY_USERNAME -p $SEMAPHORE_REGISTRY_PASSWORD $SEMAPHORE_REGISTRY_URL
docker pull $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:latest || true
docker build --cache-from $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:latest -t $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID .
docker push $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
```
Each line has its purpose:
- Line 1 clones the repository with `checkout`.
- Line 2 logs in to the Semaphore private Docker registry.
- Line 3 pulls the Docker image tagged as `latest`.
- Line 4 builds a newer version of the image using the latest code.
- Line 5 pushes the new image to the registry.
The perceptive reader will note that we introduced special environment variables; these come predefined in every job. The variables starting with `SEMAPHORE_REGISTRY_*` are used to access the private registry. Also, we're using `SEMAPHORE_WORKFLOW_ID`, which is guaranteed to be unique for each run, to tag the image.
![Build block](./figures/05-sem-build-block-2.png){ width=95% }
Now that we have a Docker image that we can test, let's add a second block. Click on the *+Add Block* dotted box.
The Test block will have three jobs:
- Static tests.
- Integration tests.
- Functional tests.
The general sequence is the same for all tests:
1. Pull the image from the registry.
2. Start the container.
3. Run the tests.
Blocks can have a *prologue* in which we can place shared initialization commands. Open the prologue section on the right side of the block and type the following commands, which will be executed before each job:
``` bash
docker login -u $SEMAPHORE_REGISTRY_USERNAME -p $SEMAPHORE_REGISTRY_PASSWORD $SEMAPHORE_REGISTRY_URL
docker pull $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
```
Next, rename the first job as "Unit test" and type the following command, which runs JSHint, a static code analysis tool:
``` bash
docker run -it $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID npm run lint
```
Next, click on the *+Add another job* link below the job to create a new one called "Functional test". Type these commands:
``` bash
sem-service start postgres
docker run --net=host -it $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID npm run ping
docker run --net=host -it $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID npm run migrate
```
This job tests two things: that the container connects to the database (`ping`) and that it can create the tables (`migrate`). Obviously, we'll need a database for this to work; fortunately, we have `sem-service`, which lets us start database engines like MySQL, Postgres, or MongoDB with a single command.
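For example, a job can bring up a specific engine version with one command (the version argument here is illustrative):
``` bash
sem-service start postgres 11   # a PostgreSQL server, available on localhost
```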
Finally, add a third job called "Integration test" and type these commands:
``` bash
sem-service start postgres
docker run --net=host -it $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID npm run test
```
This last test runs the code in `src/database.test.js`, which checks if the application can write and delete rows in the database.
![Test block](./figures/05-sem-test-block.png){ width=95% }
Create the third block in the pipeline and call it "Push". This last job will tag the current Docker image as `latest`. Type these commands in the job:
``` bash
docker login -u $SEMAPHORE_REGISTRY_USERNAME -p $SEMAPHORE_REGISTRY_PASSWORD $SEMAPHORE_REGISTRY_URL
docker pull $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
docker tag $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:latest
docker push $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:latest
```
![Push block](./figures/05-sem-push-block.png){ width=95% }
This completes the setup of the CI pipeline.
### 4.4.5 Your First Build
We've covered a lot of things in a few pages; here, we have the chance to pause for a little bit and try the CI pipeline. Click on the *Run the workflow* button in the top-right corner and then click on *Start*.
![Run this workflow](./figures/05-sem-run-workflow.png){ width=95% }
![CI pipeline done](./figures/05-sem-ci-pipeline.png){ width=95% }
Wait until the pipeline is complete, then go to the top level of the project. Click on the *Docker Registry* button and open the repository to verify that the Docker image is there.
![Docker registry](./figures/05-sem-registry.png){ width=95% }
\newpage
## 4.5 Preparing the Cloud Services
Our project supports three clouds out of the box: Amazon AWS, Google Cloud Platform (GCP), and DigitalOcean (DO), but with small modifications, it could run in any other cloud. AWS is, by far, the most popular, but likely the most expensive for running Kubernetes. DigitalOcean is the easiest to use, while Google Cloud sits comfortably in the middle.
### 4.5.1 Provision a Kubernetes Cluster
In this tutorial, we'll deploy the application in a three-node Kubernetes cluster. You can pick a different size, but you'll need at least three nodes to run an effective canary deployment with rolling updates.
**DigitalOcean Cluster**
DO has a managed Kubernetes service but lacks a private Docker registry[^do-private-reg], so we'll use Docker Hub for the images. To create the registry:
[^do-private-reg]: At the time of writing, DigitalOcean announced a beta for a private registry offering. For more information, consult the available documentation: _<https://www.digitalocean.com/docs/kubernetes/how-to/set-up-registry>_
- Sign up for a free account on `hub.docker.com`.
- Create a public repository called "semaphore-demo-cicd-kubernetes".
To create the Kubernetes cluster:
- Sign up for an account on `digitalocean.com`.
- Create a *New Project*.
- Create a *Kubernetes* cluster: select the latest version and choose one of the available regions. Name your cluster “semaphore-demo-cicd-kubernetes”.
- Go to the *API* menu and generate a *Personal Access Token*.
We have to store the DigitalOcean access token in a secret:
1. Log in to `semaphoreci.com`.
2. On the main page, under *Configuration* select *Secrets* and click on the *Create New Secret* button.
3. The name of the secret is "do-key".
4. Add the `DO_ACCESS_TOKEN` variable and set its value to your personal access token.
5. Click on *Save changes*.
Repeat the last steps to add a second secret; call it "dockerhub" and add the following variables:
- `DOCKER_USERNAME` for your DockerHub username.
- `DOCKER_PASSWORD` with the corresponding password.
**GCP Cluster**
GCP calls the service *Kubernetes Engine*. To create the services:
- Sign up for a GCP account on `cloud.google.com`.
- Create a *New Project*. In *Project ID* type “semaphore-demo-cicd-kubernetes”.
- Go to *Kubernetes Engine* \> *Clusters* and create a cluster. Select “Zonal” in *Location Type* and select one of the available zones.
- Name your cluster “semaphore-demo-cicd-kubernetes”.
- Go to *IAM* \> *Service Accounts*.
- Generate an account with “Project Owner” permissions.
Create a secret for your GCP Access Key file:
1. Log in to `semaphoreci.com`.
2. On the main page, under *Configuration* select *Secrets* and click on the *Create New Secret* button.
3. Name the secret "gcp-key".
4. Add this file: `/home/semaphore/gcp-key.json`, uploading the GCP Access Key JSON from your computer.
5. Click on *Save changes*.
**AWS Cluster**
AWS calls its service *Elastic Kubernetes Service* (EKS). The Docker private registry is called *Elastic Container Registry* (ECR).
Creating a cluster on AWS is, unequivocally, a complex, multi-step affair. So complex that they created a specialized tool for it:
- Sign up for an AWS account at `aws.amazon.com`.
- Select one of the available regions.
- Find and go to the *ECR* service. Create a new repository called "semaphore-demo-cicd-kubernetes" and copy its address.
- Install *eksctl* from `eksctl.io` and *awscli* from `aws.amazon.com/cli` on your machine.
- Find the *IAM* console in AWS and create a user with Administrator permissions. Get its *Access Key Id* and *Secret Access Key* values.
Open a terminal and sign in to AWS:
``` bash
$ aws configure
# Create the cluster (flags are illustrative; three nodes match this chapter's setup)
$ eksctl create cluster \
    --name semaphore-demo-cicd-kubernetes \
    --nodes 3 \
    --region YOUR_AWS_REGION
```
**Note**: Select the same region for all AWS services.
Once it finishes, eksctl should have created a kubeconfig file at `$HOME/.kube/config`. Check the output from eksctl for more details.
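Before moving on, it's worth confirming that kubectl can reach the new cluster:
``` bash
kubectl get nodes   # should list the three worker nodes created by eksctl
```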
Create a secret to store the AWS Secret Access Key and the kubeconfig:
1. Log in to `semaphoreci.com`.
2. On the main page, under *Configuration* select *Secrets* and click on the *Create New Secret* button.
3. Call the secret "aws-key".
4. Add the following variables:
- `AWS_ACCESS_KEY_ID` with your AWS Access Key ID string.
- `AWS_SECRET_ACCESS_KEY` with your AWS Secret Access Key string.
5. Add the following file:
- `/home/semaphore/aws-key.yml` and upload the Kubeconfig file created by eksctl earlier.
6. Click on *Save changes*.
### 4.5.2 Provision a Database
We'll need a database to store the data. For that, we'll use a managed PostgreSQL service.
**DigitalOcean Database**
- Go to *Databases*.
- Create a PostgreSQL database. Select the same region where the cluster is running.
- In the *Connectivity* tab, whitelist the `0.0.0.0/0` network[^network-whitelist].
- Go to the *Users & Databases* tab and create a database called “demo” and a user named “demouser”.
- In the *Overview* tab, take note of the PostgreSQL IP address and port.
[^network-whitelist]: Later, when everything is working, you can restrict access to the Kubernetes nodes to increase security.
**GCP Database**
- Select *SQL* on the console menu.
- Create a new PostgreSQL database instance.
- Select the same region and zone where the Kubernetes cluster is running.
- Enable the *Private IP* network.
- Go to the *Users* tab and create a new user called "demouser".
- Go to the *Databases* tab and create a new DB called "demo".
- In the *Overview* tab, take note of the database IP address and port.
**AWS Database**
- Find the service called *RDS*.
- Create a PostgreSQL database called "demo" and type in a secure password.
- Choose the same region where the cluster is running.
- Select one of the available *templates*. The free tier is perfect for demoing the application. Under *Connectivity* select all the VPCs and subnets where the cluster is running (they should have appeared in eksctls output).
- Under *Connectivity & Security* take note of the endpoint address and port.
**Create the Database Secret**
The database secret is the same for all clouds. Create a secret to store the database credentials:
1. Log in to `semaphoreci.com`.
2. On the main page, under *Configuration* select *Secrets* and click on the *Create New Secret* button.
3. The secret name is "db-params".
4. Add the following variables:
- `DB_HOST` with the database hostname or IP.
- `DB_PORT` points to the database port (default is 5432).
- `DB_SCHEMA` for AWS should be called "postgres"; for the other clouds its value should be "demo".
- `DB_USER` for the database user.
- `DB_PASSWORD` with the password.
- `DB_SSL` should be "true" for DigitalOcean; it can be left empty for the rest.
5. Click on *Save changes*.
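If you want to smoke-test these credentials before wiring them into the pipeline, a hypothetical check with `psql` looks like this (adjust `sslmode` to match your `DB_SSL` setting):
``` bash
PGPASSWORD="$DB_PASSWORD" psql \
  "host=$DB_HOST port=$DB_PORT dbname=$DB_SCHEMA user=$DB_USER sslmode=require" \
  -c 'SELECT 1;'
```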
## 4.6 The Canary Pipeline
Now that we have our cloud services, we're ready to prepare the canary deployment pipeline. Our project includes three ready-to-use reference pipelines for deployment. They should work with the secrets as described earlier. For further details, check the `.semaphore` folder in the project. In this section, we'll focus on the DO deployment, but the process is the same for all clouds.
### 4.6.1 Continuous Deployment Pipeline
Open the Workflow Builder again to create the new pipeline.
Create a new promotion using the *+Add First Promotion* button. Promotions connect pipelines together to create complex workflows. Let's call it "Canary".
![Create promotion](./figures/05-sem-canary-create-promotion.png){ width=95% }
Check the *Enable automatic promotion* box. Now we can define the following auto-starting conditions for the new pipeline:
```
result = 'passed' and (branch = 'master' or tag =~ '^hotfix*')
```
![Automatic promotion](./figures/05-sem-canary-auto-promotion.png){ width=95% }
Click on the first block; we'll call it "Push". The push block takes the Docker image that we built earlier and uploads it to Docker Hub. The secrets and the login command will vary depending on the cloud of choice. For DigitalOcean, we'll use Docker Hub as a repository:
- Open the *Secrets* section and check the `dockerhub` secret.
- Type the following commands in the job:
```bash
docker login -u $SEMAPHORE_REGISTRY_USERNAME -p $SEMAPHORE_REGISTRY_PASSWORD $SEMAPHORE_REGISTRY_URL
docker pull $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
docker tag $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID $DOCKER_USERNAME/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
docker push $DOCKER_USERNAME/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
```
![Push block](./figures/05-sem-canary-push-block.png){ width=95% }
Create the "Deploy" block and enable the `dockerhub` secret. This block also needs two extra secrets: `db-params` and the cloud-specific access token, which is `do-key` in our case.
Open the *Environment Variables* section and create a variable called `CLUSTER_NAME` with the DigitalOcean cluster name (`semaphore-demo-cicd-kubernetes`).
Next, type the following commands in the *prologue*:
```bash
wget https://github.com/digitalocean/doctl/releases/download/v1.20.0/doctl-1.20.0-linux-amd64.tar.gz
tar xf doctl-1.20.0-linux-amd64.tar.gz
sudo cp doctl /usr/local/bin
doctl auth init --access-token $DO_ACCESS_TOKEN
doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
checkout
```
The first three lines install DigitalOcean's `doctl` CLI, the next two authenticate and save the cluster's kubeconfig, and `checkout` clones the repository.
Type the following commands in the job:
```bash
kubectl apply -f manifests/service.yml
./apply.sh manifests/deployment.yml addressbook-canary 1 $DOCKER_USERNAME/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
if kubectl get deployment addressbook-stable; then kubectl scale --replicas=2 deployment/addressbook-stable; fi
```
This is the canary job sequence:
- Create a load balancer service with `kubectl apply`.
- Execute `apply.sh`, which creates the canary deployment.
- Reduce the size of the stable deployment with `kubectl scale`.
![Deploy block](./figures/05-sem-canary-deploy-block.png){ width=95% }
Create a third block called "Functional test and migration" and enable the `do-key` secret. Repeat the environment variables and prologue steps from the previous block. This is the last block in the pipeline and it runs some tests on the canary. By combining `kubectl get pod` and `kubectl exec`, we can run commands inside the pod.
Type the following commands in the job:
```bash
kubectl exec -it $(kubectl get pod -l deployment=addressbook-canary -o name | head -n 1) -- npm run ping
kubectl exec -it $(kubectl get pod -l deployment=addressbook-canary -o name | head -n 1) -- npm run migrate
```
![Test block](./figures/05-sem-canary-test-block.png){ width=95% }
## 4.7 Your First Release
So far, so good. Let's see where we are: we built the Docker image and, after testing it, we've set up a one-pod canary deployment pipeline. In this section, we'll extend the workflow with a stable deployment pipeline.
### 4.7.1 The Stable Deployment Pipeline
The stable pipeline completes the deployment cycle. This pipeline does not introduce anything new; again, we use the `apply.sh` script to start a rolling update and `kubectl delete` to clean up the canary deployment.
Create a new pipeline (using the *Add promotion* button) branching out from the canary and name it "Deploy Stable (DigitalOcean)".
![Stable promotion](./figures/05-sem-stable-promotion.png){ width=95% }
Create the "Deploy to Kubernetes" block with the `do-key`, `db-params`, and `dockerhub` secrets. Also, create the `CLUSTER_NAME` variable and repeat the same commands in the prologue as we did in the previous step.
In the job command box, type the following lines to start the rolling deployment and delete the canary pods:
```bash
./apply.sh manifests/deployment.yml addressbook-stable 3 $DOCKER_USERNAME/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
if kubectl get deployment addressbook-canary; then kubectl delete deployment/addressbook-canary; fi
```
![Deploy block](./figures/05-sem-stable-deploy-block.png){ width=95% }
Good! We're done with the release pipeline.
### 4.7.2 Releasing the Canary
Here is the moment of truth. Will the canary work? Click on *Run the workflow* and then *Start*.
Wait until the CI pipeline is done and click on *Promote* to start the canary pipeline[^no-autopromotion].
[^no-autopromotion]: You might be wondering why the automatic promotion hasnt kicked in for the canary pipeline. The reason is that we set it to trigger only for the master branch, and the Workflow Builder by default saves all its changes on a separate branch called `setup-semaphore`.
![Canary Promote](./figures/05-sem-promote-canary.png)
![Canary Pipeline](./figures/05-sem-canary-pipeline.png){ width=80% }
Once it completes, we can check how the canary is doing:
``` bash
$ kubectl get deployment
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
addressbook-canary   1/1     1            1           8m40s
```
### 4.7.3 Releasing the Stable
In tandem with the canary deployment, we should have a dashboard to monitor errors, user reports, and performance metrics to compare against the baseline. After some pre-determined amount of time, we would reach a go vs. no-go decision: is the canaried version good enough to be promoted to stable? If so, the deployment continues. If not, after collecting the necessary error reports and stack traces, we roll back and regroup.
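Our demo stops short of full monitoring, but even a crude check can support the decision. Here is a minimal sketch, assuming the load balancer IP shown later in this chapter and treating the `/all` endpoint as a health probe; a real canary analysis would compare error rates and latency against the stable baseline:

```bash
# Minimal go/no-go check (sketch): sample the canary once per second for
# five minutes and fail if more than 1% of the requests return a 5xx code.
SERVICE_IP=34.68.150.168   # placeholder: your load balancer external IP
TOTAL=300
ERRORS=0
for i in $(seq 1 $TOTAL); do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" "http://$SERVICE_IP/all")
  if [ "$STATUS" -ge 500 ]; then
    ERRORS=$((ERRORS + 1))
  fi
  sleep 1
done
echo "errors: $ERRORS out of $TOTAL requests"
# fail (non-zero exit code) when the error rate is above 1%
[ $((ERRORS * 100)) -le $TOTAL ]
```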
Let’s say we decide to go ahead. Go on and hit the *Promote* button next to the stable pipeline.
![Stable Pipeline](./figures/05-sem-stable-pipeline.png){ width=60% }
While the block runs, you should see both the existing canary and a new “addressbook-stable” deployment:
``` bash
$ kubectl get deployment
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
addressbook-canary   1/1     1            1           110s
addressbook-stable   0/3     3            0           1s
```
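Instead of polling, you can also ask kubectl to block until the rollout completes (a convenience command, not part of the pipeline):

```bash
# wait for the stable deployment to finish rolling out
$ kubectl rollout status deployment/addressbook-stable
```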
One at a time, the number of replicas should increase until reaching the target of three; run `kubectl get deployment` again to follow the progress. Once the rollout finishes, look up the external IP of the load balancer service:

``` bash
$ kubectl get service
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
addressbook-lb   LoadBalancer   10.120.14.50   35.225.210.248   80:30479/TCP   2
kubernetes       ClusterIP      10.120.0.1     <none>           443/TCP        49m
```
We can use curl to test the API endpoint directly. For example, to create a person in the addressbook:
``` bash
$ curl -w "\n" -X PUT -d "firstName=Sammy&lastName=David Jr" 34.68.150.168/person
```
To retrieve all persons, try:
``` bash
$ curl -w "\n" 34.68.150.168/all
```
The deployment was a success; that was no small feat. Congratulations!
### 4.7.4 The Rollback Pipeline
Fortunately, Kubernetes and CI/CD make an exceptional team when it comes to recovering from errors. Let’s say that we don’t like how the canary performs or, even worse, the functional tests at the end of the canary deployment pipeline fail. In that case, wouldn’t it be great to have the system go back to the previous state automatically? What about being able to undo the change with the click of a button? This is exactly what we are going to create in this step: a rollback pipeline[^no-db-rollback].
[^no-db-rollback]: This isn’t entirely true for applications that use databases: changes to the database are not automatically rolled back. We should rely on database backups and migration scripts to manage upgrades.
Open the Workflow Builder once more and go to the end of the canary pipeline. Create a new promotion branching out of it, check the *Enable automatic promotion* box, and set this condition:
```text
result = 'failed'
```
![Rollback promotion](./figures/05-sem-rollback-promotion.png){ width=95% }
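Semaphore’s condition language can also combine checks. For example, an automatic promotion that fires only for failed runs on the master branch could use a condition like this (a sketch based on the conditions syntax):

```text
branch = 'master' AND result = 'failed'
```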
The rollback job collects information to help diagnose the problem. Create a new block called “Rollback Canary”, import the `do-key` secret, and create the `CLUSTER_NAME` variable. Repeat the prologue commands like we did before and type these lines in the job:
```bash
kubectl get all -o wide
kubectl get events
kubectl describe deployment addressbook-canary || true
kubectl logs $(kubectl get pod -l deployment=addressbook-canary -o name | head -n 1) || true
if kubectl get deployment addressbook-stable; then kubectl scale --replicas=3 deployment/addressbook-stable; fi
if kubectl get deployment addressbook-canary; then kubectl delete deployment/addressbook-canary; fi
```
![Rollback block](./figures/05-sem-rollback-block.png){ width=95% }
The first four lines print out information about the cluster. The last two undo the changes by scaling up the stable deployment and removing the canary.
Run the workflow once more and make a canary release, but this time try the rollback pipeline by clicking on its *Promote* button:
![Rollback Pipeline](./figures/05-sem-rollback-canary.png){ width=60% }
You can check the resulting pipeline at `.semaphore/rollback-canary-digitalocean.yml`.
And we’re back to normal, phew! Now it’s time to check the job logs to see what went wrong and fix it before merging to master again.
**But what if the problem is found after the stable release?** Let’s imagine that a defect sneaked its way into the stable deployment. It can happen: maybe there was some subtle bug that no one noticed until hours or days in, or some error the functional tests didn’t pick up. Is it too late? Can we go back to a previous version?
The answer is yes, we can go back to a previous version, but manual intervention is required. Do you remember that we tagged each Docker image with a unique ID (the `SEMAPHORE_WORKFLOW_ID`)? We can re-promote the stable deployment pipeline for the last good version in Semaphore. If the Docker image is no longer in the registry, we can regenerate it using the *Rerun* button in the top right corner.
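Rolling back to a known-good release thus amounts to re-running the stable deployment with the old image tag. Done by hand, it would look something like this (a sketch; `<LAST_GOOD_WORKFLOW_ID>` is a placeholder for the ID of the last good run):

```bash
# redeploy three stable pods from the last known-good image
./apply.sh manifests/deployment.yml addressbook-stable 3 \
  $DOCKER_USERNAME/semaphore-demo-cicd-kubernetes:<LAST_GOOD_WORKFLOW_ID>
```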
### 4.7.5 Troubleshooting and Tips
Even the best plans can fail; failure is certainly an option in the software development business. Maybe the canary runs into some unexpected errors, perhaps it has performance problems, or we merged the wrong branch into master. The important thing is to (1) learn something from the failure, and (2) know how to get back to solid ground.
Kubectl can give us a lot of insights into what is happening. First, get an overall picture of the resources on the cluster.
``` bash
$ kubectl get all -o wide
```

To inspect the logs of a pod, including the previous instance if it crashed and restarted:

``` bash
$ kubectl logs <pod-name>
$ kubectl logs --previous <pod-name>
```
If you need to jump into one of the containers, you can start a shell in it, as long as the pod is running:
``` bash
$ kubectl exec -it <pod-name> -- bash
```
To access a pod network from your machine, forward a port with `port-forward`, for instance:
``` bash
$ kubectl port-forward <pod-name> 8080:80
```

The pod’s port 80 is then reachable from your machine at `localhost:8080`.
These are some common error messages that you might run into:
- Manifest is invalid: it usually means that the manifest’s YAML syntax is incorrect. Use the `--dry-run` or `--validate` options of `kubectl apply` to verify it (see the example after this list).
- `ImagePullBackOff` or `ErrImagePull`: the requested image is invalid or was not found. Check that the image is in the registry and that the reference in the manifest is correct.
- `CrashLoopBackOff`: the application is crashing, and the pod is shutting down. Check the logs for application errors.
- Pod never leaves `Pending` status: this could mean that one of the Kubernetes secrets is missing.
- Log message says that “container is unhealthy”: the pod may not be passing a probe. Check that the probe definitions are correct.
- Log message says that there are “insufficient resources”: this may happen when the cluster is running low on memory or CPU.
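As an example of the first point, we can ask kubectl to parse and validate a manifest without touching the cluster (a sketch; newer kubectl releases expect a value for `--dry-run`, such as `--dry-run=client`):

```bash
# check syntax and schema without creating anything on the cluster
$ kubectl apply --validate=true --dry-run -f manifests/deployment.yml
```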
## 4.8 Summary
You have learned how to put together the puzzle of CI/CD, Docker, and Kubernetes into a practical application. In this chapter, you have put into practice all that you’ve learned in this book:
- How to set up pipelines in Semaphore CI/CD and use them to deploy to the cloud.
- How to build Docker images and start a dev environment with the help of Docker Compose.
- How to do canaried deployments and rolling updates in Kubernetes.
- How to scale deployments and how to recover when things dont go as planned.
Each of the pieces had its role: Docker brings portability, Kubernetes adds orchestration, and Semaphore CI/CD drives the test and deployment process.
## Footnotes
1. The full pipeline reference can be found at <https://docs.semaphoreci.com/article/50-pipeline-yaml>
2. To see all the available machines, go to <https://docs.semaphoreci.com/article/20-machine-types>
3. For more details on the Ubuntu image see: <https://docs.semaphoreci.com/article/32-ubuntu-1804-image>
4. You can find the full toolbox reference here: <https://docs.semaphoreci.com/article/54-toolbox-reference>
5. sem-service can start a lot of popular database engines, for the full list check: <https://docs.semaphoreci.com/article/132-sem-service-managing-databases-and-services-on-linux>
6. The full environment reference can be found at <https://docs.semaphoreci.com/article/12-environment-variables>
7. For more details on secrets consult: <https://docs.semaphoreci.com/article/66-environment-variables-and-secrets>
8. For more information on pipelines check <https://docs.semaphoreci.com/article/67-deploying-with-promotions>