Merge pull request #538 from jpetazzo/review-j3

Review j3
This commit is contained in:
Julien Girardin
2020-01-31 08:55:11 +01:00
committed by GitHub
5 changed files with 180 additions and 109 deletions


@@ -1,80 +1,102 @@
# Development Workflow
In this section we will see how to set up a local development workflow.

We will list multiple options.

Keep in mind that we don't have to use *all* these tools!

It's up to the developer to find what best suits them.
---
## What does it mean to develop on Kubernetes?

In theory, the generic workflow is the following (a shell sketch of the loop follows the list):

1. Make changes to our code or edit a Dockerfile
2. Build a new Docker image with a new tag
3. Push that Docker image to a registry
4. Update the YAML or templates referencing that Docker image
   <br/>(e.g. the corresponding Deployment, StatefulSet, Job ...)
5. Apply the YAML or templates
6. Are we satisfied with the result?
   <br/>No → go back to step 1 (or step 4 if the image is OK)
   <br/>Yes → commit and push our changes to source control
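
To make this loop concrete, here is a minimal shell sketch of one iteration; the registry, image name, and manifest path (`registry.example.com`, `myapp`, `k8s/deployment.yaml`) are placeholders, not part of the original workflow:

```shell
# One iteration of the inner loop (steps 2-5), with hypothetical names
TAG=dev-$(date +%s)                                  # pick a new tag
docker build -t registry.example.com/myapp:$TAG .    # step 2: build the image
docker push registry.example.com/myapp:$TAG          # step 3: push it
sed -i "s@image: .*@image: registry.example.com/myapp:$TAG@" k8s/deployment.yaml   # step 4
kubectl apply -f k8s/deployment.yaml                 # step 5: apply the manifest
kubectl rollout status deployment/myapp              # step 6: check the result
```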
---
## A few quirks
In practice, there are some details that make this workflow more complex.

- We need a Docker container registry to store our images
  <br/>(for open source projects, a free Docker Hub account works fine)

- We need to set image tags properly, hopefully automatically

- If we decide to use a fixed tag (like `:latest`) instead (see the sketch below):

  - we need to specify `imagePullPolicy: Always` to force image pull

  - we need to trigger a rollout when we want to deploy a new image
    <br/>(with `kubectl rollout restart` or by killing the running pods)

- We need a fast internet connection to push the images

- We need to regularly clean up the registry to avoid accumulating old images
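
For the fixed-tag option, here is a hedged sketch; the container name (`web`) and image are made up for illustration:

```yaml
# Hypothetical container spec fragment using a fixed tag
spec:
  template:
    spec:
      containers:
      - name: web
        image: registry.example.com/myapp:latest
        imagePullPolicy: Always   # pull on every pod start, even if the tag is cached on the node
```

After pushing a new image, `kubectl rollout restart deployment/web` re-creates the pods so that they pull the updated `:latest`.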
---
## When developing locally

- If we work with a local cluster, pushes and pulls are much faster

- Even better, with a one-node cluster, most of these problems disappear

- If we build and run the images on the same node, ...

  - we don't need to push images

  - we don't need a fast internet connection

  - we don't need a registry

- We can use bind mounts to edit code locally and make changes available immediately in running containers

- This means that it is much simpler to deploy to a local development environment (like Minikube, Docker Desktop ...) than to a "real" cluster
---
## Minikube
- Start a VM with the hypervisor of your choice: VirtualBox, KVM, Hyper-V ...

- Well supported by the Kubernetes community

- Lots of addons

- Easy cleanup: delete the VM with `minikube delete`

- Bind mounts depend on the underlying hypervisor
  (they may require additional setup)
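
For reference, a typical Minikube session looks like this (the driver flag is `--driver` on recent versions, `--vm-driver` on older ones):

```shell
minikube start --driver=virtualbox   # or kvm2, hyperv, ...
minikube addons enable ingress       # addons are one command away
kubectl get nodes                    # our one-node cluster
minikube delete                      # cleanup: throw away the whole VM
```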
---
## Docker Desktop

- Available for Mac and Windows

- Starts a VM with the appropriate hypervisor (even better!)

- Bind mounts work out of the box
```yaml
volumes:
  # ...
path: /C/Users/Enix/my_code_repository
```
- Ingress and other addons need to be installed manually
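
A more complete sketch of mounting local code into a pod with a `hostPath` volume, using the same path as above (volume name, image, and mount path are assumptions):

```yaml
# Hypothetical Pod spec fragment
spec:
  containers:
  - name: dev
    image: myapp:dev
    volumeMounts:
    - name: my-code
      mountPath: /src
  volumes:
  - name: my-code
    hostPath:
      path: /C/Users/Enix/my_code_repository
```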
---
## Kind
- Kubernetes-in-Docker

- Uses Docker-in-Docker to run Kubernetes
  <br/>(technically, it's more like containerd-in-Docker)

- We don't get a real Docker Engine (and cannot build Dockerfiles)

- Single-node by default, but multi-node clusters are possible

- Very convenient to test Kubernetes deployments when only Docker is available
  <br/>(e.g. on public CI services like Travis, Circle, GitHub Actions ...)

- Bind mounts require extra configuration

- Some addons need extra configuration too; others require fully custom setup

- Doesn't work with BTRFS (sorry, BTRFS users 😢)
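
A hedged sketch of a kind config with several nodes and a bind mount (the `apiVersion` depends on the kind release; paths are made up):

```yaml
# kind-config.yaml (illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraMounts:
  - hostPath: /home/user/my_code_repository
    containerPath: /src
```

We would then create the cluster with `kind create cluster --config kind-config.yaml`.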
---
## microk8s
- Distribution of Kubernetes using Snap
  (Snap is a container-like method to install software)

- Available on Ubuntu and derivatives

- Bind mounts work natively (but require extra setup if we run in a VM)

- Big list of addons; easy to install
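
A typical installation looks like this (the addons listed are just examples; newer releases also accept `microk8s enable` without the dot):

```shell
sudo snap install microk8s --classic
sudo microk8s.enable dns storage ingress registry   # enable the addons we need
microk8s.kubectl get nodes                          # kubectl is bundled
```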
---
## Proper tooling
We have our neat one-node cluster running. What do we do now?

The simple workflow seems to be:

- set up a one-node cluster with one of the methods mentioned previously,

- find the remote Docker endpoint,

- configure the `DOCKER_HOST` variable to use that endpoint (see the sketch below),

- follow the previous 7-step workflow.
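
With Minikube, for example, the first two steps boil down to one command:

```shell
eval $(minikube docker-env)     # exports DOCKER_HOST (and TLS settings) for the cluster's Docker Engine
docker build -t myapp:dev .     # hypothetical image name; built directly on the cluster node
# no push needed: the image is already on the node
```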
Can we do better?
---
## Skaffold
Note: Draft and Forge are tools with some functional overlap

<!-- FIXME Draft seems to be abandoned. There are also Tilt, Garden ... -->


@@ -1,81 +1,99 @@
# Registries
- There are lots of options to ship our container images to a registry

- We can group them depending on some characteristics:

  - SaaS or self-hosted

  - with or without a build system
---
## Docker registry
- Self-hosted and [open source](https://github.com/docker/distribution)

- Runs in a single Docker container

- Supports multiple storage backends

- Supports basic authentication out of the box
- [Other authentication schemes](https://docs.docker.com/registry/deploying/#more-advanced-authentication) through proxy or delegation
- No build system
- To run it with the Docker engine:

  ```shell
  docker run -d -p 5000:5000 --name registry registry:2
  ```

- Or use the dedicated plugin in minikube, microk8s, etc.
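
Once the registry is running, using it is just a matter of tagging images with its address (the image name is hypothetical):

```shell
docker tag myapp:dev localhost:5000/myapp:dev
docker push localhost:5000/myapp:dev
```

Docker treats `localhost` registries as insecure by default, so no TLS setup is needed for this quick test.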
---
## Harbor
- Self-hosted and [open source](https://github.com/goharbor/harbor)

- Supports both Docker images and Helm charts

- Advanced authentication mechanisms

- Multi-site synchronisation

- Vulnerability scanning

- No build system
- To run it with Helm:

  ```shell
  helm repo add harbor https://helm.goharbor.io
  helm install my-release harbor/harbor
  ```
---
## GitLab
- Available both as a SaaS product and self-hosted

- SaaS product is free for open source projects; paid subscription otherwise

- Some parts are [open source](https://gitlab.com/gitlab-org/gitlab-foss/)

- Integrated CI

- No build system (but a custom build system can be hooked to the CI)

- To run it with Helm:

  ```shell
  helm repo add gitlab https://charts.gitlab.io/
  helm install gitlab gitlab/gitlab
  ```
---
## Docker Hub

- SaaS product: [hub.docker.com](https://hub.docker.com)

- Free for public images; paid subscription for private ones

- Build system included
---
## Quay
- Available both as a SaaS product ([quay.io](https://quay.io)) and self-hosted (Quay)

- SaaS product is free for public repositories; paid subscription otherwise

- Some components of Quay and quay.io are open source
  (see [Project Quay](https://www.projectquay.io/) and the [announcement](https://www.redhat.com/en/blog/red-hat-introduces-open-source-project-quay-container-registry))

- Build system included


@@ -4,12 +4,12 @@ From years, decades, (centuries !), software development has followed the same p
- Development

- Testing

- Packaging

- Shipping

- Deployment

We will see how this maps to the Kubernetes world.


@@ -1,17 +1,17 @@
# Automation && CI/CD
What we've done so far:

- development of our application

- manual testing, and exploration of automated testing strategies

- packaging in a container image

- shipping that image to a registry

What still needs to be done:

- deployment of our application

- automation of the whole build / ship / run cycle (sketched below)
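
As an illustration of that last point, a minimal `.gitlab-ci.yml` sketch that builds and pushes an image on every commit (job name and Docker versions are arbitrary; the `CI_*` variables are provided by GitLab):

```yaml
build-image:
  image: docker:19.03
  services:
    - docker:19.03-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```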


@@ -1,14 +1,24 @@
# Testing
There are multiple levels of testing:

- unit testing (many small tests that run in isolation),

- integration testing (bigger tests involving multiple components),

- functional or end-to-end testing (even bigger tests involving the whole app).

In this section, we will focus on *unit testing*, where each test case
should (ideally) be completely isolated from other components and system
interaction: no real database, no real backend, *mocks* everywhere.

(For a good discussion on the merits of unit testing, we can read
[Just Say No to More End-to-End Tests](https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html).)

Unfortunately, this ideal scenario is easier said than done ...
---
## Multi-stage build
```dockerfile
# ...
RUN <build code>
CMD, EXPOSE ...
```
- This leverages the Docker cache: if the code doesn't change, the tests don't need to run

- If the tests require a database or other backend, we can use `docker build --network`

- If the tests fail, the build fails, and no image is generated (see the sketch below)
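
A sketch of the general pattern, assuming a hypothetical Python app (base image, file names, and commands are illustrative):

```dockerfile
FROM python:3.8 AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Dedicated test stage: if pytest fails, the build stops and no image is produced
FROM build AS test
RUN pip install pytest && pytest -v

# Final stage (with the classic builder, only reached if the test stage succeeded)
FROM build AS release
EXPOSE 8000
CMD ["python", "app.py"]
```

Note that BuildKit only builds the stages that the final stage depends on, so with BuildKit we may need `docker build --target test` to make sure the tests actually run.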
---
## Docker Compose
```shell
docker-compose build && docker-compose run project pytest -v
```
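
For reference, a Compose file for this kind of setup could look like this (service names, database image, and credentials are assumptions):

```yaml
version: "3"
services:
  project:
    build: .
    environment:
      DATABASE_URL: postgresql://postgres:secret@db:5432/postgres
    depends_on:
      - db
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: secret
```

`docker-compose run project pytest -v` then runs the tests inside the `project` container, with the `db` service reachable over the Compose network.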
---
## Skaffold/Container-structure-test
- The `test` field of `skaffold.yaml` instructs Skaffold to run tests against our image (example below)

- It uses the [container-structure-test](https://github.com/GoogleContainerTools/container-structure-test) tool

- It allows us to run custom commands

- Unfortunately, there is no way to run other Docker images
  (to start a database or a backend that we need to run tests)
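
A hedged sketch of what this looks like; the image name and file paths are made up, and the `apiVersion`, `kind`, and `build` sections of `skaffold.yaml` are omitted:

```yaml
# Fragment of skaffold.yaml
test:
  - image: myapp
    structureTests:
      - ./structure-tests.yaml
```

```yaml
# structure-tests.yaml (container-structure-test format)
schemaVersion: 2.0.0
commandTests:
  - name: "interpreter is available"
    command: "python"
    args: ["--version"]
fileExistenceTests:
  - name: "application code is present"
    path: "/app/main.py"
    shouldExist: true
```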