Compare commits

..

29 Commits

Author SHA1 Message Date
Stefan Prodan
4463d41744 Update traffic routing
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 18:35:53 +03:00
Stefan Prodan
66663c2aaf Update traffic routing
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 18:34:50 +03:00
Stefan Prodan
d34a504516 Update intro
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 18:23:54 +03:00
Stefan Prodan
9fc5f9ca31 Update copyright
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 18:02:25 +03:00
Stefan Prodan
3cf9fe7cb7 Bump actions
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 17:58:02 +03:00
Stefan Prodan
6d57b5c490 Remove deprecated tutorials
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-10-16 17:54:41 +03:00
Stefan Prodan
7254a5a7f6 Merge pull request #1250 from staceypotter/patch-1
fix typo on readme
2022-08-23 17:18:33 +03:00
Stacey Potter
671016834d fix typo on readme
Signed-off-by: Stacey Potter <50154848+staceypotter@users.noreply.github.com>
2022-08-03 09:29:07 -04:00
Stefan Prodan
41dc056c87 Add Kuma
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-02-14 15:31:31 +02:00
Daniel Holbach
6757553685 Merge pull request #1028 from politician/patch-1
Fix typo (closes #1027)
2021-10-05 09:39:05 +02:00
Romain Barissat
1abceb8334 Fix typo
Closes #1027

Signed-off-by: Romain Barissat <romain-noreply@barissat.com>
2021-10-05 14:29:30 +07:00
Stefan Prodan
0939a1bcc9 Add metric providers section
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2021-09-20 11:49:01 +03:00
Stefan Prodan
d91012d79c Add OSM to homepage
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2021-08-23 15:30:00 +03:00
Stefan Prodan
6e9fe2c584 Flagger v1.6
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2021-02-26 17:04:27 +02:00
Stefan Prodan
ccad05c92f Add Traefik tutorial
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2020-12-07 12:13:08 +02:00
stefanprodan
8b9929008b Add Skipper ingress 2020-08-20 09:14:36 +03:00
stefanprodan
a881446252 Fix docs links 2020-05-08 13:20:53 +03:00
stefanprodan
183c73f2c4 Update website to v1.0.0-rc.1 2020-03-05 17:16:27 +02:00
stefanprodan
3e3eaebbf2 Add Contour to ingress providers 2020-01-06 16:06:55 +02:00
stefanprodan
bcc5bbf6fa Fix CI 2019-11-13 15:27:14 +02:00
stefanprodan
42c3052247 Update deployment strategies and tutorials list 2019-11-13 13:38:27 +02:00
Stefan Prodan
3593e20ad5 Add Helm v3 workshop link 2019-09-22 00:14:53 +03:00
stefanprodan
1f0819465b Link to fluxcd/flux repo 2019-08-20 19:23:01 +03:00
stefanprodan
7e48d7537f Add Helm Operator tutorial 2019-08-19 18:59:37 +03:00
stefanprodan
45c69ce3f7 Fix typo 2019-08-19 18:42:40 +03:00
stefanprodan
8699a07de9 Move meta to vue module 2019-08-19 17:39:48 +03:00
stefanprodan
fb92d31744 Remove node_modules 2019-08-19 17:30:55 +03:00
stefanprodan
bb7c4b1909 Remove hero image 2019-08-19 17:28:50 +03:00
stefanprodan
84d74d1eb5 Add Flagger website 2019-08-19 17:03:43 +03:00
558 changed files with 7572 additions and 72431 deletions


@@ -1,14 +0,0 @@
coverage:
  status:
    project:
      default:
        target: auto
        threshold: 50
        base: auto
    patch: off
comment:
  require_changes: true
  branches:
    - "!docs"
    - "!release"


@@ -1,15 +0,0 @@
root: ./docs/gitbook
redirects:
  how-it-works: usage/how-it-works.md
  usage/progressive-delivery: tutorials/istio-progressive-delivery.md
  usage/ab-testing: tutorials/istio-ab-testing.md
  usage/blue-green: tutorials/kubernetes-blue-green.md
  usage/appmesh-progressive-delivery: tutorials/appmesh-progressive-delivery.md
  usage/linkerd-progressive-delivery: tutorials/linkerd-progressive-delivery.md
  usage/contour-progressive-delivery: tutorials/contour-progressive-delivery.md
  usage/gloo-progressive-delivery: tutorials/gloo-progressive-delivery.md
  usage/nginx-progressive-delivery: tutorials/nginx-progressive-delivery.md
  usage/skipper-progressive-delivery: tutorials/skipper-progressive-delivery.md
  usage/crossover-progressive-delivery: tutorials/crossover-progressive-delivery.md
  usage/traefik-progressive-delivery: tutorials/traefik-progressive-delivery.md

.github/CODEOWNERS

@@ -1 +0,0 @@
* @stefanprodan


@@ -1,49 +0,0 @@
name: build
on:
  workflow_dispatch:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main
jobs:
  container:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Restore Go cache
        uses: actions/cache@v1
        with:
          path: ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-
      - name: Setup Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.15.x
      - name: Download modules
        run: |
          go mod download
          go install golang.org/x/tools/cmd/goimports
      - name: Run linters
        run: make test-fmt test-codegen
      - name: Run tests
        run: go test -race -coverprofile=coverage.txt -covermode=atomic $(go list ./pkg/...)
      - name: Check if working tree is dirty
        run: |
          if [[ $(git diff --stat) != '' ]]; then
            git --no-pager diff
            echo 'run make test and commit changes'
            exit 1
          fi
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          file: ./coverage.txt
      - name: Build container image
        run: docker build -t test/flagger:latest .


@@ -1,37 +0,0 @@
name: e2e
on:
  workflow_dispatch:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main
jobs:
  kind:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        provider:
          - istio
          - linkerd
          - contour
          - nginx
          - traefik
          - gloo
          - skipper
          - kubernetes
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Kubernetes
        uses: engineerd/setup-kind@v0.5.0
      - name: Build container image
        run: |
          docker build -t test/flagger:latest .
          kind load docker-image test/flagger:latest
      - name: Run tests
        run: |
          ./test/${{ matrix['provider'] }}/run.sh


@@ -1,74 +0,0 @@
name: release
on:
  push:
    tags:
      - 'v*'
jobs:
  build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Prepare
        id: prep
        run: |
          VERSION=$(grep 'VERSION' pkg/version/version.go | awk '{ print $4 }' | tr -d '"')
          CHANGELOG="https://github.com/fluxcd/flagger/blob/main/CHANGELOG.md#$(echo $VERSION | tr -d '.')"
          echo ::set-output name=BUILD_DATE::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
          echo ::set-output name=VERSION::${VERSION}
          echo ::set-output name=CHANGELOG::${CHANGELOG}
      - name: Setup QEMU
        uses: docker/setup-qemu-action@v1
        with:
          platforms: all
      - name: Setup Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
        with:
          buildkitd-flags: "--debug"
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: fluxcdbot
          password: ${{ secrets.GHCR_TOKEN }}
      - name: Publish image
        uses: docker/build-push-action@v2
        with:
          push: true
          builder: ${{ steps.buildx.outputs.name }}
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          build-args: |
            REVISON=${{ github.sha }}
          tags: |
            ghcr.io/fluxcd/flagger:${{ steps.prep.outputs.VERSION }}
          labels: |
            org.opencontainers.image.title=${{ github.event.repository.name }}
            org.opencontainers.image.description=${{ github.event.repository.description }}
            org.opencontainers.image.url=${{ github.event.repository.html_url }}
            org.opencontainers.image.source=${{ github.event.repository.html_url }}
            org.opencontainers.image.revision=${{ github.sha }}
            org.opencontainers.image.version=${{ steps.prep.outputs.VERSION }}
            org.opencontainers.image.created=${{ steps.prep.outputs.BUILD_DATE }}
      - name: Check images
        run: |
          docker buildx imagetools inspect ghcr.io/fluxcd/flagger:${{ steps.prep.outputs.VERSION }}
      - name: Publish Helm charts
        uses: stefanprodan/helm-gh-pages@v1.3.0
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          charts_url: https://flagger.app
          linting: off
      - name: Create release
        uses: actions/create-release@latest
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref }}
          release_name: ${{ github.ref }}
          draft: false
          prerelease: false
          body: |
            [CHANGELOG](${{ steps.prep.outputs.CHANGELOG }})

.github/workflows/website.yaml

@@ -0,0 +1,53 @@
name: website
on:
  push:
    branches:
      - website
jobs:
  vuepress:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v5
      - uses: actions/setup-node@v5
        with:
          node-version: '12.x'
      - id: yarn-cache-dir-path
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - uses: actions/cache@v4
        id: yarn-cache
        with:
          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Build
        run: |
          yarn add -D vuepress
          yarn docs:build
          mkdir $HOME/site
          rsync -avzh docs/.vuepress/dist/ $HOME/site
      - name: Publish
        env:
          REPOSITORY: "https://x-access-token:${{ secrets.BOT_GITHUB_TOKEN }}@github.com/fluxcd/flagger.git"
        run: |
          tmpDir=$(mktemp -d)
          pushd $tmpDir >& /dev/null
          git clone ${REPOSITORY}
          cd flagger
          git config user.name ${{ github.actor }}
          git config user.email ${{ github.actor }}@users.noreply.github.com
          git remote set-url origin ${REPOSITORY}
          git checkout gh-pages
          rm -rf assets
          rsync -avzh $HOME/site/ .
          rm -rf node_modules
          git add .
          git commit -m "Publish website"
          git push origin gh-pages
          popd >& /dev/null
          rm -rf $tmpDir

.gitignore

@@ -1,23 +1,65 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Test binary, build with `go test -c`
*.test
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
.DS_Store
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
# nyc test coverage
.nyc_output
# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# TypeScript v1 declaration files
typings/
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
# next.js build output
.next
bin/
_tmp/
artifacts/gcloud/
.idea
Makefile.dev
vendor
coverage.txt
docs/.vuepress/dist/
Makefile.dev


@@ -1,14 +0,0 @@
builds:
  - main: ./cmd/flagger
    binary: flagger
    ldflags: -s -w -X github.com/fluxcd/flagger/pkg/version.REVISION={{.Commit}}
    goos:
      - linux
    goarch:
      - amd64
    env:
      - CGO_ENABLED=0
archives:
  - name_template: "{{ .Binary }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
    files:
      - none*


@@ -1,953 +0,0 @@
# Changelog
All notable changes to this project are documented in this file.
## 1.6.0
**Release date:** 2021-01-05
**Breaking change:** the minimum supported version of Kubernetes is v1.16.0.
This release comes with support for A/B testing using [Gloo Edge](https://docs.flagger.app/tutorials/gloo-progressive-delivery)
HTTP headers based routing.
#### Features
- A/B testing support for Gloo Edge ingress controller
[#765](https://github.com/fluxcd/flagger/pull/765)
#### Improvements
- Upgrade the Kubernetes packages to `v1.20.1` and Flagger's CRDs to `apiextensions.k8s.io/v1`
[#772](https://github.com/fluxcd/flagger/pull/772)
## 1.5.0
**Release date:** 2020-12-22
This is the first release of Flagger under [fluxcd](https://github.com/fluxcd) organization (CNCF sandbox).
Starting with this version, Flagger can be installed on multi-arch Kubernetes clusters (Linux AMD64/ARM64/ARM).
The multi-arch image is available on GitHub Container Registry
at [ghcr.io/fluxcd/flagger](https://github.com/orgs/fluxcd/packages/container/package/flagger).
#### Improvements
- Publish multi-arch image to GitHub Container Registry
[#763](https://github.com/fluxcd/flagger/pull/763)
- Migrate CI to GitHub Actions
[#754](https://github.com/fluxcd/flagger/pull/754)
- Add e2e tests for label prefix inclusion
[#762](https://github.com/fluxcd/flagger/pull/762)
- Added PodDisruptionBudget to the Flagger Helm chart
[#749](https://github.com/fluxcd/flagger/pull/749)
## v1.4.2
**Release date:** 2020-12-09
Fix Istio virtual service delegation
#### Improvements
- Add Prometheus basic-auth config to docs
[#746](https://github.com/fluxcd/flagger/pull/746)
- Update Prometheus to 2.23.0 and Grafana to 7.3.4
[#747](https://github.com/fluxcd/flagger/pull/747)
#### Fixes
- Fix for VirtualService delegation when analysis is enabled
[#745](https://github.com/fluxcd/flagger/pull/745)
## 1.4.1 (2020-12-08)
Prevent primary ConfigMaps and Secrets from being pruned by Flux
#### Improvements
- Apply label prefix rules for ConfigMaps and Secrets
[#743](https://github.com/fluxcd/flagger/pull/743)
## 1.4.0 (2020-12-07)
Add support for Traefik ingress controller
#### Features
- Add Traefik support for progressive traffic shifting with `TraefikService`
[#736](https://github.com/fluxcd/flagger/pull/736)
- Add support for HPA v2beta2 behaviors
[#740](https://github.com/fluxcd/flagger/pull/740)
## 1.3.0 (2020-11-23)
Add support for custom weights when configuring traffic shifting
#### Features
- Support AWS App Mesh backends ARN
[#715](https://github.com/fluxcd/flagger/pull/715)
- Add support for Istio VirtualService delegation
[#715](https://github.com/fluxcd/flagger/pull/715)
- Copy labels from canary to primary workloads based on prefix rules
[#709](https://github.com/fluxcd/flagger/pull/709)
#### Improvements
- Add QPS and Burst configs for kubernetes client
[#725](https://github.com/fluxcd/flagger/pull/725)
- Update Istio to v1.8.0
[#733](https://github.com/fluxcd/flagger/pull/733)
## 1.2.0 (2020-09-29)
Add support for New Relic metrics
#### Features
- Add New Relic as a metrics provider
[#691](https://github.com/fluxcd/flagger/pull/691)
#### Improvements
- Derive the label selector value from the target matchLabel
[#685](https://github.com/fluxcd/flagger/pull/685)
- Preserve Skipper predicates
[#681](https://github.com/fluxcd/flagger/pull/681)
#### Fixes
- Do not promote when not ready on skip analysis
[#695](https://github.com/fluxcd/flagger/pull/695)
## 1.1.0 (2020-08-18)
Add support for Skipper ingress controller
#### Features
- Skipper Ingress Controller support
[#670](https://github.com/fluxcd/flagger/pull/670)
- Support per-config configTracker disable via ConfigMap/Secret annotation
[#671](https://github.com/fluxcd/flagger/pull/671)
#### Improvements
- Add priorityClassName and securityContext to Helm charts
[#652](https://github.com/fluxcd/flagger/pull/652)
[#668](https://github.com/fluxcd/flagger/pull/668)
- Update Kubernetes packages to v1.18.8
[#672](https://github.com/fluxcd/flagger/pull/672)
- Update Istio, Linkerd and Contour e2e tests
[#661](https://github.com/fluxcd/flagger/pull/661)
#### Fixes
- Fix O(log n) bug over network in GetTargetConfigs
[#663](https://github.com/fluxcd/flagger/pull/663)
- Fix(grafana): metrics change since Kubernetes 1.16
[#663](https://github.com/fluxcd/flagger/pull/663)
## 1.0.1 (2020-07-18)
Add support for App Mesh Gateway GA
#### Improvements
- Update App Mesh docs to v1beta2 API
[#649](https://github.com/fluxcd/flagger/pull/649)
- Add threadiness to Flagger helm chart
[#643](https://github.com/fluxcd/flagger/pull/643)
- Add Istio virtual service to loadtester helm chart
[#643](https://github.com/fluxcd/flagger/pull/643)
#### Fixes
- Fix multiple paths per rule on canary ingress
[#632](https://github.com/fluxcd/flagger/pull/632)
- Fix installers for kustomize >= 3.6.0
[#646](https://github.com/fluxcd/flagger/pull/646)
## 1.0.0 (2020-06-17)
This is the GA release for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
The analysis can reference [metric templates](https://docs.flagger.app//usage/metrics#custom-metrics)
to query Prometheus, Datadog and AWS CloudWatch.
[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
canary basis for Slack, MS Teams, Discord and Rocket.
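For illustration, a minimal sketch of the two resources (the resource names, the Prometheus address, the query and the Slack hook URL are placeholders, not values from this release):
```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: latency            # placeholder name
  namespace: flagger
spec:
  provider:
    type: prometheus       # datadog and cloudwatch are also supported
    address: http://prometheus:9090   # assumed in-cluster address
  query: |
    histogram_quantile(0.99,
      sum(rate(http_request_duration_seconds_bucket{namespace="test"}[1m])) by (le))
---
apiVersion: flagger.app/v1beta1
kind: AlertProvider
metadata:
  name: dev-slack          # referenced by a canary's alerts[].providerRef
  namespace: flagger
spec:
  type: slack
  channel: general         # placeholder channel
  address: https://hooks.slack.com/services/YOUR/HOOK/URL
```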
#### Features
- Implement progressive promotion
[#593](https://github.com/fluxcd/flagger/pull/593)
#### Improvements
- istio: Add source labels to analysis matching rules
[#594](https://github.com/fluxcd/flagger/pull/594)
- istio: Add allow origins field to CORS spec
[#604](https://github.com/fluxcd/flagger/pull/604)
- istio: Change builtin metrics to work with Istio telemetry v2
[#623](https://github.com/fluxcd/flagger/pull/623)
- appmesh: Implement App Mesh v1beta2 timeout
[#611](https://github.com/fluxcd/flagger/pull/611)
- metrics: Check metrics server availability during canary initialization
[#592](https://github.com/fluxcd/flagger/pull/592)
## 1.0.0-rc.5 (2020-05-14)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
#### Features
- Add support for AWS AppMesh v1beta2 API
[#584](https://github.com/fluxcd/flagger/pull/584)
- Add support for Contour v1.4 ingress class
[#588](https://github.com/fluxcd/flagger/pull/588)
- Add user-specified labels/annotations to the generated Services
[#538](https://github.com/fluxcd/flagger/pull/538)
#### Improvements
- Support compatible Prometheus service
[#557](https://github.com/fluxcd/flagger/pull/557)
- Update e2e tests and packages to Kubernetes v1.18
[#549](https://github.com/fluxcd/flagger/pull/549)
[#576](https://github.com/fluxcd/flagger/pull/576)
#### Fixes
- pkg/controller: retry canary initialization on conflict
[#586](https://github.com/fluxcd/flagger/pull/586)
## 1.0.0-rc.4 (2020-04-03)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
**Breaking change**: the minimum supported version of Kubernetes is v1.14.0.
#### Features
- Implement NGINX Ingress header regex matching
[#546](https://github.com/fluxcd/flagger/pull/546)
#### Improvements
- pkg/router: update ingress API to networking.k8s.io/v1beta1
[#534](https://github.com/fluxcd/flagger/pull/534)
- loadtester: add return cmd output option
[#535](https://github.com/fluxcd/flagger/pull/535)
- refactoring: finalizer error handling and unit testing
[#531](https://github.com/fluxcd/flagger/pull/535)
[#530](https://github.com/fluxcd/flagger/pull/530)
- chart: add finalizers to RBAC rules for OpenShift
[#537](https://github.com/fluxcd/flagger/pull/537)
- chart: allow security context to be disabled on OpenShift
[#543](https://github.com/fluxcd/flagger/pull/543)
- chart: add annotations for service account
[#521](https://github.com/fluxcd/flagger/pull/521)
- docs: Add Prometheus Operator tutorial
[#524](https://github.com/fluxcd/flagger/pull/524)
#### Fixes
- pkg/controller: avoid status conflicts on initialization
[#544](https://github.com/fluxcd/flagger/pull/544)
- pkg/canary: fix status retry
[#541](https://github.com/fluxcd/flagger/pull/541)
- loadtester: fix timeout errors
[#539](https://github.com/fluxcd/flagger/pull/539)
- pkg/canary/daemonset: fix readiness check
[#529](https://github.com/fluxcd/flagger/pull/529)
- logs: reduce log verbosity and fix typos
[#540](https://github.com/fluxcd/flagger/pull/540)
[#526](https://github.com/fluxcd/flagger/pull/526)
## 1.0.0-rc.3 (2020-03-23)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
#### Features
- Add opt-in finalizers to revert Flagger's mutations on deletion of a canary
[#495](https://github.com/fluxcd/flagger/pull/495)
#### Improvements
- e2e: update end-to-end tests to Contour 1.3.0 and Gloo 1.3.14
[#519](https://github.com/fluxcd/flagger/pull/519)
- build: update Kubernetes packages to 1.17.4
[#516](https://github.com/fluxcd/flagger/pull/516)
#### Fixes
- Preserve node ports on service reconciliation
[#514](https://github.com/fluxcd/flagger/pull/514)
## 1.0.0-rc.2 (2020-03-19)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
#### Features
- Make mirror percentage configurable when using Istio traffic shadowing
[#492](https://github.com/fluxcd/flagger/pull/455)
- Add support for running Concord tests with loadtester webhooks
[#507](https://github.com/fluxcd/flagger/pull/507)
#### Improvements
- docs: add Istio telemetry v2 upgrade guide
[#486](https://github.com/fluxcd/flagger/pull/486),
update A/B testing tutorial for Istio 1.5
[#502](https://github.com/fluxcd/flagger/pull/502),
add how to retry a failed release to FAQ
[#494](https://github.com/fluxcd/flagger/pull/494)
- e2e: update end-to-end tests to
Istio 1.5 [#447](https://github.com/fluxcd/flagger/pull/447) and
NGINX Ingress 0.30
[#489](https://github.com/fluxcd/flagger/pull/489)
[#511](https://github.com/fluxcd/flagger/pull/511)
- refactoring:
error handling [#480](https://github.com/fluxcd/flagger/pull/480),
scheduler [#484](https://github.com/fluxcd/flagger/pull/484) and
unit tests [#475](https://github.com/fluxcd/flagger/pull/475)
- chart: add the log level configuration to Flagger helm chart
[#506](https://github.com/fluxcd/flagger/pull/506)
#### Fixes
- Fix nil pointer for the global notifiers [#504](https://github.com/fluxcd/flagger/pull/504)
## 1.0.0-rc.1 (2020-03-03)
This is a release candidate for Flagger v1.0.0.
The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
The analysis can reference [metric templates](https://docs.flagger.app//usage/metrics#custom-metrics)
to query Prometheus, Datadog and AWS CloudWatch.
[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
canary basis for Slack, MS Teams, Discord and Rocket.
#### Features
- Implement metric templates for Prometheus [#419](https://github.com/fluxcd/flagger/pull/419),
Datadog [#460](https://github.com/fluxcd/flagger/pull/460) and
CloudWatch [#464](https://github.com/fluxcd/flagger/pull/464)
- Implement metric range validation [#424](https://github.com/fluxcd/flagger/pull/424)
- Add support for targeting DaemonSets [#455](https://github.com/fluxcd/flagger/pull/455)
- Implement canary alerts and alert providers (Slack, MS Teams, Discord and Rocket)
[#429](https://github.com/fluxcd/flagger/pull/429)
#### Improvements
- Add support for Istio multi-cluster
[#447](https://github.com/fluxcd/flagger/pull/447) [#450](https://github.com/fluxcd/flagger/pull/450)
- Extend Istio traffic policy [#441](https://github.com/fluxcd/flagger/pull/441),
add support for header operations [#442](https://github.com/fluxcd/flagger/pull/442) and
set ingress destination port when multiple ports are discovered [#436](https://github.com/fluxcd/flagger/pull/436)
- Add support for rollback gating [#449](https://github.com/fluxcd/flagger/pull/449)
- Allow disabling ConfigMaps and Secrets tracking [#425](https://github.com/fluxcd/flagger/pull/425)
#### Fixes
- Fix spec changes detection [#446](https://github.com/fluxcd/flagger/pull/446)
- Track projected ConfigMaps and Secrets [#433](https://github.com/fluxcd/flagger/pull/433)
## 0.23.0 (2020-02-06)
Adds support for service name configuration and rollback webhook
#### Features
- Implement service name override [#416](https://github.com/fluxcd/flagger/pull/416)
- Add support for gated rollback [#420](https://github.com/fluxcd/flagger/pull/420)
## 0.22.0 (2020-01-16)
Adds event dispatching through webhooks
#### Features
- Implement event dispatching webhook [#409](https://github.com/fluxcd/flagger/pull/409)
- Add general purpose event webhook [#401](https://github.com/fluxcd/flagger/pull/401)
#### Improvements
- Update Contour to v1.1 and add Linkerd header [#411](https://github.com/fluxcd/flagger/pull/411)
- Update Istio e2e to v1.4.3 [#407](https://github.com/fluxcd/flagger/pull/407)
- Update Kubernetes packages to 1.17 [#406](https://github.com/fluxcd/flagger/pull/406)
## 0.21.0 (2020-01-06)
Adds support for Contour ingress controller
#### Features
- Add support for Contour ingress controller [#397](https://github.com/fluxcd/flagger/pull/397)
- Add support for Envoy managed by Crossover via SMI [#386](https://github.com/fluxcd/flagger/pull/386)
- Extend canary target ref to Kubernetes Service kind [#372](https://github.com/fluxcd/flagger/pull/372)
#### Improvements
- Add Prometheus operator PodMonitor template to Helm chart [#399](https://github.com/fluxcd/flagger/pull/399)
- Update e2e tests to Kubernetes v1.16 [#390](https://github.com/fluxcd/flagger/pull/390)
## 0.20.4 (2019-12-03)
Adds support for taking over a running deployment without disruption
#### Improvements
- Add initialization phase to Kubernetes router [#384](https://github.com/fluxcd/flagger/pull/384)
- Add canary controller interface and Kubernetes deployment kind implementation [#378](https://github.com/fluxcd/flagger/pull/378)
#### Fixes
- Skip primary check on skip analysis [#380](https://github.com/fluxcd/flagger/pull/380)
## 0.20.3 (2019-11-13)
Adds wrk to load tester tools and the App Mesh gateway chart to Flagger Helm repository
#### Improvements
- Add wrk to load tester tools [#368](https://github.com/fluxcd/flagger/pull/368)
- Add App Mesh gateway chart [#365](https://github.com/fluxcd/flagger/pull/365)
## 0.20.2 (2019-11-07)
Adds support for exposing canaries outside the cluster using App Mesh Gateway annotations
#### Improvements
- Expose canaries on public domains with App Mesh Gateway [#358](https://github.com/fluxcd/flagger/pull/358)
#### Fixes
- Use the specified replicas when scaling up the canary [#363](https://github.com/fluxcd/flagger/pull/363)
## 0.20.1 (2019-11-03)
Fixes promql execution and updates the load testing tools
#### Improvements
- Update load tester Helm tools [#8349dd1](https://github.com/fluxcd/flagger/commit/8349dd1cda59a741c7bed9a0f67c0fc0fbff4635)
- e2e testing: update providers [#346](https://github.com/fluxcd/flagger/pull/346)
#### Fixes
- Fix Prometheus query escape [#353](https://github.com/fluxcd/flagger/pull/353)
- Updating hey release link [#350](https://github.com/fluxcd/flagger/pull/350)
## 0.20.0 (2019-10-21)
Adds support for [A/B Testing](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring)
and retry policies when using App Mesh
#### Features
- Implement App Mesh A/B testing based on HTTP headers match conditions [#340](https://github.com/fluxcd/flagger/pull/340)
- Implement App Mesh HTTP retry policy [#338](https://github.com/fluxcd/flagger/pull/338)
- Implement metrics server override [#342](https://github.com/fluxcd/flagger/pull/342)
#### Improvements
- Add the app/name label to services and primary deployment [#333](https://github.com/fluxcd/flagger/pull/333)
- Allow setting Slack and Teams URLs with env vars [#334](https://github.com/fluxcd/flagger/pull/334)
- Refactor Gloo integration [#344](https://github.com/fluxcd/flagger/pull/344)
#### Fixes
- Generate unique names for App Mesh virtual routers and routes [#336](https://github.com/fluxcd/flagger/pull/336)
## 0.19.0 (2019-10-08)
Adds support for canary and blue/green [traffic mirroring](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring)
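A minimal analysis sketch for traffic mirroring, shown with the current v1beta1 field names (values are illustrative):
```yaml
analysis:
  interval: 1m
  threshold: 5
  # run the analysis for a fixed number of iterations
  iterations: 10
  # copy live traffic to the canary instead of shifting weight
  mirror: true
```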
#### Features
- Add traffic mirroring for Istio service mesh [#311](https://github.com/fluxcd/flagger/pull/311)
- Implement canary service target port [#327](https://github.com/fluxcd/flagger/pull/327)
#### Improvements
- Allow gRPC protocol for App Mesh [#325](https://github.com/fluxcd/flagger/pull/325)
- Enforce blue/green when using Kubernetes networking [#326](https://github.com/fluxcd/flagger/pull/326)
#### Fixes
- Fix port discovery diff [#324](https://github.com/fluxcd/flagger/pull/324)
- Helm chart: Enable Prometheus scraping of Flagger metrics
[#2141d88](https://github.com/fluxcd/flagger/commit/2141d88ce1cc6be220dab34171c215a334ecde24)
## 0.18.6 (2019-10-03)
Adds support for App Mesh conformance tests and latency metric checks
#### Improvements
- Add support for acceptance testing when using App Mesh [#322](https://github.com/fluxcd/flagger/pull/322)
- Add Kustomize installer for App Mesh [#310](https://github.com/fluxcd/flagger/pull/310)
- Update Linkerd to v2.5.0 and Prometheus to v2.12.0 [#323](https://github.com/fluxcd/flagger/pull/323)
#### Fixes
- Fix slack/teams notification fields mapping [#318](https://github.com/fluxcd/flagger/pull/318)
## 0.18.5 (2019-10-02)
Adds support for [confirm-promotion](https://docs.flagger.app/how-it-works#webhooks)
webhooks and blue/green deployments when using a service mesh
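A sketch of a confirm-promotion gate, shown with the current field names (the webhook name and gate URL are illustrative):
```yaml
analysis:
  webhooks:
    - name: "promotion gate"
      type: confirm-promotion
      # Flagger holds the promotion until this endpoint returns HTTP 200
      url: http://flagger-loadtester.test/gate/check
```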
#### Features
- Implement confirm-promotion hook [#307](https://github.com/fluxcd/flagger/pull/307)
- Implement B/G for service mesh providers [#305](https://github.com/fluxcd/flagger/pull/305)
#### Improvements
- Canary promotion improvements to avoid dropping in-flight requests [#310](https://github.com/fluxcd/flagger/pull/310)
- Update end-to-end tests to Kubernetes v1.15.3 and Istio 1.3.0 [#306](https://github.com/fluxcd/flagger/pull/306)
#### Fixes
- Skip primary check for App Mesh [#315](https://github.com/fluxcd/flagger/pull/315)
## 0.18.4 (2019-09-08)
Adds support for NGINX custom annotations and Helm v3 acceptance testing
#### Features
- Add annotations prefix for NGINX ingresses [#293](https://github.com/fluxcd/flagger/pull/293)
- Add wide columns in CRD [#289](https://github.com/fluxcd/flagger/pull/289)
- loadtester: implement Helm v3 test command [#296](https://github.com/fluxcd/flagger/pull/296)
- loadtester: add gRPC health check to load tester image [#295](https://github.com/fluxcd/flagger/pull/295)
#### Fixes
- loadtester: fix tests error logging [#286](https://github.com/fluxcd/flagger/pull/286)
## 0.18.3 (2019-08-22)
Adds support for tillerless helm tests and protobuf health checking
#### Features
- loadtester: add support for tillerless helm [#280](https://github.com/fluxcd/flagger/pull/280)
- loadtester: add support for protobuf health checking [#280](https://github.com/fluxcd/flagger/pull/280)
#### Improvements
- Set HTTP listeners for AppMesh virtual routers [#272](https://github.com/fluxcd/flagger/pull/272)
#### Fixes
- Add missing fields to CRD validation spec [#271](https://github.com/fluxcd/flagger/pull/271)
- Fix App Mesh backends validation in CRD [#281](https://github.com/fluxcd/flagger/pull/281)
## 0.18.2 (2019-08-05)
Fixes multi-port support for Istio
#### Fixes
- Fix port discovery for multiple port services [#267](https://github.com/fluxcd/flagger/pull/267)
#### Improvements
- Update e2e testing to Istio v1.2.3, Gloo v0.18.8 and NGINX ingress chart v1.12.1 [#268](https://github.com/fluxcd/flagger/pull/268)
## 0.18.1 (2019-07-30)
Fixes Blue/Green style deployments for Kubernetes and Linkerd providers
#### Fixes
- Fix Blue/Green metrics provider and add e2e tests [#261](https://github.com/fluxcd/flagger/pull/261)
## 0.18.0 (2019-07-29)
Adds support for [manual gating](https://docs.flagger.app/how-it-works#manual-gating) and pausing/resuming an ongoing analysis
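A sketch of the confirm-rollout gate used for manual gating, shown with the current field names (the webhook name and gate URL are illustrative):
```yaml
analysis:
  webhooks:
    - name: "rollout gate"
      type: confirm-rollout
      # the analysis is paused until this endpoint returns HTTP 200
      url: http://flagger-loadtester.test/gate/check
```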
#### Features
- Implement confirm rollout gate, hook and API [#251](https://github.com/fluxcd/flagger/pull/251)
#### Improvements
- Refactor canary change detection and status [#240](https://github.com/fluxcd/flagger/pull/240)
- Implement finalising state [#257](https://github.com/fluxcd/flagger/pull/257)
- Add gRPC load testing tool [#248](https://github.com/fluxcd/flagger/pull/248)
#### Breaking changes
- Due to the status sub-resource changes in [#240](https://github.com/fluxcd/flagger/pull/240),
when upgrading Flagger the canaries' status phase will be reset to `Initialized`
- Upgrading Flagger with Helm will fail due to Helm's poor support of CRDs,
see [workaround](https://github.com/fluxcd/flagger/issues/223)
## 0.17.0 (2019-07-08)
Adds support for Linkerd (SMI Traffic Split API), MS Teams notifications and HA mode with leader election
#### Features
- Add Linkerd support [#230](https://github.com/fluxcd/flagger/pull/230)
- Implement MS Teams notifications [#235](https://github.com/fluxcd/flagger/pull/235)
- Implement leader election [#236](https://github.com/fluxcd/flagger/pull/236)
#### Improvements
- Add [Kustomize](https://docs.flagger.app/install/flagger-install-on-kubernetes#install-flagger-with-kustomize)
installer [#232](https://github.com/fluxcd/flagger/pull/232)
- Add Pod Security Policy to Helm chart [#234](https://github.com/fluxcd/flagger/pull/234)
## 0.16.0 (2019-06-23)
Adds support for running [Blue/Green deployments](https://docs.flagger.app/usage/blue-green)
without a service mesh or ingress controller
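A minimal sketch of a Blue/Green canary using plain Kubernetes networking, shown with the current field names (values are illustrative):
```yaml
spec:
  # no service mesh or ingress controller required
  provider: kubernetes
  analysis:
    interval: 1m
    # number of checks to run before traffic is switched to the new version
    iterations: 10
    threshold: 2
```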
#### Features
- Allow blue/green deployments without a service mesh provider [#211](https://github.com/fluxcd/flagger/pull/211)
- Add the service mesh provider to the canary spec [#217](https://github.com/fluxcd/flagger/pull/217)
- Allow multi-port services and implement port discovery [#207](https://github.com/fluxcd/flagger/pull/207)
#### Improvements
- Add [FAQ page](https://docs.flagger.app/faq) to docs website
- Switch to go modules in CI [#218](https://github.com/fluxcd/flagger/pull/218)
- Update e2e testing to Kubernetes Kind 0.3.0 and Istio 1.2.0
#### Fixes
- Update the primary HPA on canary promotion [#216](https://github.com/fluxcd/flagger/pull/216)
## 0.15.0 (2019-06-12)
Adds support for customising the Istio [traffic policy](https://docs.flagger.app/how-it-works#istio-routing) in the canary service spec
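A sketch of a customised traffic policy in the canary service spec, shown with the current field names and assuming Istio mTLS is enabled (port value is illustrative):
```yaml
service:
  port: 9898
  trafficPolicy:
    tls:
      # use ISTIO_MUTUAL when the mesh enforces mutual TLS
      mode: ISTIO_MUTUAL
```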
#### Features
- Generate Istio destination rules and allow traffic policy customisation [#200](https://github.com/fluxcd/flagger/pull/200)
#### Improvements
- Update Kubernetes packages to 1.14 and use go modules instead of dep [#202](https://github.com/fluxcd/flagger/pull/202)
## 0.14.1 (2019-06-05)
Adds support for running [acceptance/integration tests](https://docs.flagger.app/how-it-works#integration-testing)
with Helm test or Bash Bats using pre-rollout hooks
#### Features
- Implement Helm and Bash pre-rollout hooks [#196](https://github.com/fluxcd/flagger/pull/196)
#### Fixes
- Fix promoting canary when max weight is not a multiple of step [#190](https://github.com/fluxcd/flagger/pull/190)
- Add ability to set Prometheus url with custom path without trailing '/' [#197](https://github.com/fluxcd/flagger/pull/197)
## 0.14.0 (2019-05-21)
Adds support for Service Mesh Interface and [Gloo](https://docs.flagger.app/usage/gloo-progressive-delivery) ingress controller
#### Features
- Add support for SMI (Istio weighted traffic) [#180](https://github.com/fluxcd/flagger/pull/180)
- Add support for Gloo ingress controller (weighted traffic) [#179](https://github.com/fluxcd/flagger/pull/179)
## 0.13.2 (2019-04-11)
Fixes for Jenkins X deployments (prevent the jx GC from removing the primary instance)
#### Fixes
- Do not copy labels from canary to primary deployment [#178](https://github.com/fluxcd/flagger/pull/178)
#### Improvements
- Add NGINX ingress controller e2e and unit tests [#176](https://github.com/fluxcd/flagger/pull/176)
## 0.13.1 (2019-04-09)
Fixes for custom metrics checks and NGINX Prometheus queries
#### Fixes
- Fix promql queries for custom checks and NGINX [#174](https://github.com/fluxcd/flagger/pull/174)
## 0.13.0 (2019-04-08)
Adds support for [NGINX](https://docs.flagger.app/usage/nginx-progressive-delivery) ingress controller
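A sketch of the NGINX integration: the canary references the ingress that Flagger clones for traffic shifting (the ingress name is illustrative, and the apiVersion shown is the current one; clusters of that era used extensions/v1beta1):
```yaml
spec:
  provider: nginx
  # ingress reference required by the NGINX integration
  ingressRef:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    name: podinfo
```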
#### Features
- Add support for nginx ingress controller (weighted traffic and A/B testing) [#170](https://github.com/fluxcd/flagger/pull/170)
- Add Prometheus add-on to Flagger Helm chart for App Mesh and
NGINX [79b3370](https://github.com/fluxcd/flagger/pull/170/commits/79b337089294a92961bc8446fd185b38c50a32df)
#### Fixes
- Fix duplicate hosts Istio error when using wildcards [#162](https://github.com/fluxcd/flagger/pull/162)
## 0.12.0 (2019-04-29)
Adds support for [SuperGloo](https://docs.flagger.app/install/flagger-install-with-supergloo)
#### Features
- Supergloo support for canary deployment (weighted traffic) [#151](https://github.com/fluxcd/flagger/pull/151)
## 0.11.1 (2019-04-18)
Move Flagger and the load tester container images to Docker Hub
#### Features
- Add Bash Automated Testing System support to Flagger tester for running acceptance tests as pre-rollout hooks
## 0.11.0 (2019-04-17)
Adds pre/post rollout [webhooks](https://docs.flagger.app/how-it-works#webhooks)
#### Features
- Add `pre-rollout` and `post-rollout` webhook types [#147](https://github.com/fluxcd/flagger/pull/147)
#### Improvements
- Unify App Mesh and Istio builtin metric checks [#146](https://github.com/fluxcd/flagger/pull/146)
- Make the pod selector label configurable [#148](https://github.com/fluxcd/flagger/pull/148)
#### Breaking changes
- Set default `mesh` Istio gateway only if no gateway is specified [#141](https://github.com/fluxcd/flagger/pull/141)
## 0.10.0 (2019-03-27)
Adds support for App Mesh
#### Features
- AWS App Mesh integration
[#107](https://github.com/fluxcd/flagger/pull/107)
[#123](https://github.com/fluxcd/flagger/pull/123)
#### Improvements
- Reconcile Kubernetes ClusterIP services [#122](https://github.com/fluxcd/flagger/pull/122)
#### Fixes
- Preserve pod labels on canary promotion [#105](https://github.com/fluxcd/flagger/pull/105)
- Fix canary status Prometheus metric [#121](https://github.com/fluxcd/flagger/pull/121)
## 0.9.0 (2019-03-11)
Allows A/B testing scenarios where instead of weighted routing, the traffic is split between the
primary and canary based on HTTP headers or cookies.
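An analysis sketch for header/cookie based routing, shown with the current v1beta1 field names (the header names and values are illustrative):
```yaml
analysis:
  # A/B testing: fixed number of iterations instead of weighted steps
  iterations: 10
  match:
    - headers:
        x-canary:
          exact: "insider"
    - headers:
        cookie:
          regex: "^(.*?;)?(canary=always)(;.*)?$"
```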
#### Features
- A/B testing - canary with session affinity [#88](https://github.com/fluxcd/flagger/pull/88)
#### Fixes
- Update the analysis interval when the custom resource changes [#91](https://github.com/fluxcd/flagger/pull/91)
## 0.8.0 (2019-03-06)
Adds support for CORS policy and HTTP request headers manipulation
#### Features
- CORS policy support [#83](https://github.com/fluxcd/flagger/pull/83)
- Allow headers to be appended to HTTP requests [#82](https://github.com/fluxcd/flagger/pull/82)
#### Improvements
- Refactor the routing management
[#72](https://github.com/fluxcd/flagger/pull/72)
[#80](https://github.com/fluxcd/flagger/pull/80)
- Fine-grained RBAC [#73](https://github.com/fluxcd/flagger/pull/73)
- Add option to limit Flagger to a single namespace [#78](https://github.com/fluxcd/flagger/pull/78)
## 0.7.0 (2019-02-28)
Adds support for custom metric checks, HTTP timeouts and HTTP retries
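A service spec sketch showing the timeout and retry fields, with the current field names (values are illustrative):
```yaml
service:
  port: 9898
  # applied to the generated routes
  timeout: 5s
  retries:
    attempts: 3
    perTryTimeout: 1s
```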
#### Features
- Allow custom promql queries in the canary analysis spec [#60](https://github.com/fluxcd/flagger/pull/60)
- Add HTTP timeout and retries to canary service spec [#62](https://github.com/fluxcd/flagger/pull/62)
## 0.6.0 (2019-02-25)
Allows for [HTTPMatchRequests](https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPMatchRequest)
and [HTTPRewrite](https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPRewrite)
to be customized in the service spec of the canary custom resource.
#### Features
- Add HTTP match conditions and URI rewrite to the canary service spec [#55](https://github.com/fluxcd/flagger/pull/55)
- Update virtual service when the canary service spec changes
[#54](https://github.com/fluxcd/flagger/pull/54)
[#51](https://github.com/fluxcd/flagger/pull/51)
#### Improvements
- Run e2e testing on [Kubernetes Kind](https://github.com/kubernetes-sigs/kind) for canary promotion
[#53](https://github.com/fluxcd/flagger/pull/53)
## 0.5.1 (2019-02-14)
Allows skipping the analysis phase to ship changes directly to production
#### Features
- Add option to skip the canary analysis [#46](https://github.com/fluxcd/flagger/pull/46)
#### Fixes
- Reject deployment if the pod label selector doesn't match `app: <DEPLOYMENT_NAME>` [#43](https://github.com/fluxcd/flagger/pull/43)
## 0.5.0 (2019-01-30)
Track changes in ConfigMaps and Secrets [#37](https://github.com/fluxcd/flagger/pull/37)
#### Features
- Promote configmaps and secrets changes from canary to primary
- Detect changes in configmaps and/or secrets and (re)start canary analysis
- Add configs checksum to Canary CRD status
- Create primary configmaps and secrets at bootstrap
- Scan canary volumes and containers for configmaps and secrets
#### Fixes
- Copy deployment labels from canary to primary at bootstrap and promotion
## 0.4.1 (2019-01-24)
Load testing webhook [#35](https://github.com/fluxcd/flagger/pull/35)
#### Features
- Add the load tester chart to Flagger Helm repository
- Implement a load test runner based on [rakyll/hey](https://github.com/rakyll/hey)
- Log warning when no values are found for Istio metric due to lack of traffic
#### Fixes
- Run webhooks before the metrics checks to avoid failures when using a load tester
## 0.4.0 (2019-01-18)
Restart canary analysis if revision changes [#31](https://github.com/fluxcd/flagger/pull/31)
#### Breaking changes
- Drop support for Kubernetes 1.10
#### Features
- Detect changes during canary analysis and reset advancement
- Add status and additional printer columns to CRD
- Add canary name and namespace to controller structured logs
#### Fixes
- Allow canary name to be different to the target name
- Check if multiple canaries have the same target and log error
- Use deep copy when updating Kubernetes objects
- Skip readiness checks if canary analysis has finished
## 0.3.0 (2019-01-11)
Configurable canary analysis duration [#20](https://github.com/fluxcd/flagger/pull/20)
#### Breaking changes
- Helm chart: flag `controlLoopInterval` has been removed
#### Features
- CRD: canaries.flagger.app v1alpha3
- Schedule canary analysis independently based on `canaryAnalysis.interval`
- Add analysis interval to Canary CRD (defaults to one minute)
- Make autoscaler (HPA) reference optional
## 0.2.0 (2019-01-04)
Webhooks [#18](https://github.com/fluxcd/flagger/pull/18)
#### Features
- CRD: canaries.flagger.app v1alpha2
- Implement canary external checks based on webhooks HTTP POST calls
- Add webhooks to Canary CRD
- Move docs to gitbook [docs.flagger.app](https://docs.flagger.app)
## 0.1.2 (2018-12-06)
Improve Slack notifications [#14](https://github.com/fluxcd/flagger/pull/14)
#### Features
- Add canary analysis metadata to init and start Slack messages
- Add rollback reason to failed canary Slack messages
## 0.1.1 (2018-11-28)
Canary progress deadline [#10](https://github.com/fluxcd/flagger/pull/10)
#### Features
- Rollback canary based on the deployment progress deadline check
- Add progress deadline to Canary CRD (defaults to 10 minutes)
## 0.1.0 (2018-11-25)
First stable release
#### Features
- CRD: canaries.flagger.app v1alpha1
- Notifications: post canary events to Slack
- Instrumentation: expose Prometheus metrics for canary status and traffic weight percentage
- Autoscaling: add HPA reference to CRD and create primary HPA at bootstrap
- Bootstrap: create primary deployment, ClusterIP services and Istio virtual service based on CRD spec
## 0.0.1 (2018-10-07)
Initial semver release
#### Features
- Implement canary rollback based on failed checks threshold
- Scale up the deployment when canary revision changes
- Add OpenAPI v3 schema validation to Canary CRD
- Use CRD status for canary state persistence
- Add Helm charts for Flagger and Grafana
- Add canary analysis Grafana dashboard


@@ -1,3 +0,0 @@
## Code of Conduct
Flagger follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).


@@ -1,89 +0,0 @@
# How to Contribute
Flagger is [Apache 2.0 licensed](LICENSE) and accepts contributions via GitHub
pull requests. This document outlines some of the conventions on development
workflow, commit message formatting, contact points and other resources to make
it easier to get your contribution accepted.
We gratefully welcome improvements to documentation as well as to code.
## Certificate of Origin
By contributing to this project you agree to the Developer Certificate of
Origin (DCO). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution.
We require all commits to be signed. By signing off with your signature, you
certify that you wrote the patch or otherwise have the right to contribute the
material by the rules of the [DCO](DCO):
`Signed-off-by: Jane Doe <jane.doe@example.com>`
The signature must contain your real name
(sorry, no pseudonyms or anonymous contributions).
If your `user.name` and `user.email` are configured in your Git config,
you can sign your commit automatically with `git commit -s`.
## Communications
The project uses Slack: To join the conversation, simply join the
[CNCF](https://slack.cncf.io/) Slack workspace and use the
[#flux](https://cloud-native.slack.com/messages/flux/) channel.
The developers use a mailing list to discuss development as well.
Simply subscribe to [flux-dev on cncf.io](https://lists.cncf.io/g/cncf-flux-dev)
to join the conversation (this will also add an invitation to your
Google calendar for our [Flux
meeting](https://docs.google.com/document/d/1l_M0om0qUEN_NNiGgpqJ2tvsF2iioHkaARDeh6b70B0/edit#)).
## Getting Started
- Fork the repository on GitHub
- If you want to contribute as a developer, read [Flagger Development Guide](https://docs.flagger.app/dev/dev-guide)
- If you have questions or concerns, get stuck, or need a hand, let us know
on the Slack channel. We are happy to help and look forward to having you
as part of the team, in whatever capacity.
- Play with the project, submit bugs, submit pull requests!
## Contribution workflow
This is a rough outline of how to prepare a contribution:
- Create a topic branch from where you want to base your work (usually branched from master).
- Make commits of logical units.
- Make sure your commit messages are in the proper format (see below).
- Push your changes to a topic branch in your fork of the repository.
- If you changed code:
- add automated tests to cover your changes
- Submit a pull request to the original repository.
## Acceptance policy
These things will make a PR more likely to be accepted:
- a well-described requirement
- new code and tests follow the conventions in old code and tests
- a good commit message (see below)
- All code must abide by the [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
- Names should abide by [What's in a name](https://talks.golang.org/2014/names.slide#1)
- Code must build on both Linux and Darwin, via plain `go build`
- Code should have appropriate test coverage and tests should be written
to work with `go test`
In general, we will merge a PR once one maintainer has endorsed it.
For substantial changes, more people may become involved, and you might
get asked to resubmit the PR or divide the changes into more than one PR.
### Format of the Commit Message
For Flagger we prefer the following rules for good commit messages:
- Limit the subject to 50 characters and write as the continuation
of the sentence "If applied, this commit will ..."
- Explain what and why in the body, if more than a trivial change;
wrap it at 72 characters.
The [following article](https://chris.beams.io/posts/git-commit/#seven-rules)
has some more helpful advice on documenting your work.

DCO

@@ -1,36 +0,0 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.


@@ -1,32 +0,0 @@
FROM golang:1.15-alpine as builder
ARG TARGETPLATFORM
ARG REVISON
WORKDIR /workspace
# copy modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
# cache modules
RUN go mod download
# copy source code
COPY cmd/ cmd/
COPY pkg/ pkg/
# build
RUN CGO_ENABLED=0 go build \
-ldflags "-s -w -X github.com/fluxcd/flagger/pkg/version.REVISION=${REVISON}" \
-a -o flagger ./cmd/flagger
FROM alpine:3.12
RUN apk --no-cache add ca-certificates
USER nobody
COPY --from=builder --chown=nobody:nobody /workspace/flagger .
ENTRYPOINT ["./flagger"]


@@ -1,69 +0,0 @@
FROM alpine:3.11 as build
RUN apk --no-cache add alpine-sdk perl curl
RUN curl -sSLo hey "https://storage.googleapis.com/hey-release/hey_linux_amd64" && \
chmod +x hey && mv hey /usr/local/bin/hey
RUN HELM2_VERSION=2.16.8 && \
curl -sSL "https://get.helm.sh/helm-v${HELM2_VERSION}-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helm && \
chmod +x linux-amd64/tiller && mv linux-amd64/tiller /usr/local/bin/tiller
RUN HELM3_VERSION=3.2.3 && \
curl -sSL "https://get.helm.sh/helm-v${HELM3_VERSION}-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helmv3
RUN GRPC_HEALTH_PROBE_VERSION=v0.3.1 && \
wget -qO /usr/local/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && \
chmod +x /usr/local/bin/grpc_health_probe
RUN GHZ_VERSION=0.39.0 && \
curl -sSL "https://github.com/bojand/ghz/releases/download/v${GHZ_VERSION}/ghz_${GHZ_VERSION}_Linux_x86_64.tar.gz" | tar xz -C /tmp && \
mv /tmp/ghz /usr/local/bin && chmod +x /usr/local/bin/ghz
RUN HELM_TILLER_VERSION=0.9.3 && \
curl -sSL "https://github.com/rimusz/helm-tiller/archive/v${HELM_TILLER_VERSION}.tar.gz" | tar xz -C /tmp && \
mv /tmp/helm-tiller-${HELM_TILLER_VERSION} /tmp/helm-tiller
RUN WRK_VERSION=4.0.2 && \
cd /tmp && git clone -b ${WRK_VERSION} https://github.com/wg/wrk
RUN cd /tmp/wrk && make
FROM bash:5.0
RUN addgroup -S app && \
adduser -S -g app app && \
apk --no-cache add ca-certificates curl jq libgcc
WORKDIR /home/app
COPY --from=bats/bats:v1.1.0 /opt/bats/ /opt/bats/
RUN ln -s /opt/bats/bin/bats /usr/local/bin/
COPY --from=build /usr/local/bin/hey /usr/local/bin/
COPY --from=build /tmp/wrk/wrk /usr/local/bin/
COPY --from=build /usr/local/bin/helm /usr/local/bin/
COPY --from=build /usr/local/bin/tiller /usr/local/bin/
COPY --from=build /usr/local/bin/ghz /usr/local/bin/
COPY --from=build /usr/local/bin/helmv3 /usr/local/bin/
COPY --from=build /usr/local/bin/grpc_health_probe /usr/local/bin/
COPY --from=build /tmp/helm-tiller /tmp/helm-tiller
ADD https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/health/v1/health.proto /tmp/ghz/health.proto
COPY ./bin/loadtester .
RUN chown -R app:app ./
RUN chown -R app:app /tmp/ghz
USER app
# test load generator tools
RUN hey -n 1 -c 1 https://flagger.app > /dev/null && echo $? | grep 0
RUN wrk -d 1s -c 1 -t 1 https://flagger.app > /dev/null && echo $? | grep 0
# install Helm v2 plugins
RUN helm init --client-only && helm plugin install /tmp/helm-tiller
ENTRYPOINT ["./loadtester"]


@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Copyright 2019 Weaveworks
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -1,6 +0,0 @@
The maintainers are generally available in Slack at
https://weave-community.slack.com/messages/flagger/ (obtain an invitation
at https://slack.weave.works/).
Stefan Prodan, Weaveworks <stefan@weave.works> (Slack: @stefan Twitter: @stefanprodan)
Takeshi Yoneda, DMM.com <cz.rk.t0415y.g@gmail.com> (Slack: @mathetake Twitter: @mathetake)


@@ -1,49 +0,0 @@
TAG?=latest
VERSION?=$(shell grep 'VERSION' pkg/version/version.go | awk '{ print $$4 }' | tr -d '"')
LT_VERSION?=$(shell grep 'VERSION' cmd/loadtester/main.go | awk '{ print $$4 }' | tr -d '"' | head -n1)

build:
	CGO_ENABLED=0 go build -a -o ./bin/flagger ./cmd/flagger

fmt:
	gofmt -l -s -w ./
	goimports -l -w ./

test-fmt:
	gofmt -l -s ./ | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
	goimports -l ./ | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi

codegen:
	./hack/update-codegen.sh

test-codegen:
	./hack/verify-codegen.sh

test: test-fmt test-codegen
	go test ./...

crd:
	cat artifacts/flagger/crd.yaml > charts/flagger/crds/crd.yaml
	cat artifacts/flagger/crd.yaml > kustomize/base/flagger/crd.yaml

version-set:
	@next="$(TAG)" && \
	current="$(VERSION)" && \
	sed -i '' "s/$$current/$$next/g" pkg/version/version.go && \
	sed -i '' "s/flagger:$$current/flagger:$$next/g" artifacts/flagger/deployment.yaml && \
	sed -i '' "s/tag: $$current/tag: $$next/g" charts/flagger/values.yaml && \
	sed -i '' "s/appVersion: $$current/appVersion: $$next/g" charts/flagger/Chart.yaml && \
	sed -i '' "s/version: $$current/version: $$next/g" charts/flagger/Chart.yaml && \
	sed -i '' "s/newTag: $$current/newTag: $$next/g" kustomize/base/flagger/kustomization.yaml && \
	echo "Version $$next set in code, deployment, chart and kustomize"

release:
	git tag "v$(VERSION)"
	git push origin "v$(VERSION)"

loadtester-build:
	CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o ./bin/loadtester ./cmd/loadtester/*
	docker build -t ghcr.io/fluxcd/flagger-loadtester:$(LT_VERSION) . -f Dockerfile.loadtester

loadtester-push:
	docker push ghcr.io/fluxcd/flagger-loadtester:$(LT_VERSION)

README.md

@@ -1,252 +1,4 @@
# flagger
# flagger website
[![build](https://github.com/fluxcd/flagger/workflows/build/badge.svg)](https://github.com/fluxcd/flagger/actions)
[![report](https://goreportcard.com/badge/github.com/fluxcd/flagger)](https://goreportcard.com/report/github.com/fluxcd/flagger)
[![license](https://img.shields.io/github/license/fluxcd/flagger.svg)](https://github.com/fluxcd/flagger/blob/main/LICENSE)
[![release](https://img.shields.io/github/release/fluxcd/flagger/all.svg)](https://github.com/fluxcd/flagger/releases)
[flagger.app](https://flagger.app)
Flagger is a progressive delivery tool that automates the release process for applications running on Kubernetes.
It reduces the risk of introducing a new software version in production
by gradually shifting traffic to the new version while measuring metrics and running conformance tests.
![flagger-overview](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-canary-overview.png)
Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring)
using a service mesh (App Mesh, Istio, Linkerd) or an ingress controller (Contour, Gloo, NGINX, Skipper, Traefik) for traffic routing.
For release analysis, Flagger can query Prometheus, Datadog or CloudWatch
and for alerting it uses Slack, MS Teams, Discord and Rocket.
### Documentation
Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app).
* Install
* [Flagger install on Kubernetes](https://docs.flagger.app/install/flagger-install-on-kubernetes)
* Usage
* [How it works](https://docs.flagger.app/usage/how-it-works)
* [Deployment strategies](https://docs.flagger.app/usage/deployment-strategies)
* [Metrics analysis](https://docs.flagger.app/usage/metrics)
* [Webhooks](https://docs.flagger.app/usage/webhooks)
* [Alerting](https://docs.flagger.app/usage/alerting)
* [Monitoring](https://docs.flagger.app/usage/monitoring)
* Tutorials
* [App Mesh](https://docs.flagger.app/tutorials/appmesh-progressive-delivery)
* [Istio](https://docs.flagger.app/tutorials/istio-progressive-delivery)
* [Linkerd](https://docs.flagger.app/tutorials/linkerd-progressive-delivery)
* [Contour](https://docs.flagger.app/tutorials/contour-progressive-delivery)
* [Gloo](https://docs.flagger.app/tutorials/gloo-progressive-delivery)
* [NGINX Ingress](https://docs.flagger.app/tutorials/nginx-progressive-delivery)
* [Skipper](https://docs.flagger.app/tutorials/skipper-progressive-delivery)
* [Traefik](https://docs.flagger.app/tutorials/traefik-progressive-delivery)
* [Kubernetes Blue/Green](https://docs.flagger.app/tutorials/kubernetes-blue-green)
### Who is using Flagger
List of organizations using Flagger:
* [Chick-fil-A](https://www.chick-fil-a.com)
* [Capra Consulting](https://www.capraconsulting.no)
* [DMM.com](https://dmm-corp.com)
* [MediaMarktSaturn](https://www.mediamarktsaturn.com)
* [Weaveworks](https://weave.works)
* [Jumia Group](https://group.jumia.com)
* [eLife](https://elifesciences.org/)
If you are using Flagger, please submit a PR to add your organization to the list!
### Canary CRD
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services, service mesh or ingress routes).
These objects expose the application on the mesh and drive the canary analysis and promotion.
Flagger keeps track of ConfigMaps and Secrets referenced by a Kubernetes Deployment and triggers a canary analysis if any of those objects change.
When promoting a workload in production, both code (container images) and configuration (config maps and secrets) are being synchronised.
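As an illustration, a minimal _podinfo_ Deployment that references configuration objects might look like the sketch below (the ConfigMap and Secret names are hypothetical); changing either object triggers a new canary analysis:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          # example image reference
          image: ghcr.io/stefanprodan/podinfo:6.0.0
          envFrom:
            # hypothetical ConfigMap tracked by Flagger
            - configMapRef:
                name: podinfo-config
            # hypothetical Secret tracked by Flagger
            - secretRef:
                name: podinfo-secrets
```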
For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# service mesh provider (optional)
# can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, contour, gloo, supergloo, traefik
provider: istio
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
# service name (defaults to targetRef.name)
name: podinfo
# ClusterIP port number
port: 9898
# container port name or number (optional)
targetPort: 9898
# port name can be http or grpc (default http)
portName: http
# add all the other container ports
# to the ClusterIP services (default false)
portDiscovery: true
# HTTP match conditions (optional)
match:
- uri:
prefix: /
# HTTP rewrite (optional)
rewrite:
uri: /
# request timeout (optional)
timeout: 5s
# promote the canary without analysing it (default false)
skipAnalysis: false
# define the canary analysis timing and KPIs
analysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
# validation (optional)
metrics:
- name: request-success-rate
# builtin Prometheus check
# minimum req success rate (non 5xx responses)
# percentage (0-100)
thresholdRange:
min: 99
interval: 1m
- name: request-duration
# builtin Prometheus check
# maximum req duration P99
# milliseconds
thresholdRange:
max: 500
interval: 30s
- name: "database connections"
# custom metric check
templateRef:
name: db-connections
thresholdRange:
min: 2
max: 100
interval: 1m
# testing (optional)
webhooks:
- name: "conformance test"
type: pre-rollout
url: http://flagger-helmtester.test/
timeout: 5m
metadata:
type: "helmv3"
cmd: "test run podinfo -n test"
- name: "load test"
type: rollout
url: http://flagger-loadtester.test/
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
# alerting (optional)
alerts:
- name: "dev team Slack"
severity: error
providerRef:
name: dev-slack
namespace: flagger
- name: "qa team Discord"
severity: warn
providerRef:
name: qa-discord
- name: "on-call MS Teams"
severity: info
providerRef:
name: on-call-msteams
```
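The `db-connections` metric referenced through `templateRef` and the `dev-slack` alert provider referenced through `providerRef` are defined as separate custom resources. A hedged sketch of both is shown below; the Prometheus address, the query and the Slack hook URL are placeholders, not values taken from this repository:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: db-connections
  namespace: test
spec:
  provider:
    type: prometheus
    # placeholder Prometheus address
    address: http://prometheus.monitoring:9090
  # placeholder query returning the number of open database connections
  query: sum(mysql_global_status_threads_connected)
---
apiVersion: flagger.app/v1beta1
kind: AlertProvider
metadata:
  name: dev-slack
  namespace: flagger
spec:
  type: slack
  # placeholder Slack incoming webhook address
  address: https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
```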
For more details on how the canary analysis and promotion works please [read the docs](https://docs.flagger.app/usage/how-it-works).
### Features
**Service Mesh**
| Feature | App Mesh | Istio | Linkerd | Kubernetes CNI |
| ------------------------------------------ | ------------------ | ------------------ | ------------------ | ----------------- |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Blue/Green deployments (traffic mirroring) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
**Ingress**
| Feature | Contour | Gloo | NGINX | Skipper | Traefik |
| ------------------------------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
### Roadmap
#### [GitOps Toolkit](https://github.com/fluxcd/flux2) compatibility
* Migrate Flagger to Kubernetes controller-runtime and [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
* Make the Canary status compatible with [kstatus](https://github.com/kubernetes-sigs/cli-utils)
* Make Flagger emit Kubernetes events compatible with Flux v2 notification API
* Integrate Flagger into Flux v2 as the progressive delivery component
#### Integrations
* Add support for Kubernetes [Ingress v2](https://github.com/kubernetes-sigs/service-apis)
* Add support for SMI compatible service mesh solutions like Open Service Mesh and Consul Connect
* Add support for ingress controllers like HAProxy and ALB
* Add support for metrics providers like InfluxDB, Stackdriver, SignalFX
### Contributing
Flagger is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
To start contributing please read the [development guide](https://docs.flagger.app/dev/dev-guide).
When submitting bug reports, please include as many details as possible:
* which Flagger version
* which Flagger CRD version
* which Kubernetes version
* what configuration (canary, ingress and workloads definitions)
* what happened (Flagger and Proxy logs)
### Getting Help
If you have any questions about Flagger and progressive delivery:
* Read the Flagger [docs](https://docs.flagger.app).
* Invite yourself to the [Weave community slack](https://slack.weave.works/)
and join the [#flagger](https://weave-community.slack.com/messages/flagger/) channel.
* Join the [Weave User Group](https://www.meetup.com/pro/Weave/) and get invited to online talks,
hands-on training and meetups in your area.
* File an [issue](https://github.com/fluxcd/flagger/issues/new).
Your feedback is always welcome!


@@ -1,62 +0,0 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: appmesh
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
port: 80
targetPort: 9898
meshName: global
retries:
attempts: 3
perTryTimeout: 5s
retryOn: "gateway-error,client-error,stream-error"
timeout: 35s
match:
- uri:
prefix: /
rewrite:
uri: /
analysis:
interval: 15s
threshold: 10
iterations: 10
match:
- headers:
x-canary:
exact: "insider"
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 -H 'X-Canary: insider' http://podinfo-canary.test/"


@@ -1,59 +0,0 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: appmesh
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
port: 80
targetPort: http
meshName: global
retries:
attempts: 3
perTryTimeout: 5s
retryOn: "gateway-error,client-error,stream-error"
timeout: 35s
match:
- uri:
prefix: /
rewrite:
uri: /
analysis:
interval: 15s
threshold: 10
maxWeight: 50
stepWeight: 5
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"


@@ -1,70 +0,0 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: istio
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
name: podinfo
port: 80
targetPort: 9898
portName: http
portDiscovery: true
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- app.example.com
trafficPolicy:
tls:
mode: DISABLE
match:
- uri:
prefix: /
rewrite:
uri: /
timeout: 30s
analysis:
interval: 15s
threshold: 10
iterations: 10
match:
- headers:
cookie:
regex: "^(.*?;)?(type=insider)(;.*)?$"
- headers:
user-agent:
regex: ".*Firefox.*"
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test/"


@@ -1,66 +0,0 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: istio
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
name: podinfo
port: 80
targetPort: 9898
portName: http
portDiscovery: true
gateways:
- public-gateway.istio-system.svc.cluster.local
- mesh
hosts:
- app.example.com
trafficPolicy:
tls:
mode: DISABLE
match:
- uri:
prefix: /
rewrite:
uri: /
timeout: 30s
skipAnalysis: false
analysis:
interval: 15s
threshold: 10
maxWeight: 50
stepWeight: 5
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"


@@ -1,51 +0,0 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: linkerd
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
name: podinfo
port: 80
targetPort: 9898
portName: http
portDiscovery: true
skipAnalysis: false
analysis:
interval: 15s
threshold: 10
stepWeights: [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55]
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"


@@ -1,52 +0,0 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: linkerd
progressDeadlineSeconds: 600
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: podinfo
service:
name: podinfo
port: 80
targetPort: 9898
portName: http
portDiscovery: true
skipAnalysis: false
analysis:
interval: 15s
threshold: 10
maxWeight: 50
stepWeight: 5
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: conformance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: "bash"
cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/"


@@ -1,208 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: flagger
namespace: default
labels:
app: flagger
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: flagger
labels:
app: flagger
rules:
- apiGroups:
- ""
resources:
- events
- configmaps
- configmaps/finalizers
- secrets
- secrets/finalizers
- services
- services/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
- daemonsets
- daemonsets/finalizers
- deployments
- deployments/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
- horizontalpodautoscalers/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
- ingresses/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
- metrictemplates
- metrictemplates/status
- alertproviders
- alertproviders/status
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/finalizers
- destinationrules
- destinationrules/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- appmesh.k8s.aws
resources:
- virtualnodes
- virtualnodes/finalizers
- virtualrouters
- virtualrouters/finalizers
- virtualservices
- virtualservices/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- split.smi-spec.io
resources:
- trafficsplits
- trafficsplits/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- specs.smi-spec.io
resources:
- httproutegroups
- httproutegroups/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gloo.solo.io
resources:
- upstreams
- upstreams/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gateway.solo.io
resources:
- routetables
- routetables/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- projectcontour.io
resources:
- httpproxies
- httpproxies/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- nonResourceURLs:
- /version
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: flagger
labels:
app: flagger
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flagger
subjects:
- kind: ServiceAccount
name: flagger
namespace: default


@@ -1,926 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
names:
kind: Canary
listKind: CanaryList
plural: canaries
singular: canary
categories:
- all
scope: Namespaced
versions:
- name: v1beta1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
jsonPath: .status.phase
- name: Weight
type: string
jsonPath: .status.canaryWeight
- name: FailedChecks
type: string
jsonPath: .status.failedChecks
priority: 1
- name: Interval
type: string
jsonPath: .spec.analysis.interval
priority: 1
- name: Mirror
type: boolean
jsonPath: .spec.analysis.mirror
priority: 1
- name: StepWeight
type: string
jsonPath: .spec.analysis.stepWeight
priority: 1
- name: StepWeights
type: string
jsonPath: .spec.analysis.stepWeights
priority: 1
- name: MaxWeight
type: string
jsonPath: .spec.analysis.maxWeight
priority: 1
- name: LastTransitionTime
type: string
jsonPath: .status.lastTransitionTime
schema:
openAPIV3Schema:
description: Canary is the Schema for the Canary API.
type: object
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: CanarySpec defines the desired state of a Canary.
type: object
required:
- targetRef
- service
- analysis
properties:
provider:
description: Traffic management provider
type: string
metricsServer:
description: Prometheus URL
type: string
progressDeadlineSeconds:
description: Deployment progress deadline
type: number
targetRef:
description: Target selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- DaemonSet
- Deployment
- Service
name:
type: string
autoscalerRef:
description: HPA selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- HorizontalPodAutoscaler
name:
type: string
ingressRef:
description: Ingress selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- Ingress
name:
type: string
service:
description: Kubernetes Service spec
type: object
required: ["port"]
properties:
name:
description: Kubernetes service name
type: string
port:
description: Container port number
type: number
portName:
description: Container port name
type: string
targetPort:
description: Container target port name
x-kubernetes-int-or-string: true
portDiscovery:
description: Enable port discovery
type: boolean
timeout:
description: HTTP or gRPC request timeout
type: string
meshName:
description: AppMesh mesh name
type: string
backends:
description: AppMesh backend array
type: array
items:
type: string
hosts:
description: The list of host names for this service
type: array
items:
type: string
delegation:
description: enable behaving as a delegate VirtualService
type: boolean
match:
description: URI match conditions
type: array
items:
type: object
properties:
uri:
type: object
oneOf:
- required: ["exact"]
- required: ["prefix"]
- required: ["suffix"]
- required: ["regex"]
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
suffix:
format: string
type: string
regex:
format: string
type: string
retries:
description: Retry policy for HTTP requests
type: object
properties:
attempts:
description: Number of retries for a given request
format: int32
type: integer
perTryTimeout:
description: Timeout per retry attempt for a given request
type: string
retryOn:
description: Specifies the conditions under which retry takes place
format: string
type: string
rewrite:
description: Rewrite HTTP URIs
type: object
properties:
uri:
format: string
type: string
headers:
description: Headers operations
type: object
properties:
request:
properties:
add:
additionalProperties:
format: string
type: string
type: object
remove:
items:
format: string
type: string
type: array
set:
additionalProperties:
format: string
type: string
type: object
type: object
response:
properties:
add:
additionalProperties:
format: string
type: string
type: object
remove:
items:
format: string
type: string
type: array
set:
additionalProperties:
format: string
type: string
type: object
type: object
gateways:
description: The list of Istio gateways for this virtual service
type: array
items:
type: string
corsPolicy:
description: Istio Cross-Origin Resource Sharing policy (CORS)
type: object
properties:
allowCredentials:
type: boolean
allowHeaders:
items:
format: string
type: string
type: array
allowMethods:
description: List of HTTP methods allowed to access the resource
items:
format: string
type: string
type: array
allowOrigin:
description: The list of origins that are allowed to perform
CORS requests.
items:
format: string
type: string
type: array
allowOrigins:
description: String patterns that match allowed origins
type: array
items:
type: object
oneOf:
- required:
- exact
- required:
- prefix
- required:
- regex
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
regex:
format: string
type: string
exposeHeaders:
items:
format: string
type: string
type: array
maxAge:
type: string
trafficPolicy:
description: Istio traffic policy
type: object
properties:
connectionPool:
type: object
properties:
http:
description: HTTP connection pool settings.
type: object
properties:
h2UpgradePolicy:
description: Specify if http1.1 connection should
be upgraded to http2 for the associated destination.
enum:
- DEFAULT
- DO_NOT_UPGRADE
- UPGRADE
type: string
http1MaxPendingRequests:
description: Maximum number of pending HTTP requests
to a destination.
format: int32
type: integer
http2MaxRequests:
description: Maximum number of requests to a backend.
format: int32
type: integer
idleTimeout:
description: The idle timeout for upstream connection
pool connections.
type: string
maxRequestsPerConnection:
description: Maximum number of requests per connection
to a backend.
format: int32
type: integer
maxRetries:
format: int32
type: integer
loadBalancer:
description: Settings controlling the load balancer algorithms.
type: object
oneOf:
- required:
- simple
- properties:
consistentHash:
oneOf:
- required:
- httpHeaderName
- required:
- httpCookie
- required:
- useSourceIp
- required:
- httpQueryParameterName
required:
- consistentHash
properties:
consistentHash:
properties:
httpCookie:
description: Hash based on HTTP cookie.
properties:
name:
description: Name of the cookie.
format: string
type: string
path:
description: Path to set for the cookie.
format: string
type: string
ttl:
description: Lifetime of the cookie.
type: string
type: object
httpHeaderName:
description: Hash based on a specific HTTP header.
format: string
type: string
httpQueryParameterName:
description: Hash based on a specific HTTP query parameter.
format: string
type: string
minimumRingSize:
type: integer
useSourceIp:
description: Hash based on the source IP address.
type: boolean
type: object
localityLbSetting:
properties:
distribute:
description: 'Optional: only one of distribute or
failover can be set.'
items:
properties:
from:
description: Originating locality, '/' separated,
e.g.
format: string
type: string
to:
additionalProperties:
type: integer
description: Map of upstream localities to traffic
distribution weights.
type: object
type: object
type: array
enabled:
description: enable locality load balancing, this
is DestinationRule-level and will override mesh
wide settings in entirety.
type: boolean
failover:
description: 'Optional: only failover or distribute
can be set.'
items:
properties:
from:
description: Originating region.
format: string
type: string
to:
format: string
type: string
type: object
type: array
type: object
simple:
enum:
- ROUND_ROBIN
- LEAST_CONN
- RANDOM
- PASSTHROUGH
type: string
outlierDetection:
description: Settings controlling eviction of unhealthy hosts from the load balancing pool.
type: object
properties:
baseEjectionTime:
description: Minimum ejection duration.
type: string
consecutive5xxErrors:
description: Number of 5xx errors before a host is ejected
from the connection pool.
type: integer
consecutiveErrors:
format: int32
type: integer
consecutiveGatewayErrors:
description: Number of gateway errors before a host is
ejected from the connection pool.
format: int32
type: integer
interval:
description: Time interval between ejection sweep analysis.
type: string
maxEjectionPercent:
format: int32
type: integer
minHealthPercent:
format: int32
type: integer
tls:
description: Istio TLS related settings for connections to the upstream service
type: object
properties:
caCertificates:
format: string
type: string
clientCertificate:
description: REQUIRED if mode is `MUTUAL`.
format: string
type: string
mode:
enum:
- DISABLE
- SIMPLE
- MUTUAL
- ISTIO_MUTUAL
type: string
privateKey:
description: REQUIRED if mode is `MUTUAL`.
format: string
type: string
sni:
description: SNI string to present to the server
during TLS handshake.
format: string
type: string
subjectAltNames:
items:
format: string
type: string
type: array
apex:
description: Metadata to add to the apex service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
primary:
description: Metadata to add to the primary service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
canary:
description: Metadata to add to the canary service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
skipAnalysis:
description: Skip analysis and promote canary
type: boolean
revertOnDeletion:
description: Revert mutated resources to original spec on deletion
type: boolean
analysis:
description: Canary analysis for this canary
type: object
oneOf:
- required: ["interval", "threshold", "iterations"]
- required: ["interval", "threshold", "stepWeight"]
- required: ["interval", "threshold", "stepWeights"]
properties:
interval:
description: Schedule interval for this canary
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic weight routed to canary
type: number
stepWeight:
description: Incremental traffic step weight for the analysis phase
type: number
stepWeights:
description: Incremental traffic step weights for the analysis phase
type: array
items:
type: number
stepWeightPromotion:
description: Incremental traffic step weight for the promotion phase
type: number
mirror:
description: Mirror traffic to canary
type: boolean
mirrorWeight:
description: Weight of traffic to be mirrored
type: number
match:
description: A/B testing match conditions
type: array
items:
type: object
properties:
headers:
type: object
additionalProperties:
oneOf:
- required: ["exact"]
- required: ["prefix"]
- required: ["suffix"]
- required: ["regex"]
type: object
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
suffix:
format: string
type: string
regex:
description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax)
format: string
type: string
sourceLabels:
description: Applicable only when the 'mesh' gateway is included in the service.gateways list
type: object
additionalProperties:
format: string
type: string
metrics:
description: Metric check list for this canary
type: array
items:
type: object
required: ["name"]
properties:
name:
description: Name of the metric
type: string
interval:
description: Interval of the query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max value accepted for this metric
type: number
thresholdRange:
description: Range accepted for this metric
type: object
properties:
min:
description: Min value accepted for this metric
type: number
max:
description: Max value accepted for this metric
type: number
query:
description: Prometheus query
type: string
templateRef:
description: Metric template reference
type: object
required: ["name"]
properties:
name:
description: Name of this metric template
type: string
namespace:
description: Namespace of this metric template
type: string
webhooks:
description: Webhook list for this canary
type: array
items:
type: object
required: ["name", "url"]
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
- confirm-rollout
- pre-rollout
- rollout
- confirm-promotion
- post-rollout
- event
- rollback
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
metadata:
description: Metadata (key-value pairs) for this webhook
type: object
additionalProperties:
type: string
status:
description: CanaryStatus defines the observed state of a canary.
type: object
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Promoting
- Finalising
- Succeeded
- Failed
- Terminating
- Terminated
canaryWeight:
description: Traffic weight routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
items:
type: object
required: [ "type", "status", "reason" ]
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: metrictemplates.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
names:
kind: MetricTemplate
listKind: MetricTemplateList
plural: metrictemplates
singular: metrictemplate
categories:
- all
scope: Namespaced
versions:
- name: v1beta1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Provider
type: string
jsonPath: .spec.provider.type
schema:
openAPIV3Schema:
description: MetricTemplate is the Schema for the MetricTemplates API.
type: object
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: MetricTemplateSpec defines the desired state of a MetricTemplate.
type: object
required:
- provider
- query
properties:
provider:
description: Provider of this metric template
type: object
required:
- type
properties:
type:
description: Type of this provider
type: string
enum:
- prometheus
- influxdb
- datadog
- cloudwatch
- newrelic
address:
description: API address of this provider
type: string
secretRef:
description: Kubernetes secret reference containing the provider credentials
type: object
required:
- name
properties:
name:
description: Name of the Kubernetes secret
type: string
region:
description: Region of the provider
type: string
query:
description: Query of this metric template
type: string
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: alertproviders.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
names:
kind: AlertProvider
listKind: AlertProviderList
plural: alertproviders
singular: alertprovider
categories:
- all
scope: Namespaced
versions:
- name: v1beta1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Type
type: string
jsonPath: .spec.type
schema:
openAPIV3Schema:
description: AlertProvider is the Schema for the AlertProvider API.
type: object
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: AlertProviderSpec defines the desired state of a AlertProvider.
type: object
oneOf:
- required:
- type
- address
- required:
- type
- secretRef
properties:
type:
description: Type of this provider
type: string
enum:
- slack
- msteams
- discord
- rocket
address:
description: Hook URL address of this provider
type: string
secretRef:
description: Kubernetes secret reference containing the provider address
type: object
required:
- name
properties:
name:
description: Name of the Kubernetes secret
type: string


@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: flagger
namespace: default
labels:
app: flagger
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: flagger
template:
metadata:
labels:
app: flagger
annotations:
prometheus.io/scrape: "true"
spec:
serviceAccountName: flagger
containers:
- name: flagger
image: ghcr.io/fluxcd/flagger:1.6.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
command:
- ./flagger
- -log-level=info
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=2
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=2
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
securityContext:
readOnlyRootFilesystem: true
runAsUser: 10001


@@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@@ -1,25 +0,0 @@
apiVersion: v1
name: flagger
version: 1.6.0
appVersion: 1.6.0
kubeVersion: ">=1.16.0-0"
engine: gotpl
description: Flagger is a progressive delivery operator for Kubernetes
home: https://flagger.app
icon: https://raw.githubusercontent.com/fluxcd/flagger/main/docs/logo/weaveworks.png
sources:
- https://github.com/fluxcd/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- flagger
- istio
- appmesh
- linkerd
- gloo
- contour
- nginx
- gitops
- canary


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,175 +0,0 @@
# Flagger
[Flagger](https://github.com/fluxcd/flagger) is an operator that automates the release process of applications on Kubernetes.
Flagger can run automated application analysis, testing, promotion and rollback for the following deployment strategies:
* Canary Release (progressive traffic shifting)
* A/B Testing (HTTP headers and cookies traffic routing)
* Blue/Green (traffic switching and mirroring)
Flagger works with service mesh solutions (Istio, Linkerd, AWS App Mesh) and with Kubernetes ingress controllers
(NGINX, Skipper, Gloo, Contour, Traefik).
Flagger can be configured to send alerts to various chat platforms such as Slack, Microsoft Teams, Discord and Rocket.
## Prerequisites
* Kubernetes >= 1.16
## Installing the Chart
Add Flagger Helm repository:
```console
$ helm repo add flagger https://flagger.app
```
Install Flagger's custom resource definitions:
```console
$ kubectl apply -f https://raw.githubusercontent.com/fluxcd/flagger/main/artifacts/flagger/crd.yaml
```
To install Flagger for **Istio**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set meshProvider=istio \
--set metricsServer=http://prometheus:9090
```
To install Flagger for **Linkerd**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090
```
To install Flagger for **AWS App Mesh**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=appmesh-system \
--set meshProvider=appmesh:v1beta2 \
--set metricsServer=http://appmesh-prometheus:9090
```
To install Flagger and Prometheus for **NGINX** Ingress (requires controller metrics enabled):
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=ingress-nginx \
--set meshProvider=nginx \
--set prometheus.install=true
```
To install Flagger and Prometheus for **Gloo** (requires Gloo discovery enabled):
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=gloo-system \
--set meshProvider=gloo \
--set prometheus.install=true
```
To install Flagger and Prometheus for **Contour**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=projectcontour \
--set meshProvider=contour \
--set ingressClass=contour \
--set prometheus.install=true
```
To install Flagger and Prometheus for **Traefik**:
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=traefik \
--set prometheus.install=true \
--set meshProvider=traefik
```
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
To uninstall/delete the `flagger` deployment:
```console
$ helm delete flagger
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Flagger chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | Image repository | `ghcr.io/fluxcd/flagger`
`image.tag` | Image tag | `<VERSION>`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`logLevel` | Log level | `info`
`metricsServer` | Prometheus URL, used when `prometheus.install` is `false` | `http://prometheus.istio-system:9090`
`prometheus.install` | If `true`, installs Prometheus configured to scrape all pods in the cluster | `false`
`prometheus.retention` | Prometheus data retention | `2h`
`selectorLabels` | List of labels that Flagger uses to create pod selectors | `app,name,app.kubernetes.io/name`
`configTracking.enabled` | If `true`, flagger will track changes in Secrets and ConfigMaps referenced in the target deployment | `true`
`eventWebhook` | If set, Flagger will publish events to the given webhook | None
`slack.url` | Slack incoming webhook | None
`slack.channel` | Slack channel | None
`slack.user` | Slack username | `flagger`
`msteams.url` | Microsoft Teams incoming webhook | None
`podMonitor.enabled` | If `true`, create a PodMonitor for [monitoring the metrics](https://docs.flagger.app/usage/monitoring#metrics) | `false`
`podMonitor.namespace` | Namespace where the PodMonitor is created | the same namespace
`podMonitor.interval` | Interval at which metrics should be scraped | `15s`
`podMonitor.podMonitor` | Additional labels to add to the PodMonitor | `{}`
`leaderElection.enabled` | If `true`, Flagger will run in HA mode | `false`
`leaderElection.replicaCount` | Number of replicas | `1`
`serviceAccount.create` | If `true`, Flagger will create service account | `true`
`serviceAccount.name` | The name of the service account to create or use. If not set and `serviceAccount.create` is `true`, a name is generated using the Flagger fullname | `""`
`serviceAccount.annotations` | Annotations for service account | `{}`
`ingressAnnotationsPrefix` | Annotations prefix for ingresses | `custom.ingress.kubernetes.io`
`includeLabelPrefix` | List of prefixes of labels that are copied when creating primary deployments or daemonsets. Use * to include all | `""`
`rbac.create` | If `true`, create and use RBAC resources | `true`
`rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false`
`crd.create` | If `true`, create Flagger's CRDs (should be enabled for Helm v2 only) | `false`
`resources.requests/cpu` | Pod CPU request | `10m`
`resources.requests/memory` | Pod memory request | `32Mi`
`resources.limits/cpu` | Pod CPU limit | `1000m`
`resources.limits/memory` | Pod memory limit | `512Mi`
`affinity` | Node/pod affinities | None
`nodeSelector` | Node labels for pod assignment | `{}`
`threadiness` | Number of controller workers | `2`
`tolerations` | List of node taints to tolerate | `[]`
`istio.kubeconfig.secretName` | The name of the Kubernetes secret containing the Istio shared control plane kubeconfig | None
`istio.kubeconfig.key` | The name of Kubernetes secret data key that contains the Istio control plane kubeconfig | `kubeconfig`
`ingressAnnotationsPrefix` | Annotations prefix for NGINX ingresses | None
`ingressClass` | Ingress class used for annotating HTTPProxy objects, e.g. `contour` | None
`podPriorityClassName` | PriorityClass name for pod priority configuration | ""
`podDisruptionBudget.enabled` | A PodDisruptionBudget will be created if `true` | `false`
`podDisruptionBudget.minAvailable` | The minimal number of available replicas that will be set in the PodDisruptionBudget | `1`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace flagger-system \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.channel=general
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace istio-system \
-f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)
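As a starting point, a minimal override file that uses only parameters from the table above could look like this sketch (the Slack webhook URL is a placeholder):

```yaml
# values.yaml sketch, limited to parameters documented above
metricsServer: http://prometheus:9090
slack:
  url: https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
  channel: general
  user: flagger
podMonitor:
  enabled: true
  interval: 15s
```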


@@ -1,926 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: canaries.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
names:
kind: Canary
listKind: CanaryList
plural: canaries
singular: canary
categories:
- all
scope: Namespaced
versions:
- name: v1beta1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Status
type: string
jsonPath: .status.phase
- name: Weight
type: string
jsonPath: .status.canaryWeight
- name: FailedChecks
type: string
jsonPath: .status.failedChecks
priority: 1
- name: Interval
type: string
jsonPath: .spec.analysis.interval
priority: 1
- name: Mirror
type: boolean
jsonPath: .spec.analysis.mirror
priority: 1
- name: StepWeight
type: string
jsonPath: .spec.analysis.stepWeight
priority: 1
- name: StepWeights
type: string
jsonPath: .spec.analysis.stepWeights
priority: 1
- name: MaxWeight
type: string
jsonPath: .spec.analysis.maxWeight
priority: 1
- name: LastTransitionTime
type: string
jsonPath: .status.lastTransitionTime
schema:
openAPIV3Schema:
description: Canary is the Schema for the Canary API.
type: object
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: CanarySpec defines the desired state of a Canary.
type: object
required:
- targetRef
- service
- analysis
properties:
provider:
description: Traffic management provider
type: string
metricsServer:
description: Prometheus URL
type: string
progressDeadlineSeconds:
description: Deployment progress deadline
type: number
targetRef:
description: Target selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- DaemonSet
- Deployment
- Service
name:
type: string
autoscalerRef:
description: HPA selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- HorizontalPodAutoscaler
name:
type: string
ingressRef:
description: Ingress selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- Ingress
name:
type: string
service:
description: Kubernetes Service spec
type: object
required: ["port"]
properties:
name:
description: Kubernetes service name
type: string
port:
description: Container port number
type: number
portName:
description: Container port name
type: string
targetPort:
description: Container target port name
x-kubernetes-int-or-string: true
portDiscovery:
description: Enable port discovery
type: boolean
timeout:
description: HTTP or gRPC request timeout
type: string
meshName:
description: AppMesh mesh name
type: string
backends:
description: AppMesh backend array
type: array
items:
type: string
hosts:
description: The list of host names for this service
type: array
items:
type: string
delegation:
description: enable behaving as a delegate VirtualService
type: boolean
match:
description: URI match conditions
type: array
items:
type: object
properties:
uri:
type: object
oneOf:
- required: ["exact"]
- required: ["prefix"]
- required: ["suffix"]
- required: ["regex"]
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
suffix:
format: string
type: string
regex:
format: string
type: string
retries:
description: Retry policy for HTTP requests
type: object
properties:
attempts:
description: Number of retries for a given request
format: int32
type: integer
perTryTimeout:
description: Timeout per retry attempt for a given request
type: string
retryOn:
description: Specifies the conditions under which retry takes place
format: string
type: string
rewrite:
description: Rewrite HTTP URIs
type: object
properties:
uri:
format: string
type: string
headers:
description: Headers operations
type: object
properties:
request:
properties:
add:
additionalProperties:
format: string
type: string
type: object
remove:
items:
format: string
type: string
type: array
set:
additionalProperties:
format: string
type: string
type: object
type: object
response:
properties:
add:
additionalProperties:
format: string
type: string
type: object
remove:
items:
format: string
type: string
type: array
set:
additionalProperties:
format: string
type: string
type: object
type: object
gateways:
description: The list of Istio gateways for this virtual service
type: array
items:
type: string
corsPolicy:
description: Istio Cross-Origin Resource Sharing policy (CORS)
type: object
properties:
allowCredentials:
type: boolean
allowHeaders:
items:
format: string
type: string
type: array
allowMethods:
description: List of HTTP methods allowed to access the resource
items:
format: string
type: string
type: array
allowOrigin:
description: The list of origins that are allowed to perform
CORS requests.
items:
format: string
type: string
type: array
allowOrigins:
description: String patterns that match allowed origins
type: array
items:
type: object
oneOf:
- required:
- exact
- required:
- prefix
- required:
- regex
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
regex:
format: string
type: string
exposeHeaders:
items:
format: string
type: string
type: array
maxAge:
type: string
trafficPolicy:
description: Istio traffic policy
type: object
properties:
connectionPool:
type: object
properties:
http:
description: HTTP connection pool settings.
type: object
properties:
h2UpgradePolicy:
description: Specify if http1.1 connection should
be upgraded to http2 for the associated destination.
enum:
- DEFAULT
- DO_NOT_UPGRADE
- UPGRADE
type: string
http1MaxPendingRequests:
description: Maximum number of pending HTTP requests
to a destination.
format: int32
type: integer
http2MaxRequests:
description: Maximum number of requests to a backend.
format: int32
type: integer
idleTimeout:
description: The idle timeout for upstream connection
pool connections.
type: string
maxRequestsPerConnection:
description: Maximum number of requests per connection
to a backend.
format: int32
type: integer
maxRetries:
format: int32
type: integer
loadBalancer:
description: Settings controlling the load balancer algorithms.
type: object
oneOf:
- required:
- simple
- properties:
consistentHash:
oneOf:
- required:
- httpHeaderName
- required:
- httpCookie
- required:
- useSourceIp
- required:
- httpQueryParameterName
required:
- consistentHash
properties:
consistentHash:
properties:
httpCookie:
description: Hash based on HTTP cookie.
properties:
name:
description: Name of the cookie.
format: string
type: string
path:
description: Path to set for the cookie.
format: string
type: string
ttl:
description: Lifetime of the cookie.
type: string
type: object
httpHeaderName:
description: Hash based on a specific HTTP header.
format: string
type: string
httpQueryParameterName:
description: Hash based on a specific HTTP query parameter.
format: string
type: string
minimumRingSize:
type: integer
useSourceIp:
description: Hash based on the source IP address.
type: boolean
type: object
localityLbSetting:
properties:
distribute:
description: 'Optional: only one of distribute or
failover can be set.'
items:
properties:
from:
description: Originating locality, '/' separated,
e.g.
format: string
type: string
to:
additionalProperties:
type: integer
description: Map of upstream localities to traffic
distribution weights.
type: object
type: object
type: array
enabled:
description: enable locality load balancing, this
is DestinationRule-level and will override mesh
wide settings in entirety.
type: boolean
failover:
description: 'Optional: only failover or distribute
can be set.'
items:
properties:
from:
description: Originating region.
format: string
type: string
to:
format: string
type: string
type: object
type: array
type: object
simple:
enum:
- ROUND_ROBIN
- LEAST_CONN
- RANDOM
- PASSTHROUGH
type: string
outlierDetection:
description: Settings controlling eviction of unhealthy hosts from the load balancing pool.
type: object
properties:
baseEjectionTime:
description: Minimum ejection duration.
type: string
consecutive5xxErrors:
description: Number of 5xx errors before a host is ejected
from the connection pool.
type: integer
consecutiveErrors:
format: int32
type: integer
consecutiveGatewayErrors:
description: Number of gateway errors before a host is
ejected from the connection pool.
format: int32
type: integer
interval:
description: Time interval between ejection sweep analysis.
type: string
maxEjectionPercent:
format: int32
type: integer
minHealthPercent:
format: int32
type: integer
tls:
description: Istio TLS related settings for connections to the upstream service
type: object
properties:
caCertificates:
format: string
type: string
clientCertificate:
description: REQUIRED if mode is `MUTUAL`.
format: string
type: string
mode:
enum:
- DISABLE
- SIMPLE
- MUTUAL
- ISTIO_MUTUAL
type: string
privateKey:
description: REQUIRED if mode is `MUTUAL`.
format: string
type: string
sni:
description: SNI string to present to the server
during TLS handshake.
format: string
type: string
subjectAltNames:
items:
format: string
type: string
type: array
apex:
description: Metadata to add to the apex service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
primary:
description: Metadata to add to the primary service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
canary:
description: Metadata to add to the canary service
type: object
properties:
labels:
type: object
additionalProperties:
type: string
annotations:
type: object
additionalProperties:
type: string
skipAnalysis:
description: Skip analysis and promote canary
type: boolean
revertOnDeletion:
description: Revert mutated resources to original spec on deletion
type: boolean
analysis:
description: Canary analysis for this canary
type: object
oneOf:
- required: ["interval", "threshold", "iterations"]
- required: ["interval", "threshold", "stepWeight"]
- required: ["interval", "threshold", "stepWeights"]
properties:
interval:
description: Schedule interval for this canary
type: string
pattern: "^[0-9]+(m|s)"
iterations:
description: Number of checks to run for A/B Testing and Blue/Green
type: number
threshold:
description: Max number of failed checks before rollback
type: number
maxWeight:
description: Max traffic weight routed to canary
type: number
stepWeight:
description: Incremental traffic step weight for the analysis phase
type: number
stepWeights:
description: Incremental traffic step weights for the analysis phase
type: array
items:
type: number
stepWeightPromotion:
description: Incremental traffic step weight for the promotion phase
type: number
mirror:
description: Mirror traffic to canary
type: boolean
mirrorWeight:
description: Weight of traffic to be mirrored
type: number
match:
description: A/B testing match conditions
type: array
items:
type: object
properties:
headers:
type: object
additionalProperties:
oneOf:
- required: ["exact"]
- required: ["prefix"]
- required: ["suffix"]
- required: ["regex"]
type: object
properties:
exact:
format: string
type: string
prefix:
format: string
type: string
suffix:
format: string
type: string
regex:
description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax)
format: string
type: string
sourceLabels:
description: Applicable only when the 'mesh' gateway is included in the service.gateways list
type: object
additionalProperties:
format: string
type: string
metrics:
description: Metric check list for this canary
type: array
items:
type: object
required: ["name"]
properties:
name:
description: Name of the metric
type: string
interval:
description: Interval of the query
type: string
pattern: "^[0-9]+(m|s)"
threshold:
description: Max value accepted for this metric
type: number
thresholdRange:
description: Range accepted for this metric
type: object
properties:
min:
description: Min value accepted for this metric
type: number
max:
description: Max value accepted for this metric
type: number
query:
description: Prometheus query
type: string
templateRef:
description: Metric template reference
type: object
required: ["name"]
properties:
name:
description: Name of this metric template
type: string
namespace:
description: Namespace of this metric template
type: string
webhooks:
description: Webhook list for this canary
type: array
items:
type: object
required: ["name", "url"]
properties:
name:
description: Name of the webhook
type: string
type:
description: Type of the webhook pre, post or during rollout
type: string
enum:
- ""
- confirm-rollout
- pre-rollout
- rollout
- confirm-promotion
- post-rollout
- event
- rollback
url:
description: URL address of this webhook
type: string
format: url
timeout:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
metadata:
description: Metadata (key-value pairs) for this webhook
type: object
additionalProperties:
type: string
status:
description: CanaryStatus defines the observed state of a canary.
type: object
properties:
phase:
description: Analysis phase of this canary
type: string
enum:
- ""
- Initializing
- Initialized
- Waiting
- Progressing
- Promoting
- Finalising
- Succeeded
- Failed
- Terminating
- Terminated
canaryWeight:
description: Traffic weight routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
conditions:
description: Status conditions of this canary
type: array
items:
type: object
required: [ "type", "status", "reason" ]
properties:
lastTransitionTime:
description: LastTransitionTime of this condition
format: date-time
type: string
lastUpdateTime:
description: LastUpdateTime of this condition
format: date-time
type: string
message:
description: Message associated with this condition
type: string
reason:
description: Reason for the current status of this condition
type: string
status:
description: Status of this condition
type: string
type:
description: Type of this condition
type: string
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: metrictemplates.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
names:
kind: MetricTemplate
listKind: MetricTemplateList
plural: metrictemplates
singular: metrictemplate
categories:
- all
scope: Namespaced
versions:
- name: v1beta1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Provider
type: string
jsonPath: .spec.provider.type
schema:
openAPIV3Schema:
description: MetricTemplate is the Schema for the MetricTemplates API.
type: object
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: MetricTemplateSpec defines the desired state of a MetricTemplate.
type: object
required:
- provider
- query
properties:
provider:
description: Provider of this metric template
type: object
required:
- type
properties:
type:
description: Type of this provider
type: string
enum:
- prometheus
- influxdb
- datadog
- cloudwatch
- newrelic
address:
description: API address of this provider
type: string
secretRef:
description: Kubernetes secret reference containing the provider credentials
type: object
required:
- name
properties:
name:
description: Name of the Kubernetes secret
type: string
region:
description: Region of the provider
type: string
query:
description: Query of this metric template
type: string
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: alertproviders.flagger.app
annotations:
helm.sh/resource-policy: keep
spec:
group: flagger.app
names:
kind: AlertProvider
listKind: AlertProviderList
plural: alertproviders
singular: alertprovider
categories:
- all
scope: Namespaced
versions:
- name: v1beta1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Type
type: string
jsonPath: .spec.type
schema:
openAPIV3Schema:
description: AlertProvider is the Schema for the AlertProvider API.
type: object
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: AlertProviderSpec defines the desired state of an AlertProvider.
type: object
oneOf:
- required:
- type
- address
- required:
- type
- secretRef
properties:
type:
description: Type of this provider
type: string
enum:
- slack
- msteams
- discord
- rocket
address:
description: Hook URL address of this provider
type: string
secretRef:
description: Kubernetes secret reference containing the provider address
type: object
required:
- name
properties:
name:
description: Name of the Kubernetes secret
type: string

@@ -1 +0,0 @@
Flagger installed

@@ -1,42 +0,0 @@
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "flagger.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Expand the name of the chart.
*/}}
{{- define "flagger.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "flagger.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "flagger.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "flagger.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

@@ -1,15 +0,0 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "flagger.serviceAccountName" . }}
annotations:
{{- if .Values.serviceAccount.annotations }}
{{ toYaml .Values.serviceAccount.annotations | indent 4 }}
{{- end }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

@@ -1,6 +0,0 @@
{{- if .Values.crd.create -}}
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" -}}
{{ $.Files.Get $path }}
---
{{- end -}}
{{- end -}}

@@ -1,163 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
replicas: {{ .Values.leaderElection.replicaCount }}
{{- if eq .Values.leaderElection.enabled false }}
strategy:
type: Recreate
{{- end }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
{{- if .Values.image.pullSecret }}
imagePullSecrets:
- name: {{ .Values.image.pullSecret }}
{{- end }}
volumes:
{{- if .Values.istio.kubeconfig.secretName }}
- name: kubeconfig
secret:
secretName: "{{ .Values.istio.kubeconfig.secretName }}"
{{- end }}
{{- if .Values.podPriorityClassName }}
priorityClassName: {{ .Values.podPriorityClassName }}
{{- end }}
containers:
- name: flagger
{{- if .Values.securityContext.enabled }}
securityContext:
{{ toYaml .Values.securityContext.context | indent 12 }}
{{- end }}
volumeMounts:
{{- if .Values.istio.kubeconfig.secretName }}
- name: kubeconfig
mountPath: "/tmp/istio-host"
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8080
command:
- ./flagger
- -log-level={{ .Values.logLevel }}
{{- if .Values.meshProvider }}
- -mesh-provider={{ .Values.meshProvider }}
{{- end }}
{{- if .Values.prometheus.install }}
- -metrics-server=http://{{ template "flagger.fullname" . }}-prometheus:9090
{{- else }}
- -metrics-server={{ .Values.metricsServer }}
{{- end }}
{{- if .Values.selectorLabels }}
- -selector-labels={{ .Values.selectorLabels }}
{{- end }}
{{- if .Values.configTracking }}
- -enable-config-tracking={{ .Values.configTracking.enabled }}
{{- end }}
{{- if .Values.namespace }}
- -namespace={{ .Values.namespace }}
{{- end }}
{{- if .Values.slack.url }}
- -slack-url={{ .Values.slack.url }}
{{- end }}
{{- if .Values.slack.user }}
- -slack-user={{ .Values.slack.user }}
{{- end }}
{{- if .Values.slack.channel }}
- -slack-channel={{ .Values.slack.channel }}
{{- end }}
{{- if .Values.msteams.url }}
- -msteams-url={{ .Values.msteams.url }}
{{- end }}
{{- if .Values.leaderElection.enabled }}
- -enable-leader-election=true
- -leader-election-namespace={{ .Release.Namespace }}
{{- end }}
{{- if .Values.ingressAnnotationsPrefix }}
- -ingress-annotations-prefix={{ .Values.ingressAnnotationsPrefix }}
{{- end }}
{{- if .Values.includeLabelPrefix }}
- -include-label-prefix={{ .Values.includeLabelPrefix }}
{{- end }}
{{- if .Values.ingressClass }}
- -ingress-class={{ .Values.ingressClass }}
{{- end }}
{{- if .Values.eventWebhook }}
- -event-webhook={{ .Values.eventWebhook }}
{{- end }}
{{- if .Values.kubeconfigQPS }}
- -kubeconfig-qps={{ .Values.kubeconfigQPS }}
{{- end }}
{{- if .Values.kubeconfigBurst }}
- -kubeconfig-burst={{ .Values.kubeconfigBurst }}
{{- end }}
{{- if .Values.istio.kubeconfig.secretName }}
- -kubeconfig-service-mesh=/tmp/istio-host/{{ .Values.istio.kubeconfig.key }}
{{- end }}
{{- if .Values.threadiness }}
- -threadiness={{ .Values.threadiness }}
{{- end }}
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
{{- if .Values.env }}
env:
{{ toYaml .Values.env | indent 12 }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}

@@ -1,11 +0,0 @@
{{- if .Values.podDisruptionBudget.enabled }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "flagger.name" . }}
spec:
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
{{- end }}

@@ -1,27 +0,0 @@
{{- if .Values.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- range $k, $v := .Values.podMonitor.additionalLabels }}
{{ $k }}: {{ $v | quote }}
{{- end }}
name: {{ include "flagger.fullname" . }}
namespace: {{ .Values.podMonitor.namespace | default .Release.Namespace }}
spec:
podMetricsEndpoints:
- interval: {{ .Values.podMonitor.interval }}
path: /metrics
port: http
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

@@ -1,284 +0,0 @@
{{- if .Values.prometheus.install }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups: [""]
resources:
- nodes
- services
- endpoints
- pods
- nodes/proxy
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "flagger.fullname" . }}-prometheus
subjects:
- kind: ServiceAccount
name: {{ template "flagger.serviceAccountName" . }}-prometheus
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "flagger.serviceAccountName" . }}-prometheus
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
data:
prometheus.yml: |-
global:
scrape_interval: 5s
scrape_configs:
# Scrape config for AppMesh Envoy sidecar
- job_name: 'appmesh-envoy'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_name]
action: keep
regex: '^envoy$'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: ${1}:9901
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
# Exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
# Scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# scrape config for cAdvisor
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# exclude high cardinality metrics
metric_relabel_configs:
- source_labels: [__name__]
regex: (container|machine)_(cpu|memory|network|fs)_(.+)
action: keep
- source_labels: [__name__]
regex: container_memory_failures_total
action: drop
# scrape config for pods
- job_name: kubernetes-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- source_labels: [ __address__ ]
regex: '.*9901.*'
action: drop
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: kubernetes_namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: kubernetes_pod_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}-prometheus
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "flagger.name" . }}-prometheus
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
appmesh.k8s.aws/sidecarInjectorWebhook: disabled
sidecar.istio.io/inject: "false"
spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}-prometheus
containers:
- name: prometheus
image: {{ .Values.prometheus.image }}
imagePullPolicy: IfNotPresent
args:
- '--storage.tsdb.retention={{ .Values.prometheus.retention }}'
- '--config.file=/etc/prometheus/prometheus.yml'
ports:
- containerPort: 9090
name: http
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
resources:
requests:
cpu: 10m
memory: 128Mi
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
- name: data-volume
mountPath: /prometheus/data
volumes:
- name: config-volume
configMap:
name: {{ template "flagger.fullname" . }}-prometheus
- name: data-volume
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: {{ template "flagger.fullname" . }}-prometheus
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
selector:
app.kubernetes.io/name: {{ template "flagger.name" . }}-prometheus
app.kubernetes.io/instance: {{ .Release.Name }}
ports:
- name: http
protocol: TCP
port: 9090
{{- end }}

@@ -1,66 +0,0 @@
{{- if .Values.rbac.pspEnabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: false
hostIPC: false
hostNetwork: false
hostPID: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
allowedCapabilities:
- '*'
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "flagger.fullname" . }}-psp
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "flagger.fullname" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "flagger.fullname" . }}-psp
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "flagger.fullname" . }}-psp
subjects:
- kind: ServiceAccount
name: {{ template "flagger.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

@@ -1,220 +0,0 @@
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups:
- ""
resources:
- events
- configmaps
- configmaps/finalizers
- secrets
- secrets/finalizers
- services
- services/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
- daemonsets
- daemonsets/finalizers
- deployments
- deployments/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
- horizontalpodautoscalers/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
- ingresses/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- flagger.app
resources:
- canaries
- canaries/status
- metrictemplates
- metrictemplates/status
- alertproviders
- alertproviders/status
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- networking.istio.io
resources:
- virtualservices
- virtualservices/finalizers
- destinationrules
- destinationrules/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- appmesh.k8s.aws
resources:
- virtualnodes
- virtualnodes/finalizers
- virtualrouters
- virtualrouters/finalizers
- virtualservices
- virtualservices/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- split.smi-spec.io
resources:
- trafficsplits
- trafficsplits/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- specs.smi-spec.io
resources:
- httproutegroups
- httproutegroups/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gloo.solo.io
resources:
- upstreams
- upstreams/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gateway.solo.io
resources:
- routetables
- routetables/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- projectcontour.io
resources:
- httpproxies
- httpproxies/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- traefik.containo.us
resources:
- traefikservices
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- nonResourceURLs:
- /version
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "flagger.fullname" . }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "flagger.fullname" . }}
subjects:
- name: {{ template "flagger.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
kind: ServiceAccount
{{- end }}

@@ -1,144 +0,0 @@
# Default values for flagger.
image:
repository: ghcr.io/fluxcd/flagger
tag: 1.6.0
pullPolicy: IfNotPresent
pullSecret:
# accepted values are debug, info, warning, error (defaults to info)
logLevel: info
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
appmesh.k8s.aws/sidecarInjectorWebhook: disabled
# priority class name for pod priority configuration
podPriorityClassName: ""
metricsServer: "http://prometheus:9090"
# accepted values are kubernetes, istio, linkerd, appmesh, contour, nginx, gloo, skipper, traefik
meshProvider: ""
# single namespace restriction
namespace: ""
# list of pod labels that Flagger uses to create pod selectors
# defaults to: app,name,app.kubernetes.io/name
selectorLabels: ""
# when enabled, flagger will track changes in Secrets and ConfigMaps referenced in the target deployment (enabled by default)
configTracking:
enabled: true
# annotations prefix for NGINX ingresses
ingressAnnotationsPrefix: ""
# ingress class used for annotating HTTPProxy objects
ingressClass: ""
# when enabled, it will add a security context for the flagger pod. You may
# need to disable this if you are running flagger on OpenShift
securityContext:
enabled: true
context:
readOnlyRootFilesystem: true
runAsUser: 10001
# when specified, flagger will publish events to the provided webhook
eventWebhook: ""
slack:
user: flagger
channel:
# incoming webhook https://api.slack.com/incoming-webhooks
url:
msteams:
# MS Teams incoming webhook URL
url:
podMonitor:
enabled: false
namespace:
interval: 15s
additionalLabels: {}
#env:
#- name: SLACK_URL
# valueFrom:
# secretKeyRef:
# name: slack
# key: url
#- name: MSTEAMS_URL
# valueFrom:
# secretKeyRef:
# name: msteams
# key: url
#- name: EVENT_WEBHOOK_URL
# valueFrom:
# secretKeyRef:
# name: eventwebhook
# key: url
env: []
leaderElection:
enabled: false
replicaCount: 1
serviceAccount:
# serviceAccount.create: Whether to create a service account or not
create: true
# serviceAccount.name: The name of the service account to create or use
name: ""
# serviceAccount.annotations: Annotations for service account
annotations: {}
rbac:
# rbac.create: `true` if rbac resources should be created
create: true
# rbac.pspEnabled: `true` if PodSecurityPolicy resources should be created
pspEnabled: false
crd:
# crd.create: `true` if custom resource definitions should be created
create: false
nameOverride: ""
fullnameOverride: ""
resources:
limits:
memory: "512Mi"
cpu: "1000m"
requests:
memory: "32Mi"
cpu: "10m"
nodeSelector: {}
tolerations: []
prometheus:
# to be used with ingress controllers
install: false
image: docker.io/prom/prometheus:v2.23.0
retention: 2h
kubeconfigQPS: ""
kubeconfigBurst: ""
# Istio multi-cluster service mesh (shared control plane single-network)
# https://istio.io/docs/setup/install/multicluster/shared-vpn/
istio:
kubeconfig:
# istio.kubeconfig.secretName: The name of the secret containing the Istio control plane kubeconfig
secretName: ""
# istio.kubeconfig.key: The name of secret data key that contains the Istio control plane kubeconfig
key: "kubeconfig"
podDisruptionBudget:
enabled: false
minAvailable: 1

@@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

@@ -1,20 +0,0 @@
apiVersion: v1
name: grafana
version: 1.5.0
appVersion: 7.2.0
description: Grafana dashboards for monitoring Flagger canary deployments
icon: https://raw.githubusercontent.com/fluxcd/flagger/main/docs/logo/weaveworks.png
home: https://flagger.app
sources:
- https://github.com/fluxcd/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- flagger
- grafana
- canary
- istio
- appmesh

@@ -1,81 +0,0 @@
# Flagger Grafana
Grafana dashboards for monitoring progressive deployments powered by Flagger and Prometheus.
![flagger-grafana](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/screens/grafana-canary-analysis.png)
## Prerequisites
* Kubernetes >= 1.11
* Prometheus >= 2.6
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart for Istio run:
```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus:9090
```
To install the chart for AWS App Mesh run:
```console
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=appmesh-system \
--set url=http://appmesh-prometheus:9090
```
The command deploys Grafana on the Kubernetes cluster in the specified namespace.
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
To uninstall/delete the `flagger-grafana` deployment:
```console
helm delete --purge flagger-grafana
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Grafana chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | Image repository | `grafana/grafana`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`image.tag` | Image tag | `<VERSION>`
`replicaCount` | desired number of pods | `1`
`resources` | pod resources | `none`
`tolerations` | List of node taints to tolerate | `[]`
`affinity` | node/pod affinities | `{}`
`nodeSelector` | node labels for pod assignment | `{}`
`service.type` | type of service | `ClusterIP`
`url` | Prometheus URL | `http://prometheus:9090`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
helm install flagger/grafana --name flagger-grafana \
--set token=WEAVE-CLOUD-TOKEN
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install flagger/grafana --name flagger-grafana -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)
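As a sketch, an override file for this chart could pin the Prometheus URL and the admin credentials; the address and password below are placeholders to be replaced for your environment.
```yaml
# values.yaml -- example overrides for the Grafana chart (illustrative placeholders)
url: http://prometheus.istio-system:9090
user: admin
password: change-me
replicaCount: 1
service:
  type: ClusterIP
  port: 80
```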

File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,7 +0,0 @@
1. Run the port forward command:
kubectl -n {{ .Release.Namespace }} port-forward svc/{{ .Release.Name }} 3000:80
2. Navigate to:
http://localhost:3000

@@ -1,32 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "grafana.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "grafana.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "grafana.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

@@ -1,6 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" . }}-dashboards
data:
{{ (.Files.Glob "dashboards/*").AsConfig | indent 2 }}

@@ -1,32 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" . }}-datasources
data:
datasources.yaml: |-
apiVersion: 1
deleteDatasources:
- name: prometheus
{{- if .Values.token }}
datasources:
- name: prometheus
type: prometheus
access: proxy
url: https://cloud.weave.works/api/prom
isDefault: true
editable: true
version: 1
basicAuth: true
basicAuthUser: weave
basicAuthPassword: {{ .Values.token }}
{{- else }}
datasources:
- name: prometheus
type: prometheus
access: proxy
url: {{ .Values.url }}
isDefault: true
editable: true
version: 1
{{- end }}

@@ -1,93 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "grafana.fullname" . }}
labels:
app: {{ template "grafana.fullname" . }}
chart: {{ template "grafana.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}
annotations:
prometheus.io/scrape: 'false'
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 3000
protocol: TCP
# livenessProbe:
# httpGet:
# path: /
# port: http
# readinessProbe:
# httpGet:
# path: /
# port: http
env:
- name: GF_PATHS_PROVISIONING
value: /etc/grafana/provisioning/
{{- if .Values.password }}
- name: GF_SECURITY_ADMIN_USER
value: {{ .Values.user }}
- name: GF_SECURITY_ADMIN_PASSWORD
value: {{ .Values.password }}
{{- else }}
- name: GF_AUTH_BASIC_ENABLED
value: "false"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Admin
{{- end }}
volumeMounts:
- name: grafana
mountPath: /var/lib/grafana
- name: dashboards
mountPath: /etc/grafana/dashboards
- name: datasources
mountPath: /etc/grafana/provisioning/datasources
- name: providers
mountPath: /etc/grafana/provisioning/dashboards
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: grafana
emptyDir: {}
- name: dashboards
configMap:
name: {{ template "grafana.fullname" . }}-dashboards
- name: providers
configMap:
name: {{ template "grafana.fullname" . }}-providers
- name: datasources
configMap:
name: {{ template "grafana.fullname" . }}-datasources

@@ -1,17 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "grafana.fullname" . }}-providers
data:
providers.yaml: |+
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
editable: true
options:
path: /etc/grafana/dashboards

@@ -1,19 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "grafana.fullname" . }}
labels:
app: {{ template "grafana.name" . }}
chart: {{ template "grafana.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}

@@ -1,39 +0,0 @@
# Default values for grafana.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: grafana/grafana
tag: 7.3.4
pullPolicy: IfNotPresent
podAnnotations: {}
service:
type: ClusterIP
port: 80
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
user: admin
password:
# Prometheus instance
url: http://prometheus:9090
# Weave Cloud instance token
token:

@@ -1,22 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,23 +0,0 @@
apiVersion: v1
name: loadtester
version: 0.18.0
appVersion: 0.18.0
kubeVersion: ">=1.11.0-0"
engine: gotpl
description: Flagger's load testing service based on rakyll/hey and bojand/ghz that generates traffic during canary analysis when configured as a webhook.
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/fluxcd/flagger/main/docs/logo/weaveworks.png
sources:
- https://github.com/fluxcd/flagger
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com
keywords:
- flagger
- istio
- appmesh
- linkerd
- gloo
- gitops
- load testing

@@ -1,91 +0,0 @@
# Flagger load testing service
[Flagger's](https://github.com/fluxcd/flagger) load testing service is based on
[rakyll/hey](https://github.com/rakyll/hey) and
[bojand/ghz](https://github.com/bojand/ghz).
It can be used to generate HTTP and gRPC traffic during canary analysis when configured as a webhook.
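As an illustration, a Canary resource can call the load tester during analysis through a `rollout` webhook. The sketch below assumes the tester is installed in a `test` namespace and targets a hypothetical `podinfo-canary` service; adjust the URL and command to your setup.
```yaml
# Excerpt from a Canary spec (namespace, service name and command are assumptions)
analysis:
  webhooks:
    - name: load-test
      type: rollout
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```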
## Prerequisites
* Kubernetes >= 1.11
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart with the release name `flagger-loadtester`:
```console
helm upgrade -i flagger-loadtester flagger/loadtester
```
The command deploys loadtester on the Kubernetes cluster in the default namespace.
> **Tip**: Note that the namespace where you deploy the load tester should
> have the Istio, App Mesh or Linkerd sidecar injection enabled
The [configuration](#configuration) section lists the parameters that can be configured during installation.
## Uninstalling the Chart
To uninstall/delete the `flagger-loadtester` deployment:
```console
helm delete flagger-loadtester
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the load tester chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | Image repository | `quay.io/stefanprodan/flagger-loadtester`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`image.tag` | Image tag | `<VERSION>`
`replicaCount` | Desired number of pods | `1`
`serviceAccountName` | Kubernetes service account name | `none`
`resources.requests.cpu` | CPU requests | `10m`
`resources.requests.memory` | Memory requests | `64Mi`
`tolerations` | List of node taints to tolerate | `[]`
`affinity` | node/pod affinities | `{}`
`nodeSelector` | Node labels for pod assignment | `{}`
`service.type` | Type of service | `ClusterIP`
`service.port` | ClusterIP port | `80`
`cmd.timeout` | Command execution timeout | `1h`
`logLevel` | Log level can be debug, info, warning, error or panic | `info`
`appmesh.enabled` | Create AWS App Mesh v1beta2 virtual node | `false`
`appmesh.backends` | AWS App Mesh virtual services | `none`
`istio.enabled` | Create Istio virtual service | `false`
`istio.host` | Loadtester hostname | `flagger-loadtester.flagger`
`istio.gateway.enabled` | Create Istio gateway in namespace | `false`
`istio.tls.enabled` | Enable TLS in gateway (TLS secrets should be in namespace) | `false`
`istio.tls.httpsRedirect` | Redirect traffic to TLS port | `false`
`podPriorityClassName` | PriorityClass name for pod priority configuration | ""
`securityContext.enabled` | Add securityContext to container | ""
`securityContext.context` | securityContext to add | ""
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,
```console
helm upgrade -i flagger-loadtester flagger/loadtester \
--set "appmesh.enabled=true" \
--set "appmesh.backends[0]=podinfo" \
--set "appmesh.backends[1]=podinfo-canary"
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm install flagger/loadtester --name flagger-loadtester -f values.yaml
```
> **Tip**: You can use the default [values.yaml](values.yaml)
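The `--set` flags from the App Mesh example above translate into an override file such as this sketch (the backend names are placeholders):
```yaml
# values.yaml -- example overrides for the load tester chart (illustrative placeholders)
appmesh:
  enabled: true
  backends:
    - podinfo
    - podinfo-canary
cmd:
  timeout: 1h
logLevel: info
```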

@@ -1 +0,0 @@
Flagger's load testing service is available at http://{{ include "loadtester.fullname" . }}.{{ .Release.Namespace }}/

@@ -1,32 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "loadtester.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "loadtester.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "loadtester.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

@@ -1,27 +0,0 @@
{{- if .Values.appmesh.enabled }}
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
podSelector:
matchLabels:
app: {{ include "loadtester.name" . }}
logging:
accessLog:
file:
path: /dev/stdout
{{- if .Values.appmesh.backends }}
backends:
{{- range .Values.appmesh.backends }}
- virtualService:
virtualServiceRef:
name: {{ . }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,82 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ include "loadtester.name" . }}
template:
metadata:
labels:
app: {{ include "loadtester.name" . }}
annotations:
appmesh.k8s.aws/ports: "444"
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.serviceAccountName }}
serviceAccountName: {{ .Values.serviceAccountName }}
{{- else if .Values.rbac.create }}
serviceAccountName: {{ include "loadtester.fullname" . }}
{{- end }}
{{- if .Values.podPriorityClassName }}
priorityClassName: {{ .Values.podPriorityClassName }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
{{- if .Values.securityContext.enabled }}
securityContext:
{{ toYaml .Values.securityContext.context | indent 12 }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8080
command:
- ./loadtester
- -port=8080
- -log-level={{ .Values.logLevel }}
- -timeout={{ .Values.cmd.timeout }}
livenessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
readinessProbe:
exec:
command:
- wget
- --quiet
- --tries=1
- --timeout=4
- --spider
- http://localhost:8080/healthz
timeoutSeconds: 5
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -1,30 +0,0 @@
{{- if and (.Values.istio.enabled) (.Values.istio.gateway.enabled) }}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: {{ include "loadtester.fullname" . }}
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http-default
protocol: HTTP
hosts:
- {{ .Values.istio.host }}
{{- if .Values.istio.tls.enabled }}
- port:
number: 443
name: https-default
protocol: HTTPS
tls:
httpsRedirect: {{ .Values.istio.tls.httpsRedirect }}
mode: SIMPLE
serverCertificate: "sds"
privateKey: "sds"
credentialName: {{ include "loadtester.fullname" . }}
hosts:
- {{ .Values.istio.host }}
{{- end }}
{{- end }}

View File

@@ -1,17 +0,0 @@
{{- if .Values.istio.enabled }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ include "loadtester.fullname" . }}
spec:
gateways:
- {{ include "loadtester.fullname" . }}
hosts:
- {{ .Values.istio.host }}
http:
- route:
- destination:
host: {{ include "loadtester.fullname" . }}
port:
number: {{ .Values.service.port }}
{{- end }}

View File

@@ -1,54 +0,0 @@
---
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
{{- if eq .Values.rbac.scope "cluster" }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
metadata:
name: {{ template "loadtester.fullname" . }}
labels:
helm.sh/chart: {{ template "loadtester.chart" . }}
app.kubernetes.io/name: {{ template "loadtester.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
{{ toYaml .Values.rbac.rules | indent 2 }}
---
apiVersion: rbac.authorization.k8s.io/v1
{{- if eq .Values.rbac.scope "cluster" }}
kind: ClusterRoleBinding
{{- else }}
kind: RoleBinding
{{- end }}
metadata:
name: {{ template "loadtester.fullname" . }}
labels:
helm.sh/chart: {{ template "loadtester.chart" . }}
app.kubernetes.io/name: {{ template "loadtester.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
{{- if eq .Values.rbac.scope "cluster" }}
kind: ClusterRole
{{- else }}
kind: Role
{{- end }}
name: {{ template "loadtester.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "loadtester.fullname" . }}
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "loadtester.fullname" . }}
labels:
helm.sh/chart: {{ template "loadtester.chart" . }}
app.kubernetes.io/name: {{ template "loadtester.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ include "loadtester.name" . }}

View File

@@ -1,27 +0,0 @@
{{- if .Values.meshName }}
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
name: {{ include "loadtester.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "loadtester.name" . }}
helm.sh/chart: {{ include "loadtester.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
meshName: {{ .Values.meshName }}
listeners:
- portMapping:
port: 444
protocol: http
serviceDiscovery:
dns:
hostName: {{ include "loadtester.fullname" . }}.{{ .Release.Namespace }}
{{- if .Values.backends }}
backends:
{{- range .Values.backends }}
- virtualService:
virtualServiceName: {{ . }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,81 +0,0 @@
replicaCount: 1
image:
repository: ghcr.io/fluxcd/flagger-loadtester
tag: 0.18.0
pullPolicy: IfNotPresent
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
podPriorityClassName: ""
logLevel: info
cmd:
timeout: 1h
nameOverride: ""
fullnameOverride: ""
service:
type: ClusterIP
port: 80
resources:
requests:
cpu: 10m
memory: 64Mi
nodeSelector: {}
tolerations: []
affinity: {}
rbac:
# rbac.create: `true` if rbac resources should be created
create: false
# rbac.scope: `cluster` to create cluster-scope rbac resources (ClusterRole/ClusterRoleBinding)
# otherwise, namespace-scope rbac resources will be created (Role/RoleBinding)
scope:
# rbac.rules: array of rules to apply to the role. example:
# rules:
# - apiGroups: [""]
# resources: ["pods"]
# verbs: ["list", "get"]
rules: []
# name of an existing service account to use - if not creating rbac resources
serviceAccountName: ""
# App Mesh virtual node settings (to be used for AppMesh v1beta1)
meshName: ""
#backends:
# - app1.namespace
# - app2.namespace
# App Mesh virtual node settings (to be used for AppMesh v1beta2)
appmesh:
enabled: false
backends:
- podinfo
- podinfo-canary
# Istio virtual service and gateway settings. The TLS secret must exist in the namespace before enabling TLS (secret name format: loadtester.fullname)
istio:
enabled: false
host: flagger-loadtester.flagger
gateway:
enabled: false
tls:
enabled: false
httpsRedirect: false
# when enabled, it will add a security context for the loadtester pod
securityContext:
enabled: false
context:
readOnlyRootFilesystem: true
runAsUser: 100
runAsGroup: 101

View File

@@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@@ -1,14 +0,0 @@
apiVersion: v1
version: 5.0.0
appVersion: 5.0.0
name: podinfo
engine: gotpl
description: Flagger canary deployment demo application
home: https://docs.flagger.app
icon: https://raw.githubusercontent.com/fluxcd/flagger/main/docs/logo/weaveworks.png
sources:
- https://github.com/stefanprodan/podinfo
maintainers:
- name: stefanprodan
url: https://github.com/stefanprodan
email: stefanprodan@users.noreply.github.com

View File

@@ -1,79 +0,0 @@
# Podinfo
Podinfo is a tiny web application made with Go
that showcases best practices for running canary deployments with Flagger and Istio.
## Installing the Chart
Add Flagger Helm repository:
```console
helm repo add flagger https://flagger.app
```
To install the chart with the release name `frontend`:
```console
helm upgrade -i frontend flagger/podinfo \
--namespace test \
--set nameOverride=frontend \
--set backend=http://backend.test:9898/echo \
--set canary.enabled=true \
--set canary.istioIngress.enabled=true \
--set canary.istioIngress.gateway=public-gateway.istio-system.svc.cluster.local \
--set canary.istioIngress.host=frontend.istio.example.com
```
To install the chart as `backend`:
```console
helm upgrade -i backend flagger/podinfo \
--namespace test \
--set nameOverride=backend \
--set canary.enabled=true
```
## Uninstalling the Chart
To uninstall the `frontend` release:
```console
helm uninstall frontend -n test
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the podinfo chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | image repository | `ghcr.io/stefanprodan/podinfo`
`image.tag` | image tag | `5.0.0`
`image.pullPolicy` | image pull policy | `IfNotPresent`
`hpa.enabled` | enables HPA | `true`
`hpa.cpu` | target CPU usage per pod | `80`
`hpa.memory` | target memory usage per pod | `512Mi`
`hpa.minReplicas` | minimum pod replicas | `2`
`hpa.maxReplicas` | maximum pod replicas | `4`
`resources.requests/cpu` | pod CPU request | `100m`
`resources.requests/memory` | pod memory request | `32Mi`
`backend` | backend URL | None
`faults.delay` | random HTTP response delays between 0 and 5 seconds | `false`
`faults.error` | 1 in 3 chance of returning a random HTTP response error | `false`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,
```console
helm upgrade -i frontend flagger/podinfo \
  --set image.tag=1.4.1,hpa.enabled=false
```
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
```console
helm upgrade -i frontend flagger/podinfo -f values.yaml
```
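For instance, a hypothetical `values.yaml` mirroring the `frontend` install shown earlier could look like the sketch below (only keys that appear in this chart's values.yaml are used; the backend URL, gateway and host are illustrative):
```yaml
# values.yaml (sketch) for a frontend release with canary analysis enabled
nameOverride: frontend
backend: http://backend.test:9898/echo
canary:
  enabled: true
  istioIngress:
    enabled: true
    gateway: public-gateway.istio-system.svc.cluster.local
    host: frontend.istio.example.com
```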

View File

@@ -1 +0,0 @@
podinfo {{ .Release.Name }} deployed!

View File

@@ -1,43 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "podinfo.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "podinfo.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "podinfo.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name suffix.
*/}}
{{- define "podinfo.suffix" -}}
{{- if .Values.canary.enabled -}}
{{- "-primary" -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}

View File

@@ -1,60 +0,0 @@
{{- if .Values.canary.enabled }}
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ template "podinfo.fullname" . }}
autoscalerRef:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
name: {{ template "podinfo.fullname" . }}
service:
port: {{ .Values.service.port }}
{{- if .Values.canary.istioIngress.enabled }}
gateways:
- {{ .Values.canary.istioIngress.gateway }}
hosts:
- {{ .Values.canary.istioIngress.host }}
{{- end }}
trafficPolicy:
tls:
mode: {{ .Values.canary.istioTLS }}
analysis:
interval: {{ .Values.canary.analysis.interval }}
threshold: {{ .Values.canary.analysis.threshold }}
maxWeight: {{ .Values.canary.analysis.maxWeight }}
stepWeight: {{ .Values.canary.analysis.stepWeight }}
metrics:
- name: request-success-rate
threshold: {{ .Values.canary.thresholds.successRate }}
interval: 1m
- name: request-duration
threshold: {{ .Values.canary.thresholds.latency }}
interval: 1m
webhooks:
{{- if .Values.canary.helmtest.enabled }}
- name: "helm test"
type: pre-rollout
url: {{ .Values.canary.helmtest.url }}
timeout: 3m
metadata:
type: "helmv3"
cmd: "test {{ .Release.Name }} -n {{ .Release.Namespace }}"
{{- end }}
{{- if .Values.canary.loadtest.enabled }}
- name: load-test-get
url: {{ .Values.canary.loadtest.url }}
timeout: 5s
metadata:
cmd: "hey -z 1m -q 5 -c 2 http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}"
{{- end }}
{{- end }}

View File

@@ -1,15 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
config.yaml: |-
# http settings
http-client-timeout: 1m
http-server-timeout: {{ .Values.httpServer.timeout }}
http-server-shutdown-timeout: 5s

View File

@@ -1,101 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ template "podinfo.fullname" . }}
template:
metadata:
labels:
app: {{ template "podinfo.fullname" . }}
annotations:
prometheus.io/scrape: 'true'
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfo
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- ./podinfo
- --port={{ .Values.service.port }}
- --level={{ .Values.logLevel }}
- --random-delay={{ .Values.faults.delay }}
- --random-error={{ .Values.faults.error }}
- --config-path=/podinfo/config
{{- range .Values.backends }}
- --backend-url={{ . }}
{{- end }}
env:
- name: PODINFO_UI_COLOR
value: "#34577c"
{{- if .Values.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.message }}
{{- end }}
{{- if .Values.backend }}
- name: PODINFO_BACKEND_URL
value: {{ .Values.backend }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.port }}/healthz
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.port }}/readyz
initialDelaySeconds: 5
timeoutSeconds: 5
volumeMounts:
- name: data
mountPath: /data
- name: config
mountPath: /podinfo/config
readOnly: true
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: {{ template "podinfo.fullname" . }}

View File

@@ -1,31 +0,0 @@
{{- if .Values.hpa.enabled -}}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ template "podinfo.fullname" . }}
minReplicas: {{ .Values.hpa.minReplicas }}
maxReplicas: {{ .Values.hpa.maxReplicas }}
metrics:
{{- if .Values.hpa.cpu }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.hpa.cpu }}
{{- end }}
{{- if .Values.hpa.memory }}
- type: Resource
resource:
name: memory
targetAverageValue: {{ .Values.hpa.memory }}
{{- end }}
{{- end }}

View File

@@ -1,20 +0,0 @@
{{- if not .Values.canary.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "podinfo.fullname" . }}
labels:
app: {{ template "podinfo.name" . }}
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ template "podinfo.fullname" . }}
{{- end }}

View File

@@ -1,29 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ template "podinfo.fullname" . }}-jwt-test-{{ randAlphaNum 5 | lower }}
labels:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "podinfo.name" . }}
annotations:
"helm.sh/hook": test-success
sidecar.istio.io/inject: "false"
linkerd.io/inject: disabled
appmesh.k8s.aws/sidecarInjectorWebhook: disabled
spec:
containers:
- name: tools
image: giantswarm/tiny-tools
command:
- sh
- -c
- |
TOKEN=$(curl -sd 'test' ${PODINFO_SVC}/token | jq -r .token) &&
curl -H "Authorization: Bearer ${TOKEN}" ${PODINFO_SVC}/token/validate | grep test
env:
- name: PODINFO_SVC
value: {{ template "podinfo.fullname" . }}:{{ .Values.service.port }}
restartPolicy: Never

View File

@@ -1,83 +0,0 @@
# Default values for podinfo.
image:
repository: ghcr.io/stefanprodan/podinfo
tag: 5.0.0
pullPolicy: IfNotPresent
podAnnotations: {}
service:
enabled: false
type: ClusterIP
port: 9898
hpa:
enabled: true
minReplicas: 2
maxReplicas: 4
cpu: 80
memory: 512Mi
canary:
enabled: false
# Istio traffic policy tls can be DISABLE or ISTIO_MUTUAL
istioTLS: DISABLE
istioIngress:
enabled: false
# Istio ingress gateway name
gateway: public-gateway.istio-system.svc.cluster.local
# external host name, e.g. podinfo.example.com
host:
analysis:
# schedule interval (default 60s)
interval: 15s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
thresholds:
# minimum req success rate (non 5xx responses)
# percentage (0-100)
successRate: 99
# maximum req duration P99
# milliseconds
latency: 500
loadtest:
enabled: false
# load tester address
url: http://flagger-loadtester.test/
helmtest:
enabled: false
# helm tester address
url: http://flagger-helmtester.kube-system/
resources:
limits:
requests:
cpu: 100m
memory: 32Mi
nodeSelector: {}
tolerations: []
affinity: {}
nameOverride: ""
fullnameOverride: ""
logLevel: info
backend: #http://backend-podinfo:9898/echo
backends: []
message: #UI greetings
faults:
delay: false
error: false
httpServer:
timeout: 30s

View File

@@ -1,419 +0,0 @@
/*
Copyright 2020 The Flux authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"context"
"flag"
"fmt"
"log"
"os"
"strings"
"time"
"github.com/Masterminds/semver/v3"
"github.com/go-logr/zapr"
"go.uber.org/zap"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/uuid"
"k8s.io/client-go/kubernetes"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/leaderelection"
"k8s.io/client-go/tools/leaderelection/resourcelock"
"k8s.io/client-go/transport"
_ "k8s.io/code-generator/cmd/client-gen/generators"
"k8s.io/klog/v2"
"github.com/fluxcd/flagger/pkg/canary"
clientset "github.com/fluxcd/flagger/pkg/client/clientset/versioned"
informers "github.com/fluxcd/flagger/pkg/client/informers/externalversions"
"github.com/fluxcd/flagger/pkg/controller"
"github.com/fluxcd/flagger/pkg/logger"
"github.com/fluxcd/flagger/pkg/metrics/observers"
"github.com/fluxcd/flagger/pkg/notifier"
"github.com/fluxcd/flagger/pkg/router"
"github.com/fluxcd/flagger/pkg/server"
"github.com/fluxcd/flagger/pkg/signals"
"github.com/fluxcd/flagger/pkg/version"
)
var (
masterURL string
kubeconfig string
kubeconfigQPS int
kubeconfigBurst int
metricsServer string
controlLoopInterval time.Duration
logLevel string
port string
msteamsURL string
includeLabelPrefix string
slackURL string
slackUser string
slackChannel string
eventWebhook string
threadiness int
zapReplaceGlobals bool
zapEncoding string
namespace string
meshProvider string
selectorLabels string
ingressAnnotationsPrefix string
ingressClass string
enableLeaderElection bool
leaderElectionNamespace string
enableConfigTracking bool
ver bool
kubeconfigServiceMesh string
)
func init() {
flag.StringVar(&kubeconfig, "kubeconfig", "", "Path to a kubeconfig. Only required if out-of-cluster.")
flag.IntVar(&kubeconfigQPS, "kubeconfig-qps", 100, "Set QPS for kubeconfig.")
flag.IntVar(&kubeconfigBurst, "kubeconfig-burst", 250, "Set Burst for kubeconfig.")
flag.StringVar(&masterURL, "master", "", "The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.")
flag.StringVar(&metricsServer, "metrics-server", "http://prometheus:9090", "Prometheus URL.")
flag.DurationVar(&controlLoopInterval, "control-loop-interval", 10*time.Second, "Kubernetes API sync interval.")
flag.StringVar(&logLevel, "log-level", "debug", "Log level can be: debug, info, warning, error.")
flag.StringVar(&port, "port", "8080", "Port to listen on.")
flag.StringVar(&slackURL, "slack-url", "", "Slack hook URL.")
flag.StringVar(&slackUser, "slack-user", "flagger", "Slack user name.")
flag.StringVar(&slackChannel, "slack-channel", "", "Slack channel.")
flag.StringVar(&eventWebhook, "event-webhook", "", "Webhook for publishing flagger events")
flag.StringVar(&msteamsURL, "msteams-url", "", "MS Teams incoming webhook URL.")
flag.StringVar(&includeLabelPrefix, "include-label-prefix", "", "List of prefixes of labels that are copied when creating primary deployments or daemonsets. Use * to include all.")
flag.IntVar(&threadiness, "threadiness", 2, "Worker concurrency.")
flag.BoolVar(&zapReplaceGlobals, "zap-replace-globals", false, "Whether to change the logging level of the global zap logger.")
flag.StringVar(&zapEncoding, "zap-encoding", "json", "Zap logger encoding.")
flag.StringVar(&namespace, "namespace", "", "Namespace that flagger would watch canary object.")
flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, linkerd, appmesh, contour, gloo, nginx, skipper or traefik.")
flag.StringVar(&selectorLabels, "selector-labels", "app,name,app.kubernetes.io/name", "List of pod labels that Flagger uses to create pod selectors.")
flag.StringVar(&ingressAnnotationsPrefix, "ingress-annotations-prefix", "nginx.ingress.kubernetes.io", "Annotations prefix for NGINX ingresses.")
flag.StringVar(&ingressClass, "ingress-class", "", "Ingress class used for annotating HTTPProxy objects.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false, "Enable leader election.")
flag.StringVar(&leaderElectionNamespace, "leader-election-namespace", "kube-system", "Namespace used to create the leader election config map.")
flag.BoolVar(&enableConfigTracking, "enable-config-tracking", true, "Enable secrets and configmaps tracking.")
flag.BoolVar(&ver, "version", false, "Print version")
flag.StringVar(&kubeconfigServiceMesh, "kubeconfig-service-mesh", "", "Path to a kubeconfig for the service mesh control plane cluster.")
}
func main() {
klog.InitFlags(nil)
flag.Parse()
if ver {
fmt.Println("Flagger version", version.VERSION, "revision ", version.REVISION)
os.Exit(0)
}
logger, err := logger.NewLoggerWithEncoding(logLevel, zapEncoding)
if err != nil {
log.Fatalf("Error creating logger: %v", err)
}
if zapReplaceGlobals {
zap.ReplaceGlobals(logger.Desugar())
}
klog.SetLogger(zapr.NewLogger(logger.Desugar()))
defer logger.Sync()
stopCh := signals.SetupSignalHandler()
logger.Infof("Starting flagger version %s revision %s mesh provider %s", version.VERSION, version.REVISION, meshProvider)
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
logger.Fatalf("Error building kubeconfig: %v", err)
}
cfg.QPS = float32(kubeconfigQPS)
cfg.Burst = kubeconfigBurst
kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
logger.Fatalf("Error building kubernetes clientset: %v", err)
}
flaggerClient, err := clientset.NewForConfig(cfg)
if err != nil {
logger.Fatalf("Error building flagger clientset: %s", err.Error())
}
// use a remote cluster for routing if a service mesh kubeconfig is specified
if kubeconfigServiceMesh == "" {
kubeconfigServiceMesh = kubeconfig
}
cfgHost, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfigServiceMesh)
if err != nil {
logger.Fatalf("Error building host kubeconfig: %v", err)
}
cfgHost.QPS = float32(kubeconfigQPS)
cfgHost.Burst = kubeconfigBurst
meshClient, err := clientset.NewForConfig(cfgHost)
if err != nil {
logger.Fatalf("Error building mesh clientset: %v", err)
}
verifyCRDs(flaggerClient, logger)
verifyKubernetesVersion(kubeClient, logger)
infos := startInformers(flaggerClient, logger, stopCh)
labels := strings.Split(selectorLabels, ",")
if len(labels) < 1 {
logger.Fatalf("At least one selector label is required")
}
if namespace != "" {
logger.Infof("Watching namespace %s", namespace)
}
observerFactory, err := observers.NewFactory(metricsServer)
if err != nil {
logger.Fatalf("Error building prometheus client: %s", err.Error())
}
ok, err := observerFactory.Client.IsOnline()
if ok {
logger.Infof("Connected to metrics server %s", metricsServer)
} else {
logger.Errorf("Metrics server %s unreachable %v", metricsServer, err)
}
// setup Slack or MS Teams notifications
notifierClient := initNotifier(logger)
// start HTTP server
go server.ListenAndServe(port, 3*time.Second, logger, stopCh)
routerFactory := router.NewFactory(cfg, kubeClient, flaggerClient, ingressAnnotationsPrefix, ingressClass, logger, meshClient)
var configTracker canary.Tracker
if enableConfigTracking {
configTracker = &canary.ConfigTracker{
Logger: logger,
KubeClient: kubeClient,
FlaggerClient: flaggerClient,
}
} else {
configTracker = &canary.NopTracker{}
}
includeLabelPrefixArray := strings.Split(includeLabelPrefix, ",")
canaryFactory := canary.NewFactory(kubeClient, flaggerClient, configTracker, labels, includeLabelPrefixArray, logger)
c := controller.NewController(
kubeClient,
flaggerClient,
infos,
controlLoopInterval,
logger,
notifierClient,
canaryFactory,
routerFactory,
observerFactory,
meshProvider,
version.VERSION,
fromEnv("EVENT_WEBHOOK_URL", eventWebhook),
)
// leader election context
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// prevents new requests when leadership is lost
cfg.Wrap(transport.ContextCanceller(ctx, fmt.Errorf("the leader is shutting down")))
// cancel leader election context on shutdown signals
go func() {
<-stopCh
cancel()
}()
// wrap controller run
runController := func() {
if err := c.Run(threadiness, stopCh); err != nil {
logger.Fatalf("Error running controller: %v", err)
}
}
// run controller when this instance wins the leader election
if enableLeaderElection {
ns := leaderElectionNamespace
if namespace != "" {
ns = namespace
}
startLeaderElection(ctx, runController, ns, kubeClient, logger)
} else {
runController()
}
}
func startInformers(flaggerClient clientset.Interface, logger *zap.SugaredLogger, stopCh <-chan struct{}) controller.Informers {
flaggerInformerFactory := informers.NewSharedInformerFactoryWithOptions(flaggerClient, time.Second*30, informers.WithNamespace(namespace))
logger.Info("Waiting for canary informer cache to sync")
canaryInformer := flaggerInformerFactory.Flagger().V1beta1().Canaries()
go canaryInformer.Informer().Run(stopCh)
if ok := cache.WaitForNamedCacheSync("flagger", stopCh, canaryInformer.Informer().HasSynced); !ok {
logger.Fatalf("failed to wait for cache to sync")
}
logger.Info("Waiting for metric template informer cache to sync")
metricInformer := flaggerInformerFactory.Flagger().V1beta1().MetricTemplates()
go metricInformer.Informer().Run(stopCh)
if ok := cache.WaitForNamedCacheSync("flagger", stopCh, metricInformer.Informer().HasSynced); !ok {
logger.Fatalf("failed to wait for cache to sync")
}
logger.Info("Waiting for alert provider informer cache to sync")
alertInformer := flaggerInformerFactory.Flagger().V1beta1().AlertProviders()
go alertInformer.Informer().Run(stopCh)
if ok := cache.WaitForNamedCacheSync("flagger", stopCh, alertInformer.Informer().HasSynced); !ok {
logger.Fatalf("failed to wait for cache to sync")
}
return controller.Informers{
CanaryInformer: canaryInformer,
MetricInformer: metricInformer,
AlertInformer: alertInformer,
}
}
func startLeaderElection(ctx context.Context, run func(), ns string, kubeClient kubernetes.Interface, logger *zap.SugaredLogger) {
configMapName := "flagger-leader-election"
id, err := os.Hostname()
if err != nil {
logger.Fatalf("Error running controller: %v", err)
}
id = id + "_" + string(uuid.NewUUID())
lock, err := resourcelock.New(
resourcelock.ConfigMapsResourceLock,
ns,
configMapName,
kubeClient.CoreV1(),
kubeClient.CoordinationV1(),
resourcelock.ResourceLockConfig{
Identity: id,
},
)
if err != nil {
logger.Fatalf("Error running controller: %v", err)
}
logger.Infof("Starting leader election id: %s configmap: %s namespace: %s", id, configMapName, ns)
leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
Lock: lock,
ReleaseOnCancel: true,
LeaseDuration: 60 * time.Second,
RenewDeadline: 15 * time.Second,
RetryPeriod: 5 * time.Second,
Callbacks: leaderelection.LeaderCallbacks{
OnStartedLeading: func(ctx context.Context) {
logger.Info("Acting as elected leader")
run()
},
OnStoppedLeading: func() {
logger.Infof("Leadership lost")
os.Exit(1)
},
OnNewLeader: func(identity string) {
if identity != id {
logger.Infof("Another instance has been elected as leader: %v", identity)
}
},
},
})
}
func initNotifier(logger *zap.SugaredLogger) (client notifier.Interface) {
provider := "slack"
notifierURL := fromEnv("SLACK_URL", slackURL)
if msteamsURL != "" || os.Getenv("MSTEAMS_URL") != "" {
provider = "msteams"
notifierURL = fromEnv("MSTEAMS_URL", msteamsURL)
}
notifierFactory := notifier.NewFactory(notifierURL, slackUser, slackChannel)
var err error
client, err = notifierFactory.Notifier(provider)
if err != nil {
logger.Errorf("Notifier %v", err)
} else if len(notifierURL) > 30 {
logger.Infof("Notifications enabled for %s", notifierURL[0:30])
}
return
}
func fromEnv(envVar string, defaultVal string) string {
if v := os.Getenv(envVar); v != "" {
return v
}
return defaultVal
}
func verifyCRDs(flaggerClient clientset.Interface, logger *zap.SugaredLogger) {
_, err := flaggerClient.FlaggerV1beta1().Canaries(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
if err != nil {
logger.Fatalf("Canary CRD is not registered %v", err)
}
_, err = flaggerClient.FlaggerV1beta1().MetricTemplates(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
if err != nil {
logger.Fatalf("MetricTemplate CRD is not registered %v", err)
}
_, err = flaggerClient.FlaggerV1beta1().AlertProviders(namespace).List(context.TODO(), metav1.ListOptions{Limit: 1})
if err != nil {
logger.Fatalf("AlertProvider CRD is not registered %v", err)
}
}
func verifyKubernetesVersion(kubeClient kubernetes.Interface, logger *zap.SugaredLogger) {
ver, err := kubeClient.Discovery().ServerVersion()
if err != nil {
logger.Fatalf("Error calling Kubernetes API: %v", err)
}
k8sVersionConstraint := "^1.11.0"
// We append -alpha.1 to the end of our version constraint so that prebuilds of later versions
// are considered valid for our purposes, as well as some managed solutions like EKS where they provide
a version like `v1.12.6-eks-d69f1b`. It doesn't matter what the prerelease value is here, just that it
// exists in our constraint.
semverConstraint, err := semver.NewConstraint(k8sVersionConstraint + "-alpha.1")
if err != nil {
logger.Fatalf("Error parsing kubernetes version constraint: %v", err)
}
k8sSemver, err := semver.NewVersion(ver.GitVersion)
if err != nil {
logger.Fatalf("Error parsing kubernetes version as a semantic version: %v", err)
}
if !semverConstraint.Check(k8sSemver) {
logger.Fatalf("Unsupported version of kubernetes detected. Expected %s, got %v", k8sVersionConstraint, ver)
}
logger.Infof("Connected to Kubernetes API %s", ver)
}

View File

@@ -1,70 +0,0 @@
/*
Copyright 2020 The Flux authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"flag"
"log"
"time"
"github.com/fluxcd/flagger/pkg/loadtester"
"github.com/fluxcd/flagger/pkg/logger"
"github.com/fluxcd/flagger/pkg/signals"
"go.uber.org/zap"
)
var VERSION = "0.18.0"
var (
logLevel string
port string
timeout time.Duration
zapReplaceGlobals bool
zapEncoding string
)
func init() {
flag.StringVar(&logLevel, "log-level", "debug", "Log level can be: debug, info, warning, error.")
flag.StringVar(&port, "port", "9090", "Port to listen on.")
flag.DurationVar(&timeout, "timeout", time.Hour, "Load test exec timeout.")
flag.BoolVar(&zapReplaceGlobals, "zap-replace-globals", false, "Whether to change the logging level of the global zap logger.")
flag.StringVar(&zapEncoding, "zap-encoding", "json", "Zap logger encoding.")
}
func main() {
flag.Parse()
logger, err := logger.NewLoggerWithEncoding(logLevel, zapEncoding)
if err != nil {
log.Fatalf("Error creating logger: %v", err)
}
if zapReplaceGlobals {
zap.ReplaceGlobals(logger.Desugar())
}
defer logger.Sync()
stopCh := signals.SetupSignalHandler()
taskRunner := loadtester.NewTaskRunner(logger, timeout)
go taskRunner.Start(100*time.Millisecond, stopCh)
logger.Infof("Starting load tester v%s API on port %s", VERSION, port)
gateStorage := loadtester.NewGateStorage("in-memory")
loadtester.ListenAndServe(port, time.Minute, logger, taskRunner, gateStorage, stopCh)
}

docs/.vuepress/config.js (new file, 23 lines)
View File

@@ -0,0 +1,23 @@
module.exports = {
title: 'Flagger',
description: 'Progressive Delivery operator for Kubernetes (Canary, A/B Testing and Blue/Green deployments)',
themeConfig: {
search: true,
activeHeaderLinks: false,
docsDir: 'docs',
repo: 'fluxcd/flagger',
nav: [
{ text: 'Docs', link: 'https://docs.flagger.app' },
{ text: 'Changelog', link: 'https://github.com/fluxcd/flagger/blob/main/CHANGELOG.md' }
]
},
head: [
['link', { rel: 'icon', href: '/favicon.png' }],
['link', { rel: 'stylesheet', href: '/website.css' }],
['meta', { name: 'keywords', content: 'gitops kubernetes flagger flux istio linkerd appmesh contour gloo nginx skipper traefik' }],
['meta', { name: 'twitter:card', content: 'summary_large_image' }],
['meta', { name: 'twitter:title', content: 'Flagger' }],
['meta', { name: 'twitter:description', content: 'Progressive delivery Kubernetes operator (Canary, A/B Testing and Blue/Green deployments)' }],
['meta', { name: 'twitter:image:src', content: 'https://flagger.app/flagger-overview.png' }]
]
};

Binary files not shown (4 images added: 24 KiB, 7.5 KiB, 188 KiB, 245 KiB).

View File

@@ -0,0 +1,12 @@
.icon.outbound {
display: none !important;
}
.site-name {
padding-left: 30px;
position: relative;
background: url(favicon.png) left 50% no-repeat;
background-size: 20px 20px;
}
/*.theme-container .theme-default-content:not(.custom) {*/
/* max-width: 920px;*/
/*}*/

docs/README.md (new file, 100 lines)
View File

@@ -0,0 +1,100 @@
---
title: Flagger
home: true
heroText: Flagger
tagline: Progressive Delivery Operator for Kubernetes
actionText: Get Started →
actionLink: https://docs.flagger.app
features:
- title: Safer Releases
details: Reduce the risk of introducing a new software version in production by gradually shifting traffic to the new version while measuring metrics like HTTP/gRPC request success rate and latency.
- title: Flexible Traffic Routing
details: Shift and route traffic between app versions automatically using an ingress controller or a service mesh compatible with Kubernetes Gateway API.
- title: Extensible Validation
details: Besides the builtin metrics checks, you can extend your application analysis with custom metrics and webhooks for running acceptance tests, load tests, or any other custom validation.
footer: Apache License 2.0 | Copyright © 2018-2025 The Flux authors
---
## Progressive Delivery
Flagger was designed to give developers confidence in automating production releases with progressive delivery techniques.
::: tip Canary release
A benefit of using canary releases is the ability to do capacity testing of the new version in a production environment
with a safe rollback strategy if issues are found. By slowly ramping up the load, you can monitor and capture metrics
about how the new version impacts the production environment.
[Martin Fowler](https://martinfowler.com/bliki/CanaryRelease.html)
:::
Flagger can run automated application analysis, testing, promotion and rollback for the following deployment strategies:
* **Canary** (progressive traffic shifting with session affinity)
* [Istio](https://docs.flagger.app/tutorials/istio-progressive-delivery),
[Linkerd](https://docs.flagger.app/tutorials/linkerd-progressive-delivery),
[Kuma Service Mesh](https://docs.flagger.app/tutorials/kuma-progressive-delivery),
[Gateway API](https://docs.flagger.app/tutorials/gatewayapi-progressive-delivery)
* [Contour](https://docs.flagger.app/tutorials/contour-progressive-delivery),
[Gloo](https://docs.flagger.app/tutorials/gloo-progressive-delivery),
[NGINX](https://docs.flagger.app/tutorials/nginx-progressive-delivery),
[Skipper](https://docs.flagger.app/tutorials/skipper-progressive-delivery),
[Traefik](https://docs.flagger.app/tutorials/traefik-progressive-delivery),
[Apache APISIX](https://docs.flagger.app/tutorials/apisix-progressive-delivery),
[Knative](https://docs.flagger.app/tutorials/knative-progressive-delivery)
* **A/B Testing** (HTTP headers and cookies traffic routing)
* [Istio](https://docs.flagger.app/tutorials/istio-ab-testing),
[Gateway API](https://docs.flagger.app/tutorials/gatewayapi-progressive-delivery#a-b-testing),
[Contour](https://docs.flagger.app/tutorials/contour-progressive-delivery#a-b-testing),
[NGINX](https://docs.flagger.app/tutorials/nginx-progressive-delivery#a-b-testing)
* **Blue/Green** (traffic switching and mirroring)
* [Kubernetes CNI](https://docs.flagger.app/tutorials/kubernetes-blue-green),
[Istio](https://docs.flagger.app/tutorials/istio-progressive-delivery#traffic-mirroring),
Linkerd, Kuma, Contour, Gloo, NGINX, Skipper, Traefik, Apache APISIX
Flagger's application analysis can be extended with metric queries targeting Prometheus, Datadog,
CloudWatch, New Relic, Graphite, Dynatrace, InfluxDB and Google Cloud Monitoring.
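To make this concrete, a canary release is declared with a `Canary` custom resource. The minimal sketch below (values are illustrative, mirroring the podinfo chart in this repository) shifts traffic to the new version in 5% steps up to 50% while checking the built-in request success rate and latency metrics:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # deployment that Flagger manages and gradually promotes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m        # how often the analysis runs
    threshold: 10       # max failed checks before rollback
    maxWeight: 50       # max traffic percentage routed to the canary
    stepWeight: 5       # traffic increment per step
    metrics:
      - name: request-success-rate
        threshold: 99   # minimum success rate (non-5xx), percent
        interval: 1m
      - name: request-duration
        threshold: 500  # maximum request duration, milliseconds
        interval: 1m
```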
Flagger can be configured to [send notifications](https://docs.flagger.app/usage/alerting) to
Slack, Microsoft Teams, Discord and Rocket.
It will post messages when a deployment has been initialised,
when a new revision has been detected, and when the canary analysis fails or succeeds.
## GitOps
![GitOps with Flagger and Flux](/flagger-gitops.png)
You can build fully automated GitOps pipelines for canary deployments with Flagger and
[Flux](https://github.com/fluxcd/flux2).
::: tip GitOps
GitOps is a way to do Kubernetes cluster management and application delivery.
It works by using Git as a single source of truth for declarative infrastructure and applications.
With Git at the center of your delivery pipelines, developers can make pull requests
to accelerate and simplify application deployments and operations tasks to Kubernetes.
:::
## Getting Help
If you have any questions about Flagger and progressive delivery:
* Read the Flagger [docs](https://docs.flagger.app).
* Invite yourself to the [CNCF community slack](https://slack.cncf.io/)
and join the [#flagger](https://cloud-native.slack.com/messages/flagger/) channel.
* Check out the [Flux talks section](https://fluxcd.io/community/#talks) to see a list of online talks,
hands-on training and meetups.
* File an [issue](https://github.com/fluxcd/flagger/issues/new).
Your feedback is always welcome!
## License
Flagger is [Apache 2.0](https://raw.githubusercontent.com/fluxcd/flagger/main/LICENSE)
licensed and accepts contributions via GitHub pull requests.
Flagger was initially developed in 2018 at Weaveworks by Stefan Prodan.
In 2020 Flagger became a [Cloud Native Computing Foundation](https://cncf.io/) project,
part of the [Flux](https://fluxcd.io) family of GitOps tools.
[![CNCF](/cncf.png)](https://cncf.io/)

Binary files not shown (8 images removed: 30 KiB, 31 KiB, 220 KiB, 49 KiB, 36 KiB, 39 KiB, 40 KiB, 130 KiB).

Some files were not shown because too many files have changed in this diff.