Compare commits

...

76 Commits

Author SHA1 Message Date
Stefan Prodan
02e5f233d0 Release v1.2.0 2018-09-11 22:18:58 +03:00
Stefan Prodan
b89f46ac04 Add websocket client command to CLI 2018-09-11 22:14:59 +03:00
Stefan Prodan
59cd692141 Add websocket echo handler 2018-09-11 22:13:54 +03:00
Stefan Prodan
bcd61428d1 Import gorilla/websocket 2018-09-11 22:12:37 +03:00
Stefan Prodan
f8ec9c0947 Fix multi-arch docker push 2018-09-11 18:04:13 +03:00
Stefan Prodan
6c98fbf1f4 Add JWT token issue and validate handlers 2018-09-10 11:36:11 +03:00
Stefan Prodan
54f6d9f74d Add env handler 2018-09-10 01:29:49 +03:00
Stefan Prodan
1d35304d9d OpenfaaS canary deployments 2018-09-09 16:38:37 +03:00
Stefan Prodan
457a56f71a Publish podinfo CLI to GitHub with goreleaser 2018-09-09 12:38:23 +03:00
Stefan Prodan
fbcab6cf56 Upgrade to go 1.11 and alpine 3.8 2018-09-09 12:37:58 +03:00
Stefan Prodan
0126282669 add goreleaser 2018-09-08 19:05:37 +03:00
Stefan Prodan
ff1fb39f43 Release v1.1.0
- add podinfo CLI to Quay docker image
- use podinfo CLI for health checks (Istio mTLS support)
2018-09-08 11:38:48 +03:00
Stefan Prodan
84f0e1c9e2 Add CLI check certificate 2018-09-07 15:26:07 +03:00
Stefan Prodan
3eb4cc90f9 Add CLI to Quay docker image 2018-09-07 14:55:24 +03:00
Stefan Prodan
b6c3d36bde Add CLI check tcp command 2018-09-07 14:54:55 +03:00
Stefan Prodan
a8a85e6aae Add CLI version cmd 2018-09-07 14:54:35 +03:00
Stefan Prodan
79b2d784bf Add OpenFaaS Istio guide WIP 2018-09-07 13:38:39 +03:00
Stefan Prodan
bfd35f6cc0 Make health checks compatible with Istio mTLS 2018-09-07 13:38:18 +03:00
Stefan Prodan
f1775ba090 Add podinfo CLI WIP 2018-09-07 13:37:40 +03:00
Stefan Prodan
7a2d59de8e Add OpenFaaS Istio mTLS and policies 2018-09-05 15:38:53 +03:00
Stefan Prodan
8191871761 Add Istio A/B test dashboard 2018-09-05 15:38:24 +03:00
Stefan Prodan
36bb719b1c Bump version to 1.0.1 2018-08-27 16:26:30 +01:00
Stefan Prodan
ecd204b15e Release v1.0.0 2018-08-22 00:57:53 +03:00
Stefan Prodan
979fd669df Use gorilla mux route name as Prometheus path label 2018-08-21 15:19:21 +03:00
Stefan Prodan
feac686e60 Release v1.0.0-beta.1 2018-08-21 12:03:34 +03:00
Stefan Prodan
d362dc5f81 Set env var prefix to PODINFO 2018-08-21 11:58:37 +03:00
Stefan Prodan
593ccaa0cd Add random delay and errors middleware 2018-08-21 03:12:20 +03:00
Stefan Prodan
0f098cf0f1 Add config file support 2018-08-21 02:02:47 +03:00
Stefan Prodan
2ddbc03371 Replace zerolog with zap 2018-08-21 02:01:26 +03:00
Stefan Prodan
f2d95bbf80 Add logging middleware and log level option 2018-08-20 17:03:07 +03:00
Stefan Prodan
7d18ec68b3 Use pflag, viper and zap 2018-08-20 11:30:18 +03:00
Stefan Prodan
774d34c1dd Rewrite HTTP server with gorilla mux 2018-08-20 11:29:11 +03:00
Stefan Prodan
f13d006993 Add Kubernetes probes handlers 2018-08-20 11:28:06 +03:00
Stefan Prodan
aeeb146c2a Add UI handler 2018-08-20 11:27:40 +03:00
Stefan Prodan
11bd74eff2 Add local storage read/write handler 2018-08-20 11:27:08 +03:00
Stefan Prodan
af6d11fd33 Add panic handler 2018-08-20 11:26:24 +03:00
Stefan Prodan
49746fe2fb Add fscache reader handler 2018-08-20 11:26:08 +03:00
Stefan Prodan
da24d729bb Add runtime info handler 2018-08-20 11:25:36 +03:00
Stefan Prodan
449fcca3a9 Add HTTP status code handler 2018-08-20 11:25:15 +03:00
Stefan Prodan
2b0a742974 Add echo headers handler 2018-08-20 11:24:49 +03:00
Stefan Prodan
153f4dce45 Add echo handler with backend propagation 2018-08-20 11:24:23 +03:00
Stefan Prodan
4c8d11cc3e Add delay handler 2018-08-20 11:23:45 +03:00
Stefan Prodan
08415ce2ce Add version handler 2018-08-20 11:23:13 +03:00
Stefan Prodan
d26b7a96d9 Add UI index handler 2018-08-20 11:22:48 +03:00
Stefan Prodan
3c897b8bd7 Rename git commit to revision 2018-08-20 11:21:51 +03:00
Stefan Prodan
511ab87a18 Update deps for v1.0 2018-08-20 11:20:56 +03:00
Stefan Prodan
21922197b5 Add resource usage to blue/green dashboard 2018-08-18 14:22:35 +03:00
Stefan Prodan
7ea943525f Add Helm chart for load testing 2018-08-17 18:45:52 +03:00
Stefan Prodan
57ff4465cd Add Istio Blue/Green Grafana dashboard 2018-08-17 17:25:04 +03:00
Stefan Prodan
a86ef1fdb6 Add frontend, backend and store chart values
- add Istio virtual service weight for blue/green
2018-08-17 15:41:23 +03:00
Stefan Prodan
ddf1b80e1b Log backend errors 2018-08-17 15:38:35 +03:00
Stefan Prodan
896aceb240 Add Helm chart for Istio canary deployments and A/B testing 2018-08-16 15:24:04 +03:00
Stefan Prodan
7996f76e71 Release v0.6.1
- update page title when hostname changes
2018-08-16 15:21:26 +03:00
Stefan Prodan
8b04a8f502 Remove old charts 2018-08-16 15:20:21 +03:00
Stefan Prodan
8a6a4e8901 Release v0.6
- Helm chart: use quay image, add color env var, rename backend env var, adjust deployment strategy and set liveness probe to 2s
2018-08-16 00:09:02 +03:00
Stefan Prodan
cf8531c224 Move ping to api/echo 2018-08-16 00:05:32 +03:00
Stefan Prodan
d1574a6601 Decrease Istio HTTP 503 errors with preStop 2018-08-15 19:42:08 +03:00
Stefan Prodan
75d93e0c54 Inject delay and failures for the orange backend 2018-08-15 13:37:40 +03:00
Stefan Prodan
7622dfb74f Add store service 2018-08-15 12:28:03 +03:00
Stefan Prodan
85a26ed71e Add X-Api-Version header
- inject version header for backend calls
- route frontend calls to backend based on API version
2018-08-15 11:16:20 +03:00
Stefan Prodan
81b22f08f8 Add instrumentation list 2018-08-15 11:14:59 +03:00
Stefan Prodan
7d9e3afde7 Beta release v0.6.0-beta.10 2018-08-14 16:41:58 +03:00
Stefan Prodan
3d2028a124 Display hostname as title 2018-08-14 16:41:14 +03:00
Stefan Prodan
1b56648f5b Enable HTTPS redirect in Istio gateway 2018-08-14 16:04:44 +03:00
Stefan Prodan
3a704215a4 Move the public gateway to istio-system ns
- expose Jaeger and Grafana
2018-08-14 15:57:07 +03:00
Stefan Prodan
25aaeff13c Ignore DS_Store 2018-08-14 13:33:36 +03:00
Stefan Prodan
3b93a3445e Make message and color configurable via env vars 2018-08-14 13:21:35 +03:00
Stefan Prodan
a6cc3d2ef9 Reload page when version changes and use fetch API for backend calls 2018-08-14 13:20:05 +03:00
Stefan Prodan
718d8ba4e0 Get external IP from httpbin.org 2018-08-14 11:24:22 +03:00
Stefan Prodan
24ceb25930 Beta release v0.6.0-beta.2 2018-08-13 14:56:13 +03:00
Stefan Prodan
fc8dfc7678 Add Istio Gateway manifests 2018-08-13 14:55:27 +03:00
Stefan Prodan
8e656fdfd0 Add UI/API response and forward OpenTracing headers to backend 2018-08-13 14:54:46 +03:00
Stefan Prodan
a945842e9b Add VueJS UI 2018-08-13 14:52:49 +03:00
Stefan Prodan
09a743f5c2 Add CPU and Memory stress test flags 2018-08-10 11:48:12 +03:00
Stefan Prodan
c44a58602e Release v0.5.1 2018-08-08 12:17:05 +03:00
Stefan Prodan
2ee11bf6b2 Remove deleted files from cache instead of clearing the whole cache 2018-08-08 12:14:26 +03:00
695 changed files with 97360 additions and 12163 deletions

.gitignore (vendored), 4 changed lines

@@ -10,8 +10,12 @@
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
.DS_Store
# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
.glide/
.idea/
release/
build/
gcloud/
dist/

.goreleaser.yml (new file), 21 lines

@@ -0,0 +1,21 @@
builds:
- main: ./cmd/podcli
binary: podcli
ldflags: -s -w -X github.com/stefanprodan/k8s-podinfo/pkg/version.REVISION={{.Commit}}
goos:
- windows
- darwin
- linux
goarch:
- amd64
env:
- CGO_ENABLED=0
ignore:
- goos: darwin
goarch: 386
- goos: windows
goarch: 386
archive:
name_template: "{{ .Binary }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
files:
- none*


@@ -2,7 +2,7 @@ sudo: required
language: go
go:
- 1.9.x
- 1.11.x
services:
- docker
@@ -14,19 +14,10 @@ addons:
before_install:
- make dep
# - curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
# - mkdir -p .bin; mv ./kubectl .bin/kubectl && chmod +x .bin/kubectl
# - export PATH="$TRAVIS_BUILD_DIR/.bin:$PATH"
# - wget https://cdn.rawgit.com/Mirantis/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.8.sh && chmod +x dind-cluster-v1.8.sh && ./dind-cluster-v1.8.sh up
# - export PATH="$HOME/.kubeadm-dind-cluster:$PATH"
script:
- make test
- make build docker-build
# - kubectl get nodes
# - kubectl run podinfo --image=podinfo:latest --port=9898
# - sleep 5
# - kubectl get pods
after_success:
- if [ -z "$DOCKER_USER" ]; then
@@ -41,3 +32,10 @@ after_success:
echo $QUAY_PASS | docker login -u $QUAY_USER --password-stdin quay.io;
make quay-push;
fi
deploy:
- provider: script
skip_cleanup: true
script: curl -sL http://git.io/goreleaser | bash
on:
tags: true


@@ -6,7 +6,7 @@ RUN addgroup -S app \
curl openssl netcat-openbsd
WORKDIR /home/app
COPY ./ui ./ui
ADD podinfo .
RUN chown -R app:app ./


@@ -1,5 +1,6 @@
FROM alpine:3.7
COPY ./ui ./ui
ADD podinfo /podinfo
CMD ["./podinfo"]


@@ -1,4 +1,4 @@
FROM golang:1.9 as builder
FROM golang:1.11 as builder
RUN mkdir -p /go/src/github.com/stefanprodan/k8s-podinfo/
@@ -11,10 +11,14 @@ RUN go test $(go list ./... | grep -v integration | grep -v /vendor/ | grep -v /
RUN gofmt -l -d $(find . -type f -name '*.go' -not -path "./vendor/*") && \
GIT_COMMIT=$(git rev-list -1 HEAD) && \
CGO_ENABLED=0 GOOS=linux go build -ldflags "-s -w \
-X github.com/stefanprodan/k8s-podinfo/pkg/version.GITCOMMIT=${GIT_COMMIT}" \
-X github.com/stefanprodan/k8s-podinfo/pkg/version.REVISION=${GIT_COMMIT}" \
-a -installsuffix cgo -o podinfo ./cmd/podinfo
FROM alpine:3.7
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags "-s -w \
-X github.com/stefanprodan/k8s-podinfo/pkg/version.REVISION=${GIT_COMMIT}" \
-a -installsuffix cgo -o podcli ./cmd/podcli
FROM alpine:3.8
RUN addgroup -S app \
&& adduser -S -g app app \
@@ -24,7 +28,8 @@ RUN addgroup -S app \
WORKDIR /home/app
COPY --from=builder /go/src/github.com/stefanprodan/k8s-podinfo/podinfo .
COPY --from=builder /go/src/github.com/stefanprodan/k8s-podinfo/podcli /usr/local/bin/podcli
COPY ./ui ./ui
RUN chown -R app:app ./
USER app

Gopkg.lock (generated), 281 changed lines

@@ -3,96 +3,325 @@
[[projects]]
branch = "master"
digest = "1:d6afaeed1502aa28e80a4ed0981d570ad91b2579193404256ce672ed0a609e0d"
name = "github.com/beorn7/perks"
packages = ["quantile"]
revision = "4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9"
pruneopts = "UT"
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
[[projects]]
digest = "1:b95738a1e6ace058b5b8544303c0871fc01d224ef0d672f778f696265d0f2917"
name = "github.com/chzyer/readline"
packages = ["."]
pruneopts = "UT"
revision = "62c6fe6193755f722b8b8788aa7357be55a50ff1"
version = "v1.4"
[[projects]]
digest = "1:76dc72490af7174349349838f2fe118996381b31ea83243812a97e5a0fd5ed55"
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
pruneopts = "UT"
revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.2.0"
[[projects]]
digest = "1:865079840386857c809b72ce300be7580cb50d3d3129ce11bf9aa6ca2bc1934a"
name = "github.com/fatih/color"
packages = ["."]
pruneopts = "UT"
revision = "5b77d2a35fb0ede96d138fc9a99f5c9b6aef11b4"
version = "v1.7.0"
[[projects]]
digest = "1:abeb38ade3f32a92943e5be54f55ed6d6e3b6602761d74b4aab4c9dd45c18abd"
name = "github.com/fsnotify/fsnotify"
packages = ["."]
pruneopts = "UT"
revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
version = "v1.4.7"
[[projects]]
digest = "1:97df918963298c287643883209a2c3f642e6593379f97ab400c2a2e219ab647d"
name = "github.com/golang/protobuf"
packages = ["proto"]
revision = "925541529c1fa6821df4e44ce2723319eb2be768"
version = "v1.0.0"
pruneopts = "UT"
revision = "aa810b61a9c79d51363740d207bb46cf8e620ed5"
version = "v1.2.0"
[[projects]]
digest = "1:c79fb010be38a59d657c48c6ba1d003a8aa651fa56b579d959d74573b7dff8e1"
name = "github.com/gorilla/context"
packages = ["."]
pruneopts = "UT"
revision = "08b5f424b9271eedf6f9f0ce86cb9396ed337a42"
version = "v1.1.1"
[[projects]]
digest = "1:e73f5b0152105f18bc131fba127d9949305c8693f8a762588a82a48f61756f5f"
name = "github.com/gorilla/mux"
packages = ["."]
pruneopts = "UT"
revision = "e3702bed27f0d39777b0b37b664b6280e8ef8fbf"
version = "v1.6.2"
[[projects]]
digest = "1:7b5c6e2eeaa9ae5907c391a91c132abfd5c9e8a784a341b5625e750c67e6825d"
name = "github.com/gorilla/websocket"
packages = ["."]
pruneopts = "UT"
revision = "66b9c49e59c6c48f0ffce28c2d8b8a5678502c6d"
version = "v1.4.0"
[[projects]]
branch = "master"
digest = "1:a361611b8c8c75a1091f00027767f7779b29cb37c456a71b8f2604c88057ab40"
name = "github.com/hashicorp/hcl"
packages = [
".",
"hcl/ast",
"hcl/parser",
"hcl/printer",
"hcl/scanner",
"hcl/strconv",
"hcl/token",
"json/parser",
"json/scanner",
"json/token",
]
pruneopts = "UT"
revision = "ef8a98b0bbce4a65b5aa4c368430a80ddc533168"
[[projects]]
digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be"
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
pruneopts = "UT"
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
digest = "1:c568d7727aa262c32bdf8a3f7db83614f7af0ed661474b24588de635c20024c7"
name = "github.com/magiconair/properties"
packages = ["."]
pruneopts = "UT"
revision = "c2353362d570a7bfa228149c62842019201cfb71"
version = "v1.8.0"
[[projects]]
digest = "1:c658e84ad3916da105a761660dcaeb01e63416c8ec7bc62256a9b411a05fcd67"
name = "github.com/mattn/go-colorable"
packages = ["."]
pruneopts = "UT"
revision = "167de6bfdfba052fa6b2d3664c8f5272e23c9072"
version = "v0.0.9"
[[projects]]
digest = "1:0981502f9816113c9c8c4ac301583841855c8cf4da8c72f696b3ebedf6d0e4e5"
name = "github.com/mattn/go-isatty"
packages = ["."]
pruneopts = "UT"
revision = "6ca4dbf54d38eea1a992b3c722a76a5d1c4cb25c"
version = "v0.0.4"
[[projects]]
digest = "1:ff5ebae34cfbf047d505ee150de27e60570e8c394b3b8fdbb720ff6ac71985fc"
name = "github.com/matttproud/golang_protobuf_extensions"
packages = ["pbutil"]
revision = "3247c84500bff8d9fb6d579d800f20b3e091582c"
version = "v1.0.0"
pruneopts = "UT"
revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
version = "v1.0.1"
[[projects]]
name = "github.com/pkg/errors"
branch = "master"
digest = "1:5ab79470a1d0fb19b041a624415612f8236b3c06070161a910562f2b2d064355"
name = "github.com/mitchellh/mapstructure"
packages = ["."]
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
pruneopts = "UT"
revision = "f15292f7a699fcc1a38a80977f80a046874ba8ac"
[[projects]]
digest = "1:95741de3af260a92cc5c7f3f3061e85273f5a81b5db20d4bd68da74bd521675e"
name = "github.com/pelletier/go-toml"
packages = ["."]
pruneopts = "UT"
revision = "c01d1270ff3e442a8a57cddc1c92dc1138598194"
version = "v1.2.0"
[[projects]]
digest = "1:d14a5f4bfecf017cb780bdde1b6483e5deb87e12c332544d2c430eda58734bcb"
name = "github.com/prometheus/client_golang"
packages = [
"prometheus",
"prometheus/promhttp"
"prometheus/promhttp",
]
pruneopts = "UT"
revision = "c5b7fccd204277076155f10851dad72b76a49317"
version = "v0.8.0"
[[projects]]
branch = "master"
digest = "1:2d5cd61daa5565187e1d96bae64dbbc6080dacf741448e9629c64fd93203b0d4"
name = "github.com/prometheus/client_model"
packages = ["go"]
revision = "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c"
pruneopts = "UT"
revision = "5c3871d89910bfb32f5fcab2aa4b9ec68e65a99f"
[[projects]]
branch = "master"
digest = "1:63b68062b8968092eb86bedc4e68894bd096ea6b24920faca8b9dcf451f54bb5"
name = "github.com/prometheus/common"
packages = [
"expfmt",
"internal/bitbucket.org/ww/goautoneg",
"model"
"model",
]
revision = "e4aa40a9169a88835b849a6efb71e05dc04b88f0"
pruneopts = "UT"
revision = "c7de2306084e37d54b8be01f3541a8464345e9a5"
[[projects]]
branch = "master"
digest = "1:8c49953a1414305f2ff5465147ee576dd705487c35b15918fcd4efdc0cb7a290"
name = "github.com/prometheus/procfs"
packages = [
".",
"internal/util",
"nfs",
"xfs"
"xfs",
]
revision = "54d17b57dd7d4a3aa092476596b3f8a933bde349"
pruneopts = "UT"
revision = "05ee40e3a273f7245e8777337fc7b46e533a9a92"
[[projects]]
name = "github.com/rs/zerolog"
digest = "1:bd1ae00087d17c5a748660b8e89e1043e1e5479d0fea743352cda2f8dd8c4f84"
name = "github.com/spf13/afero"
packages = [
".",
"internal/cbor",
"internal/json",
"log"
"mem",
]
revision = "77db4b4f350e31be66a57c332acb7721cf9ff9bb"
version = "v1.8.0"
pruneopts = "UT"
revision = "787d034dfe70e44075ccc060d346146ef53270ad"
version = "v1.1.1"
[[projects]]
digest = "1:516e71bed754268937f57d4ecb190e01958452336fa73dbac880894164e91c1f"
name = "github.com/spf13/cast"
packages = ["."]
pruneopts = "UT"
revision = "8965335b8c7107321228e3e3702cab9832751bac"
version = "v1.2.0"
[[projects]]
digest = "1:645cabccbb4fa8aab25a956cbcbdf6a6845ca736b2c64e197ca7cbb9d210b939"
name = "github.com/spf13/cobra"
packages = ["."]
pruneopts = "UT"
revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
version = "v0.0.3"
[[projects]]
branch = "master"
name = "golang.org/x/sys"
packages = ["unix"]
revision = "bd9dbc187b6e1dacfdd2722a87e83093c2d7bd6e"
digest = "1:8a020f916b23ff574845789daee6818daf8d25a4852419aae3f0b12378ba432a"
name = "github.com/spf13/jwalterweatherman"
packages = ["."]
pruneopts = "UT"
revision = "14d3d4c518341bea657dd8a226f5121c0ff8c9f2"
[[projects]]
digest = "1:dab83a1bbc7ad3d7a6ba1a1cc1760f25ac38cdf7d96a5cdd55cd915a4f5ceaf9"
name = "github.com/spf13/pflag"
packages = ["."]
pruneopts = "UT"
revision = "9a97c102cda95a86cec2345a6f09f55a939babf5"
version = "v1.0.2"
[[projects]]
digest = "1:4fc8a61287ccfb4286e1ca5ad2ce3b0b301d746053bf44ac38cf34e40ae10372"
name = "github.com/spf13/viper"
packages = ["."]
pruneopts = "UT"
revision = "907c19d40d9a6c9bb55f040ff4ae45271a4754b9"
version = "v1.1.0"
[[projects]]
digest = "1:3c1a69cdae3501bf75e76d0d86dc6f2b0a7421bc205c0cb7b96b19eed464a34d"
name = "go.uber.org/atomic"
packages = ["."]
pruneopts = "UT"
revision = "1ea20fb1cbb1cc08cbd0d913a96dead89aa18289"
version = "v1.3.2"
[[projects]]
digest = "1:60bf2a5e347af463c42ed31a493d817f8a72f102543060ed992754e689805d1a"
name = "go.uber.org/multierr"
packages = ["."]
pruneopts = "UT"
revision = "3c4937480c32f4c13a875a1829af76c98ca3d40a"
version = "v1.1.0"
[[projects]]
digest = "1:c52caf7bd44f92e54627a31b85baf06a68333a196b3d8d241480a774733dcf8b"
name = "go.uber.org/zap"
packages = [
".",
"buffer",
"internal/bufferpool",
"internal/color",
"internal/exit",
"zapcore",
]
pruneopts = "UT"
revision = "ff33455a0e382e8a81d14dd7c922020b6b5e7982"
version = "v1.9.1"
[[projects]]
branch = "master"
digest = "1:e3e7f51633f98fce9404940f74dfd65cf306ee173ddf5a8b247c9c830e42b38a"
name = "golang.org/x/sys"
packages = ["unix"]
pruneopts = "UT"
revision = "1a700e749ce29638d0bbcb531cce1094ea096bd3"
[[projects]]
digest = "1:8029e9743749d4be5bc9f7d42ea1659471767860f0cdc34d37c3111bd308a295"
name = "golang.org/x/text"
packages = [
"internal/gen",
"internal/triegen",
"internal/ucd",
"transform",
"unicode/cldr",
"unicode/norm",
]
pruneopts = "UT"
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
digest = "1:342378ac4dcb378a5448dd723f0784ae519383532f5e70ade24132c4c8693202"
name = "gopkg.in/yaml.v2"
packages = ["."]
revision = "7f97868eec74b32b0982dd158a51a446d1da7eb5"
version = "v2.1.1"
pruneopts = "UT"
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
version = "v2.2.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "1b88c0a618973c53f6b715dd51ad99f2952baf09c4d752bc8a985d26439a739c"
input-imports = [
"github.com/chzyer/readline",
"github.com/dgrijalva/jwt-go",
"github.com/fatih/color",
"github.com/fsnotify/fsnotify",
"github.com/gorilla/mux",
"github.com/gorilla/websocket",
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/spf13/cobra",
"github.com/spf13/pflag",
"github.com/spf13/viper",
"go.uber.org/zap",
"go.uber.org/zap/zapcore",
]
solver-name = "gps-cdcl"
solver-version = 1


@@ -1,24 +1,39 @@
[[constraint]]
name = "github.com/pkg/errors"
version = "0.8.0"
[[constraint]]
name = "github.com/prometheus/client_golang"
version = "0.8.0"
[[constraint]]
name = "github.com/rs/zerolog"
version = "1.8.0"
name = "github.com/gorilla/mux"
version = "v1.6.2"
[[constraint]]
name = "gopkg.in/yaml.v2"
version = "2.1.1"
name = "github.com/gorilla/websocket"
version = "v1.4.0"
[[constraint]]
name = "go.uber.org/zap"
version = "v1.9.1"
[[override]]
name = "github.com/fsnotify/fsnotify"
version = "1.2.9"
[[constraint]]
name = "github.com/spf13/pflag"
version = "v1.0.2"
[[constraint]]
name = "github.com/spf13/viper"
version = "v1.1.0"
[[constraint]]
name = "github.com/spf13/cobra"
version = "v0.0.3"
[[constraint]]
name = "github.com/dgrijalva/jwt-go"
version = "v3.2.0"
[prune]
go-tests = true
unused-packages = true


@@ -20,7 +20,8 @@ build:
@rm -rf build && mkdir build
@echo Building: linux/$(LINUX_ARCH) $(VERSION) ;\
for arch in $(LINUX_ARCH); do \
mkdir -p build/linux/$$arch && CGO_ENABLED=0 GOOS=linux GOARCH=$$arch go build -ldflags="-s -w -X $(GITREPO)/pkg/version.GITCOMMIT=$(GITCOMMIT)" -o build/linux/$$arch/$(NAME) ./cmd/$(NAME) ;\
mkdir -p build/linux/$$arch && CGO_ENABLED=0 GOOS=linux GOARCH=$$arch go build -ldflags="-s -w -X $(GITREPO)/pkg/version.REVISION=$(GITCOMMIT)" -o build/linux/$$arch/$(NAME) ./cmd/$(NAME) ;\
cp -r ui/ build/linux/$$arch/ui;\
done
.PHONY: tar
@@ -45,6 +46,7 @@ docker-build: tar
@for arch in $(LINUX_ARCH); do \
mkdir -p build/docker/linux/$$arch ;\
tar -xzf release/$(NAME)_$(VERSION)_linux_$$arch.tgz -C build/docker/linux/$$arch ;\
cp -r ui/ build/docker/linux/$$arch/ui;\
if [ $$arch == amd64 ]; then \
cp Dockerfile build/docker/linux/$$arch ;\
cp Dockerfile build/docker/linux/$$arch/Dockerfile.in ;\
@@ -71,9 +73,9 @@ docker-build: tar
.PHONY: docker-push
docker-push:
@echo Pushing: $(VERSION) to $(DOCKER_IMAGE_NAME)
for arch in $(LINUX_ARCH); do \
docker push $(DOCKER_IMAGE_NAME):$(NAME)-$$arch ;\
done
for arch in $(LINUX_ARCH); do \
docker push $(DOCKER_IMAGE_NAME):$(NAME)-$$arch ;\
done
manifest-tool push from-args --platforms $(PLATFORMS) --template $(DOCKER_IMAGE_NAME):podinfo-ARCH --target $(DOCKER_IMAGE_NAME):$(VERSION)
manifest-tool push from-args --platforms $(PLATFORMS) --template $(DOCKER_IMAGE_NAME):podinfo-ARCH --target $(DOCKER_IMAGE_NAME):latest
@@ -93,7 +95,7 @@ gcr-build:
.PHONY: test
test:
cd pkg/server ; go test -v -race ./...
go test -v -race ./...
.PHONY: dep
dep:
@@ -103,9 +105,10 @@ dep:
.PHONY: charts
charts:
cd charts/ && helm package podinfo/
cd charts/ && helm package podinfo-istio/
cd charts/ && helm package loadtest/
cd charts/ && helm package ambassador/
cd charts/ && helm package grafana/
cd charts/ && helm package ngrok/
cd charts/ && helm package weave-flux/
mv charts/*.tgz docs/
helm repo index docs --url https://stefanprodan.github.io/k8s-podinfo --merge ./docs/index.yaml


@@ -9,31 +9,35 @@ Specifications:
* Multi-platform Docker image (amd64/arm/arm64/ppc64le/s390x)
* Health checks (readiness and liveness)
* Graceful shutdown on interrupt signals
* Watches for secrets and configmaps changes and updates the in-memory cache
* Prometheus instrumentation (RED metrics)
* Dependency management with golang/dep
* Structured logging with zerolog
* Error handling with pkg/errors
* File watcher for secrets and configmaps
* Instrumented with Prometheus
* Tracing with Istio and Jaeger
* Structured logging with zap
* 12-factor app with viper
* Fault injection (random errors and latency)
* Helm chart
Web API:
* `GET /` prints runtime information, environment variables, labels and annotations
* `GET /` prints runtime information
* `GET /version` prints podinfo version and git commit hash
* `GET /metrics` http requests duration and Go runtime metrics
* `GET /metrics` return HTTP requests duration and Go runtime metrics
* `GET /healthz` used by Kubernetes liveness probe
* `GET /readyz` used by Kubernetes readiness probe
* `POST /readyz/enable` signals the Kubernetes LB that this instance is ready to receive traffic
* `POST /readyz/disable` signals the Kubernetes LB to stop sending requests to this instance
* `GET /error` returns code 500 and logs the error
* `GET /status/{code}` returns the status code
* `GET /panic` crashes the process with exit code 255
* `POST /echo` echos the posted content, logs the SHA1 hash of the content
* `GET /echoheaders` prints the request HTTP headers
* `POST /job` long running job, json body: `{"wait":2}`
* `GET /configs` prints the configmaps and/or secrets mounted in the `config` volume
* `POST /echo` forwards the call to the backend service and echos the posted content
* `GET /env` returns the environment variables as a JSON array
* `GET /headers` returns a JSON with the request HTTP headers
* `GET /delay/{seconds}` waits for the specified period
* `POST /token` issues a JWT token valid for one minute `JWT=$(curl -sd 'anon' podinfo:9898/token | jq -r .token)`
* `GET /token/validate` validates the JWT token `curl -H "Authorization: Bearer $JWT" podinfo:9898/token/validate`
* `GET /configs` returns a JSON with configmaps and/or secrets mounted in the `config` volume
* `POST /write` writes the posted content to disk at /data/hash and returns the SHA1 hash of the content
* `POST /read` receives a SHA1 hash and returns the content of the file /data/hash if exists
* `POST /backend` forwards the call to the backend service on `http://backend-podinfo:9898/echo`
* `GET /read/{hash}` returns the content of the file /data/hash if exists
* `GET /ws/echo` echos content via websockets `podcli ws ws://localhost:9898/ws/echo`
### Guides

File diff suppressed because it is too large


@@ -3,7 +3,7 @@ kind: Deployment
metadata:
name: {{ template "grafana.fullname" . }}
labels:
app: {{ template "grafana.name" . }}
app: {{ template "grafana.fullname" . }}
chart: {{ template "grafana.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
@@ -11,12 +11,12 @@ spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "grafana.name" . }}
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "grafana.name" . }}
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}
annotations:
prometheus.io/scrape: 'false'


@@ -15,5 +15,5 @@ spec:
protocol: TCP
name: http
selector:
app: {{ template "grafana.name" . }}
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}


@@ -0,0 +1,5 @@
apiVersion: v1
appVersion: "1.0"
description: Hey load test Helm chart for Kubernetes
name: loadtest
version: 0.1.0


@@ -0,0 +1 @@
{{ template "loadtest.fullname" . }} has been deployed successfully!


@@ -2,7 +2,7 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "weave-cloud.name" -}}
{{- define "loadtest.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
@@ -11,7 +11,7 @@ Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "weave-cloud.fullname" -}}
{{- define "loadtest.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
@@ -27,6 +27,6 @@ If release name contains chart name it will be used as a full name.
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "weave-cloud.chart" -}}
{{- define "loadtest.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@@ -0,0 +1,31 @@
{{- $fullname := include "loadtest.fullname" . -}}
{{- $name := include "loadtest.name" . -}}
{{- $chart := include "loadtest.chart" . -}}
{{- $image := .Values.image -}}
{{- range $test := .Values.tests }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ $fullname }}-{{ .name }}
labels:
app: {{ $name }}
chart: {{ $chart }}
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 3
jobTemplate:
spec:
template:
spec:
containers:
- name: loadtest
image: {{ $image }}
args:
- /bin/sh
- -c
- "hey -z 58s {{ $test.cmd }} {{ $test.url }}"
restartPolicy: OnFailure
{{- end -}}


@@ -0,0 +1,11 @@
# Default values for loadtest.
image: stefanprodan/loadtest:latest
tests:
- name: "blue"
url: "https://canary.istio.weavedx.com/api/echo"
cmd: "-h2 -m POST -d '{test: 1}' -H 'X-API-Version: 0.6.0' -c 50 -q 5"
- name: "green"
url: "https://canary.istio.weavedx.com/api/echo"
cmd: "-h2 -m POST -d '{test: 2}' -H 'X-API-Version: 0.6.1' -c 10 -q 5"


@@ -0,0 +1,12 @@
apiVersion: v1
version: 1.2.0
appVersion: 1.2.0
engine: gotpl
name: podinfo-istio
description: Podinfo Helm chart for Istio
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
sources:
- https://github.com/stefanprodan/k8s-podinfo


@@ -0,0 +1,80 @@
# Podinfo Istio
Podinfo is a tiny web application made with Go
that showcases best practices of running microservices in Kubernetes.
## Installing the Chart
Create an Istio enabled namespace:
```console
kubectl create namespace demo
kubectl label namespace demo istio-injection=enabled
```
Create an Istio Gateway in the `istio-system` namespace named `public-gateway`:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
```
Create the `frontend` release by specifying the external domain name:
```console
helm upgrade frontend --install ./charts/podinfo-istio \
--namespace=demo \
--set host=podinfo.example.com \
--set gateway.name=public-gateway \
--set gateway.create=false \
-f ./charts/podinfo-istio/frontend.yaml
```
Create the `backend` release:
```console
helm upgrade backend --install ./charts/podinfo-istio \
--namespace=demo \
-f ./charts/podinfo-istio/backend.yaml
```
Create the `store` release:
```console
helm upgrade store --install ./charts/podinfo-istio \
--namespace=demo \
-f ./charts/podinfo-istio/store.yaml
```
Start load test:
```console
helm upgrade --install loadtest ./charts/loadtest \
--namespace=loadtesting
```

charts/podinfo-istio/apply.sh (new executable file), 34 lines

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
#Usage: fswatch -o ./podinfo-istio/ | xargs -n1 ./podinfo-istio/apply.sh
set -e
MARK='\033[0;32m'
NC='\033[0m'
log (){
echo -e "$(date +%Y-%m-%dT%H:%M:%S%z) ${MARK}${1}${NC}"
}
log "installing frontend"
helm upgrade frontend --install ./podinfo-istio \
--namespace=demo \
--set host=canary.istio.weavedx.com \
--set gateway.name=public-gateway \
--set gateway.create=false \
-f ./podinfo-istio/frontend.yaml
log "installing backend"
helm upgrade backend --install ./podinfo-istio \
--namespace=demo \
-f ./podinfo-istio/backend.yaml
log "installing store"
helm upgrade store --install ./podinfo-istio \
--namespace=demo \
-f ./podinfo-istio/store.yaml
log "finished installing frontend, backend and store"


@@ -0,0 +1,21 @@
# Default values for backend demo.
# expose the blue/green deployments inside the cluster
host: backend
# stable release
blue:
replicas: 2
tag: "1.1.0"
backend: http://store:9898/api/echo
# canary release
green:
replicas: 2
tag: "1.1.0"
routing:
# target green callers
- match:
- sourceLabels:
color: green
backend: http://store:9898/api/echo


@@ -0,0 +1,39 @@
# Default values for frontend demo.
# external domain
host:
exposeHost: true
# no more than one Gateway can be created on a cluster
# if TLS is enabled the istio-ingressgateway-certs secret must exist in istio-system ns
# if you have a Gateway running you can set the name to your own gateway and turn off create
gateway:
name: public-gateway
create: false
tls: true
httpsRedirect: true
# stable release
blue:
replicas: 2
tag: "1.1.0"
message: "Greetings from the blue frontend"
backend: http://backend:9898/api/echo
# canary release
green:
replicas: 2
tag: "1.1.0"
routing:
# target Safari
- match:
- headers:
user-agent:
regex: "^(?!.*Chrome).*Safari.*"
# target API clients by version
- match:
- headers:
x-api-version:
regex: "^(v{0,1})0\\.6\\.([1-9]).*"
message: "Greetings from the green frontend"
backend: http://backend:9898/api/echo
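The `x-api-version` rule can be sanity-checked locally. The sketch below runs the same extended regex through `grep -E` as a stand-in for Istio's regex matcher (the Safari rule uses a negative lookahead, which `grep -E` cannot express):

```shell
# Which x-api-version values would be routed to the green subset?
# Same pattern as in the values file above.
re='^(v{0,1})0\.6\.([1-9]).*'
for v in v0.6.1 0.6.9 v0.6.0 1.5.2; do
  if printf '%s\n' "$v" | grep -Eq "$re"; then
    echo "$v -> green"
  else
    echo "$v -> blue"
  fi
done
# v0.6.1 -> green
# 0.6.9 -> green
# v0.6.0 -> blue
# 1.5.2 -> blue
```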

File diff suppressed because it is too large.


@@ -0,0 +1,19 @@
# Default values for store demo.
# expose the store deployment inside the cluster
host: store
# load balance 80/20 between blue and green
blue:
replicas: 2
tag: "1.1.0"
backend: https://httpbin.org/anything
weight: 80
green:
replicas: 2
tag: "1.1.0"
backend: https://httpbin.org/anything
externalServices:
- httpbin.org


@@ -0,0 +1 @@
{{ template "podinfo-istio.fullname" . }} has been deployed successfully!


@@ -2,42 +2,35 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "weave-flux.name" -}}
{{- define "podinfo-istio.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
The release name is used as a full name.
*/}}
{{- define "weave-flux.fullname" -}}
{{- define "podinfo-istio.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- define "podinfo-istio.blue" -}}
{{- printf "%s-%s" .Release.Name "blue" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "podinfo-istio.green" -}}
{{- printf "%s-%s" .Release.Name "green" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "weave-flux.chart" -}}
{{- define "podinfo-istio.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "weave-flux.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "weave-flux.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
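The `trunc 63 | trimSuffix "-"` pattern used throughout these helpers caps names at Kubernetes' 63-character DNS label limit and drops a dash the cut may leave dangling. A rough shell equivalent, for illustration only:

```shell
# Truncate to 63 characters, then trim a single trailing '-',
# mirroring `trunc 63 | trimSuffix "-"` in the helpers above.
name='my-release-podinfo-istio-with-an-unreasonably-long-name-suffix-green'
short=$(printf '%s' "$name" | cut -c1-63)
short=${short%-}
echo "$short"
# my-release-podinfo-istio-with-an-unreasonably-long-name-suffix
```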


@@ -0,0 +1,78 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "podinfo-istio.blue" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
color: blue
version: {{ .Values.blue.tag }}
spec:
replicas: {{ .Values.blue.replicas }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ template "podinfo-istio.fullname" . }}
color: blue
template:
metadata:
labels:
app: {{ template "podinfo-istio.fullname" . }}
color: blue
version: {{ .Values.blue.tag }}
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: "{{ .Values.blue.repository }}:{{ .Values.blue.tag }}"
imagePullPolicy: {{ .Values.imagePullPolicy }}
command:
- ./podinfo
- --port={{ .Values.containerPort }}
- --level={{ .Values.logLevel }}
- --random-delay={{ .Values.blue.faults.delay }}
- --random-error={{ .Values.blue.faults.error }}
env:
- name: PODINFO_UI_COLOR
value: blue
{{- if .Values.blue.backend }}
- name: PODINFO_BACKEND_URL
value: {{ .Values.blue.backend }}
{{- end }}
{{- if .Values.blue.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.blue.message }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.containerPort }}
protocol: TCP
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.containerPort }}/healthz
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.containerPort }}/readyz
volumeMounts:
- name: data
mountPath: /data
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: data
emptyDir: {}


@@ -0,0 +1,20 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: {{ template "podinfo-istio.fullname" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
host: {{ template "podinfo-istio.fullname" . }}
subsets:
- name: blue
labels:
color: blue
{{- if gt .Values.green.replicas 0.0 }}
- name: green
labels:
color: green
{{- end }}


@@ -0,0 +1,22 @@
{{- if .Values.externalServices -}}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: {{ template "podinfo-istio.fullname" . }}-external-svcs
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
hosts:
{{- range .Values.externalServices }}
- {{ . }}
{{- end }}
location: MESH_EXTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
resolution: DNS
{{- end }}


@@ -0,0 +1,31 @@
{{- if .Values.gateway.create -}}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: {{ .Values.gateway.name }}
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: {{ .Values.gateway.httpsRedirect }}
{{- if .Values.gateway.tls }}
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
{{- end }}
{{- end }}


@@ -0,0 +1,80 @@
{{- if gt .Values.green.replicas 0.0 -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "podinfo-istio.green" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
color: green
version: {{ .Values.green.tag }}
spec:
replicas: {{ .Values.green.replicas }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ template "podinfo-istio.fullname" . }}
color: green
template:
metadata:
labels:
app: {{ template "podinfo-istio.fullname" . }}
color: green
version: {{ .Values.green.tag }}
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: "{{ .Values.green.repository }}:{{ .Values.green.tag }}"
imagePullPolicy: {{ .Values.imagePullPolicy }}
command:
- ./podinfo
- --port={{ .Values.containerPort }}
- --level={{ .Values.logLevel }}
- --random-delay={{ .Values.green.faults.delay }}
- --random-error={{ .Values.green.faults.error }}
env:
- name: PODINFO_UI_COLOR
value: green
{{- if .Values.green.backend }}
- name: PODINFO_BACKEND_URL
value: {{ .Values.green.backend }}
{{- end }}
{{- if .Values.green.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.green.message }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.containerPort }}
protocol: TCP
livenessProbe:
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.containerPort }}/healthz
readinessProbe:
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.containerPort }}/readyz
volumeMounts:
- name: data
mountPath: /data
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: data
emptyDir: {}
{{- end }}


@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "podinfo-istio.fullname" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: ClusterIP
ports:
- port: {{ .Values.containerPort }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ template "podinfo-istio.fullname" . }}


@@ -0,0 +1,43 @@
{{- $host := .Release.Name -}}
{{- $timeout := .Values.timeout -}}
{{- $greenWeight := (sub 100 (.Values.blue.weight|int)) | int -}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ template "podinfo-istio.fullname" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
hosts:
- {{ .Values.host }}
{{- if .Values.exposeHost }}
gateways:
- {{ .Values.gateway.name }}.istio-system.svc.cluster.local
{{- end }}
http:
{{- if gt .Values.green.replicas 0.0 }}
{{- range .Values.green.routing }}
- match:
{{ toYaml .match | indent 6 }}
route:
- destination:
host: {{ $host }}
subset: green
timeout: {{ $timeout }}
{{- end }}
{{- end }}
- route:
- destination:
host: {{ template "podinfo-istio.fullname" . }}
subset: blue
weight: {{ .Values.blue.weight }}
{{- if gt .Values.green.replicas 0.0 }}
- destination:
host: {{ template "podinfo-istio.fullname" . }}
subset: green
weight: {{ $greenWeight }}
{{- end }}
timeout: {{ $timeout }}
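The `$greenWeight` computed at the top of this template is just the remainder of the blue weight, so the two routes always sum to 100. In shell terms:

```shell
# Equivalent of {{ sub 100 (.Values.blue.weight|int) }} in the template above.
blue_weight=80                        # .Values.blue.weight (store.yaml default)
green_weight=$((100 - blue_weight))
echo "blue=${blue_weight} green=${green_weight}"
# blue=80 green=20
```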


@@ -0,0 +1,60 @@
# Default values for podinfo-istio.
# host can be an external domain or a local one
host: podinfo
# if the host is an external domain it must be exposed via the Gateway
exposeHost: false
timeout: 30s
# creates public-gateway.istio-system.svc.cluster.local
# no more than one Gateway can be created on a cluster
# if TLS is enabled the istio-ingressgateway-certs secret must exist in istio-system ns
# if you have a Gateway running you can set the name to your own gateway and turn off create
gateway:
name: public-gateway
create: false
tls: false
httpsRedirect: false
# authorise external https services
#externalServices:
# - api.github.com
# - apis.google.com
# - googleapis.com
# stable release
# by default all traffic goes to blue
blue:
replicas: 2
repository: quay.io/stefanprodan/podinfo
tag: "1.2.0"
# green must have at least one replica to set weight under 100
weight: 100
message:
backend:
faults:
delay: false
error: false
# canary release
# disabled with 0 replicas
green:
replicas: 0
repository: quay.io/stefanprodan/podinfo
tag: "1.2.0"
message:
backend:
routing:
faults:
delay: false
error: false
# blue/green common settings
logLevel: info
containerPort: 9898
imagePullPolicy: IfNotPresent
resources:
limits:
requests:
cpu: 1m
memory: 16Mi


@@ -1,12 +1,12 @@
apiVersion: v1
appVersion: "0.5.0"
description: Podinfo Helm chart for Kubernetes
version: 1.2.0
appVersion: 1.2.0
name: podinfo
version: 0.2.0
engine: gotpl
description: Podinfo Helm chart for Kubernetes
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
sources:
- https://github.com/stefanprodan/k8s-podinfo
maintainers:
- name: stefanprodan
email: stefanprodan@users.noreply.github.com
engine: gotpl


@@ -8,7 +8,8 @@ that showcases best practices of running microservices in Kubernetes.
To install the chart with the release name `my-release`:
```console
$ helm install stable/podinfo --name my-release
$ helm repo add sp https://stefanprodan.github.io/k8s-podinfo
$ helm upgrade my-release --install sp/podinfo
```
The command deploys podinfo on the Kubernetes cluster in the default namespace.
@@ -31,23 +32,27 @@ The following tables lists the configurable parameters of the podinfo chart and
Parameter | Description | Default
--- | --- | ---
`affinity` | node/pod affinities | None
`hpa.enabled` | Enables HPA | `false`
`hpa.cpu` | Target CPU usage per pod | None
`hpa.memory` | Target memory usage per pod | None
`hpa.requests` | Target requests per second per pod | None
`hpa.maxReplicas` | Maximum pod replicas | `10`
`ingress.hosts` | Ingress accepted hostnames | None
`ingress.tls` | Ingress TLS configuration | None
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`image.repository` | Image repository | `stefanprodan/podinfo`
`image.tag` | Image tag | `0.0.1`
`ingress.enabled` | Enables Ingress | `false`
`ingress.annotations` | Ingress annotations | None
`ingress.hosts` | Ingress accepted hostnames | None
`ingress.tls` | Ingress TLS configuration | None
`color` | UI color | blue
`backend` | echo backend URL | None
`faults.delay` | random HTTP response delays between 0 and 5 seconds | `false`
`faults.error` | 1/3 chances of a random HTTP response error | `false`
`hpa.enabled` | enables HPA | `false`
`hpa.cpu` | target CPU usage per pod | None
`hpa.memory` | target memory usage per pod | None
`hpa.requests` | target requests per second per pod | None
`hpa.maxReplicas` | maximum pod replicas | `10`
`ingress.hosts` | ingress accepted hostnames | None
`ingress.tls` | ingress TLS configuration | None
`image.pullPolicy` | image pull policy | `IfNotPresent`
`image.repository` | image repository | `stefanprodan/podinfo`
`image.tag` | image tag | `0.0.1`
`ingress.enabled` | enables ingress | `false`
`ingress.annotations` | ingress annotations | None
`ingress.hosts` | ingress accepted hostnames | None
`ingress.tls` | ingress TLS configuration | None
`message` | UI greetings message | None
`nodeSelector` | node labels for pod assignment | `{}`
`podAnnotations` | annotations to add to each pod | `{}`
`replicaCount` | desired number of pods | `1`
`replicaCount` | desired number of pods | `2`
`resources.requests/cpu` | pod CPU request | `1m`
`resources.requests/memory` | pod memory request | `16Mi`
`resources.limits/cpu` | pod CPU limit | None
@@ -56,7 +61,7 @@ Parameter | Description | Default
`service.internalPort` | internal port for the service | `9898`
`service.nodePort` | node port for the service | `31198`
`service.type` | type of service | `ClusterIP`
`tolerations` | List of node taints to tolerate | `[]`
`tolerations` | list of node taints to tolerate | `[]`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,


@@ -7,43 +7,70 @@ metadata:
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
color: {{ .Values.color }}
version: {{ .Values.image.tag }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ template "podinfo.name" . }}
color: {{ .Values.color }}
version: {{ .Values.image.tag }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "podinfo.name" . }}
color: {{ .Values.color }}
version: {{ .Values.image.tag }}
release: {{ .Release.Name }}
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- ./podinfo
- -port={{ .Values.service.containerPort }}
- -logLevel={{ .Values.logLevel }}
- --port={{ .Values.service.containerPort }}
- --level={{ .Values.logLevel }}
- --random-delay={{ .Values.faults.delay }}
- --random-error={{ .Values.faults.error }}
env:
- name: backend_url
- name: PODINFO_UI_COLOR
value: {{ .Values.color }}
{{- if .Values.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.message }}
{{- end }}
{{- if .Values.backend }}
- name: PODINFO_BACKEND_URL
value: {{ .Values.backend }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.service.containerPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: http
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.containerPort }}/healthz
readinessProbe:
httpGet:
path: /readyz
port: http
exec:
command:
- podcli
- check
- http
- localhost:{{ .Values.service.containerPort }}/readyz
volumeMounts:
- name: data
mountPath: /data


@@ -1,11 +1,18 @@
# Default values for podinfo.
replicaCount: 1
backend: http://backend-podinfo:9898/echo
replicaCount: 2
logLevel: info
color: blue
backend: #http://backend-podinfo:9898/echo
message: #UI greetings
faults:
delay: false
error: false
image:
repository: stefanprodan/podinfo
tag: 0.4.0
repository: quay.io/stefanprodan/podinfo
tag: 1.2.0
pullPolicy: IfNotPresent
service:
@@ -14,7 +21,7 @@ service:
containerPort: 9898
nodePort: 31198
# Heapster or metrics-server add-on required
# metrics-server add-on required
hpa:
enabled: false
maxReplicas: 10
@@ -50,4 +57,3 @@ tolerations: []
affinity: {}
logLevel: debug


@@ -1,13 +0,0 @@
apiVersion: v1
appVersion: "1.0"
description: Weave Cloud is an add-on to Kubernetes which provides Continuous Delivery, along with hosted Prometheus Monitoring and a visual dashboard for exploring & debugging microservices
name: weave-cloud
version: 0.2.0
home: https://weave.works
maintainers:
- name: Ilya Dmitrichenko
email: ilya@weave.works
- name: Stefan Prodan
email: stefan@weave.works
engine: gotpl
icon: https://www.weave.works/assets/images/bltd108e8f850ae9e7c/weave-logo-512.png


@@ -1,53 +0,0 @@
# Weave Cloud Agents
> ***NOTE: This chart is for Kubernetes version 1.6 and later.***
Weave Cloud is an add-on to Kubernetes which provides Continuous Delivery, along with hosted Prometheus Monitoring and a visual dashboard for exploring & debugging microservices.
This package contains the agents which connect your cluster to Weave Cloud.
_To learn more and sign up please visit [Weaveworks website](https://weave.works)._
You will need a service token which you can get from [cloud.weave.works](https://cloud.weave.works/).
## Installing the Chart
To install the chart:
```console
$ helm install --name weave-cloud --namespace weave --set token=<YOUR_WEAVE_CLOUD_SERVICE_TOKEN> stable/weave-cloud
```
To view the pods installed:
```console
$ kubectl get pods -n weave -l weave-cloud-component
```
To upgrade the chart:
```console
$ helm upgrade --reuse-values weave-cloud stable/weave-cloud
```
## Uninstalling the Chart
To uninstall/delete the `weave-cloud` chart:
```console
$ helm delete --purge weave-cloud
```
Delete the `weave` namespace:
```console
$ kubectl delete namespace weave
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
The following table lists the configurable parameters of the Weave Cloud Agents chart and their default values.
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `token` | Weave Cloud service token | _none_ _(**must be set**)_ |


@@ -1,28 +0,0 @@
{{- if .Values.token -}}
Weave Cloud agents have been installed!
First, verify all Pods are running:
kubectl get pods -n {{ .Release.Namespace }}
Next, log in to Weave Cloud (https://cloud.weave.works) and verify the agents are connected to your instance.
If you need help or have any questions, join our Slack to chat with us: https://slack.weave.works.
Happy hacking!
{{- else -}}
#######################################################
#### ERROR: Weave Cloud service token is missing ####
#######################################################
The installation of Weave Cloud agents is incomplete until you set the service token.
To retrieve your Weave Cloud service token, log in to your instance first: https://cloud.weave.works/instances
Then run:
helm upgrade {{ .Release.Name }} --set token=<YOUR_WEAVE_CLOUD_SERVICE_TOKEN> stable/weave-cloud
{{- end }}


@@ -1,40 +0,0 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ .Values.agent.name }}
labels:
app: {{ .Values.agent.name }}
chart: {{ template "weave-cloud.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
minReadySeconds: 30
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
app: {{ .Values.agent.name }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Values.agent.name }}
release: {{ .Release.Name }}
spec:
serviceAccount: {{ .Values.agent.name }}
serviceAccountName: {{ .Values.agent.name }}
terminationGracePeriodSeconds: 30
containers:
- name: {{ .Values.agent.name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- -agent.poll-url=https://get.weave.works/k8s/agent.yaml?instanceID={{"{{.InstanceID}}"}}
- -wc.hostname=cloud.weave.works
- -wc.token=$(WEAVE_CLOUD_TOKEN)
env:
- name: WEAVE_CLOUD_TOKEN
valueFrom:
secretKeyRef:
key: token
name: weave-cloud


@@ -1,39 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.agent.name }}
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ .Values.agent.name }}
labels:
name: {{ .Values.agent.name }}
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ .Values.agent.name }}
labels:
name: {{ .Values.agent.name }}
roleRef:
kind: ClusterRole
name: {{ .Values.agent.name }}
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ .Values.agent.name }}
namespace: {{ .Release.Namespace }}


@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: weave-cloud
data:
token: {{ .Values.token | b64enc }}
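Kubernetes Secret `data` values must be base64-encoded, which is what `b64enc` does at render time. The same encoding from a shell, using a made-up token value:

```shell
# Equivalent of Helm's `b64enc` template function.
printf '%s' 'token' | base64
# dG9rZW4=
```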


@@ -1,12 +0,0 @@
# Default values for weave-cloud.
# token: ""
agent:
name: weave-agent
image:
repository: quay.io/weaveworks/launcher-agent
tag: master-b60524c
pullPolicy: IfNotPresent


@@ -1,13 +0,0 @@
apiVersion: v1
appVersion: "1.3.0"
description: Flux is a tool that automatically ensures that the state of a cluster matches what is specified in version control
name: weave-flux
version: 0.2.0
home: https://weave.works
sources:
- https://github.com/weaveworks/flux
maintainers:
- name: stefanprodan
email: stefan@weave.works
engine: gotpl
icon: https://landscape.cncf.io/logos/weave-flux.svg


@@ -1,115 +0,0 @@
# Weave Flux OSS
Flux is a tool that automatically ensures that the state of a cluster matches what is specified in version control.
It is most useful when used as a deployment tool at the end of a Continuous Delivery pipeline. Flux will make sure that your new container images and config changes are propagated to the cluster.
## Introduction
This chart bootstraps a [Weave Flux](https://github.com/weaveworks/flux) deployment on
a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.7+
## Installing the Chart
To install the chart with the release name `cd`:
```console
$ helm install --name cd \
--set git.url=git@github.com:weaveworks/flux-example \
--namespace flux \
./charts/weave-flux
```
To install Flux with the Helm operator:
```console
$ helm install --name cd \
--set git.url=git@github.com:stefanprodan/weave-flux-helm-demo \
--set git.user="Stefan Prodan" \
--set git.email="stefan.prodan@gmail.com" \
--set helmOperator.create=true \
--namespace flux \
./charts/weave-flux
```
Be aware that the Helm operator is alpha quality, DO NOT use it on a production cluster.
The [configuration](#configuration) section lists the parameters that can be configured during installation.
### Setup Git deploy
At startup Flux generates a SSH key and logs the public key.
Find the SSH public key with:
```bash
export FLUX_POD=$(kubectl get pods --namespace flux -l "app=weave-flux,release=cd" -o jsonpath="{.items[0].metadata.name}")
kubectl -n flux logs $FLUX_POD | grep identity.pub | cut -d '"' -f2 | sed 's/.\{2\}$//'
```
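For illustration, here is the same extraction applied to a sample log line. The exact Flux log format is an assumption here, but the mechanics match the pipeline above: `cut` takes the second double-quote-delimited field and `sed` strips the trailing two characters (the escaped `\n` logged inside the quotes).

```shell
# Sample log line (format assumed for illustration).
line='ts=2018-09-11T10:00:00Z caller=main.go identity.pub="ssh-rsa AAAAB3Nza demo-key\n"'
printf '%s\n' "$line" | cut -d '"' -f2 | sed 's/.\{2\}$//'
# ssh-rsa AAAAB3Nza demo-key
```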
In order to sync your cluster state with GitHub you need to copy the public key and
create a deploy key with write access on your GitHub repository.
## Uninstalling the Chart
To uninstall/delete the `cd` deployment:
```console
$ helm delete --purge cd
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
You should also remove the deploy key from your GitHub repository.
## Configuration
The following table lists the configurable parameters of the Weave Flux chart and their default values.
| Parameter | Description | Default |
| ------------------------------- | ------------------------------------------ | ---------------------------------------------------------- |
| `image.repository` | Image repository | `quay.io/weaveworks/flux`
| `image.tag` | Image tag | `1.2.5`
| `image.pullPolicy` | Image pull policy | `IfNotPresent`
| `resources` | CPU/memory resource requests/limits | None
| `rbac.create` | If `true`, create and use RBAC resources | `true`
| `serviceAccount.create` | If `true`, create a new service account | `true`
| `serviceAccount.name` | Service account to be used | `weave-flux`
| `service.type` | Service type to be used | `ClusterIP`
| `service.port` | Service port to be used | `3030`
| `git.url` | URL of git repo with Kubernetes manifests | None
| `git.branch` | Branch of git repo to use for Kubernetes manifests | `master`
| `git.path` | Path within git repo to locate Kubernetes manifests (relative path) | None
| `git.user` | Username to use as git committer | `Weave Flux`
| `git.email` | Email to use as git committer | `support@weave.works`
| `git.chartsPath` | Path within git repo to locate Helm charts (relative path) | `charts`
| `git.pollInterval` | Period at which to poll git repo for new commits | `30s`
| `helmOperator.create` | If `true`, install the Helm operator | `false`
| `helmOperator.repository` | Helm operator image repository | `quay.io/weaveworks/helm-operator`
| `helmOperator.tag` | Helm operator image tag | `master-6f427cb`
| `helmOperator.pullPolicy` | Helm operator image pull policy | `IfNotPresent`
| `token` | Weave Cloud service token | None
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:
```console
$ helm upgrade --install --wait cd \
--set git.url=git@github.com:stefanprodan/podinfo \
--set git.path=deploy/auto-scaling \
--namespace flux \
./charts/weave-flux
```
## Upgrade
Update Weave Flux version with:
```console
helm upgrade --reuse-values cd \
--set image.tag=1.2.6 \
./charts/weave-flux
```


@@ -1,19 +0,0 @@
1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "weave-flux.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get svc -w {{ template "weave-flux.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "weave-flux.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "weave-flux.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
kubectl -n {{ .Release.Namespace }} port-forward $POD_NAME 8080:3030
{{- end }}
2. Get the Git deploy key by running these commands:
export FLUX_POD=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "weave-flux.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
kubectl -n {{ .Release.Namespace }} logs $FLUX_POD | grep identity.pub | cut -d '"' -f2 | sed 's/.\{2\}$//'


@@ -1,75 +0,0 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ template "weave-flux.fullname" . }}
labels:
app: {{ template "weave-flux.name" . }}
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "weave-flux.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "weave-flux.name" . }}
release: {{ .Release.Name }}
spec:
{{- if .Values.serviceAccount.create }}
serviceAccountName: {{ template "weave-flux.serviceAccountName" . }}
{{- end }}
volumes:
- name: git-key
secret:
secretName: {{ template "weave-flux.fullname" . }}-git-deploy
defaultMode: 0400
- name: git-keygen
emptyDir:
medium: Memory
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 3030
protocol: TCP
volumeMounts:
- name: git-key
mountPath: /etc/fluxd/ssh
readOnly: true
- name: git-keygen
mountPath: /var/fluxd/keygen
args:
- --ssh-keygen-dir=/var/fluxd/keygen
- --k8s-secret-name={{ template "weave-flux.fullname" . }}-git-deploy
- --memcached-hostname={{ template "weave-flux.fullname" . }}-memcached
- --git-url={{ .Values.git.url }}
- --git-branch={{ .Values.git.branch }}
- --git-path={{ .Values.git.path }}
- --git-user={{ .Values.git.user }}
- --git-email={{ .Values.git.email }}
- --git-poll-interval={{ .Values.git.pollInterval }}
- --sync-interval={{ .Values.git.pollInterval }}
{{- if .Values.token }}
- --connect=wss://cloud.weave.works/api/flux
- --token={{ .Values.token }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}


@@ -1,19 +0,0 @@
{{- if .Values.helmOperator.create -}}
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: fluxhelmreleases.helm.integrations.flux.weave.works
labels:
app: {{ template "weave-flux.name" . }}
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
group: helm.integrations.flux.weave.works
names:
kind: FluxHelmRelease
listKind: FluxHelmReleaseList
plural: fluxhelmreleases
scope: Namespaced
version: v1alpha2
{{- end -}}


@@ -1,43 +0,0 @@
{{- if .Values.helmOperator.create -}}
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ template "weave-flux.fullname" . }}-helm-operator
labels:
app: {{ template "weave-flux.name" . }}-helm-operator
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "weave-flux.name" . }}-helm-operator
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "weave-flux.name" . }}-helm-operator
release: {{ .Release.Name }}
spec:
{{- if .Values.serviceAccount.create }}
serviceAccountName: {{ template "weave-flux.serviceAccountName" . }}
{{- end }}
volumes:
- name: git-key
secret:
secretName: {{ template "weave-flux.fullname" . }}-git-deploy
defaultMode: 0400
containers:
- name: flux-helm-operator
image: "{{ .Values.helmOperator.repository }}:{{ .Values.helmOperator.tag }}"
imagePullPolicy: {{ .Values.helmOperator.pullPolicy }}
volumeMounts:
- name: git-key
mountPath: /etc/fluxd/ssh
readOnly: true
args:
- --git-url={{ .Values.git.url }}
- --git-branch={{ .Values.git.branch }}
- --git-charts-path={{ .Values.git.chartsPath }}
{{- end -}}


@@ -1,54 +0,0 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: {{ template "weave-flux.fullname" . }}-memcached
labels:
app: {{ template "weave-flux.name" . }}-memcached
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: {{ template "weave-flux.name" . }}-memcached
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "weave-flux.name" . }}-memcached
release: {{ .Release.Name }}
spec:
containers:
- name: memcached
image: memcached:1.4.25
imagePullPolicy: IfNotPresent
args:
- -m 64 # Maximum memory to use, in megabytes. 64MB is default.
- -p 11211 # Default port, but being explicit is nice.
- -vv # This gets us to the level of request logs.
ports:
- name: memcached
containerPort: 11211
---
apiVersion: v1
kind: Service
metadata:
name: {{ template "weave-flux.fullname" . }}-memcached
labels:
app: {{ template "weave-flux.name" . }}-memcached
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
clusterIP: None
ports:
- port: 11211
targetPort: memcached
protocol: TCP
name: memcached
selector:
app: {{ template "weave-flux.name" . }}-memcached
release: {{ .Release.Name }}

@@ -1,40 +0,0 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: {{ template "weave-flux.fullname" . }}
labels:
app: {{ template "weave-flux.name" . }}
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: {{ template "weave-flux.fullname" . }}
labels:
app: {{ template "weave-flux.name" . }}
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "weave-flux.fullname" . }}
subjects:
- name: {{ template "weave-flux.serviceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
kind: ServiceAccount
{{- end -}}

@@ -1,5 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ template "weave-flux.fullname" . }}-git-deploy
type: Opaque

@@ -1,19 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "weave-flux.fullname" . }}
labels:
app: {{ template "weave-flux.name" . }}
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ template "weave-flux.name" . }}
release: {{ .Release.Name }}

@@ -1,11 +0,0 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "weave-flux.serviceAccountName" . }}
labels:
app: {{ template "weave-flux.name" . }}
chart: {{ template "weave-flux.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end -}}

@@ -1,64 +0,0 @@
# Default values for weave-flux.
# Weave Cloud service token
token: ""
replicaCount: 1
image:
repository: quay.io/weaveworks/flux
tag: 1.3.0
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 3030
helmOperator:
create: false
repository: quay.io/weaveworks/helm-operator
tag: 0.1.0-alpha
pullPolicy: IfNotPresent
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
resources: {}
# If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
git:
# URL of git repo with Kubernetes manifests; e.g. git@github.com:weaveworks/flux-example
url: ""
# Branch of git repo to use for Kubernetes manifests
branch: "master"
# Path within git repo to locate Kubernetes manifests (relative path)
path: ""
# Username to use as git committer
user: "Weave Flux"
# Email to use as git committer
email: "support@weave.works"
# Path within git repo to locate Helm charts (relative path)
chartsPath: "charts"
# Period at which to poll git repo for new commits
pollInterval: "30s"

cmd/podcli/check.go

@@ -0,0 +1,245 @@
package main
import (
"bytes"
"context"
"crypto/tls"
"fmt"
"net"
"net/http"
"net/url"
"os"
"strings"
"time"
"github.com/spf13/cobra"
"go.uber.org/zap"
)
var (
retryCount int
retryDelay time.Duration
method string
body string
timeout time.Duration
)
var checkCmd = &cobra.Command{
Use: `check`,
Short: "Health check commands",
Long: "Commands for running health checks",
}
var checkUrlCmd = &cobra.Command{
Use: `http [address]`,
Short: "HTTP(S) health check",
Example: ` check http https://httpbin.org/anything --method=POST --retry=2 --delay=2s --timeout=3s --body='{"test"=1}'`,
RunE: runCheck,
}
var checkTcpCmd = &cobra.Command{
Use: `tcp [address]`,
Short: "TCP health check",
Example: ` check tcp httpbin.org:443 --retry=1 --delay=2s --timeout=2s`,
RunE: runCheckTCP,
}
var checkCertCmd = &cobra.Command{
Use: `cert [address]`,
Short: "SSL/TLS certificate validity check",
Example: ` check cert httpbin.org`,
RunE: runCheckCert,
}
func init() {
checkUrlCmd.Flags().StringVar(&method, "method", "GET", "HTTP method")
checkUrlCmd.Flags().StringVar(&body, "body", "", "HTTP POST/PUT content")
checkUrlCmd.Flags().IntVar(&retryCount, "retry", 0, "times to retry the HTTP call")
checkUrlCmd.Flags().DurationVar(&retryDelay, "delay", 1*time.Second, "wait duration between retries")
checkUrlCmd.Flags().DurationVar(&timeout, "timeout", 5*time.Second, "timeout")
checkCmd.AddCommand(checkUrlCmd)
checkTcpCmd.Flags().IntVar(&retryCount, "retry", 0, "times to retry the TCP check")
checkTcpCmd.Flags().DurationVar(&retryDelay, "delay", 1*time.Second, "wait duration between retries")
checkTcpCmd.Flags().DurationVar(&timeout, "timeout", 5*time.Second, "timeout")
checkCmd.AddCommand(checkTcpCmd)
checkCmd.AddCommand(checkCertCmd)
rootCmd.AddCommand(checkCmd)
}
func runCheck(cmd *cobra.Command, args []string) error {
if retryCount < 0 {
return fmt.Errorf("--retry must be zero or greater")
}
if len(args) < 1 {
return fmt.Errorf("address is required! example: check http https://httpbin.org")
}
address := args[0]
if !strings.HasPrefix(address, "http://") && !strings.HasPrefix(address, "https://") {
address = fmt.Sprintf("http://%s", address)
}
for n := 0; n <= retryCount; n++ {
if n > 0 { // no delay before the first attempt
time.Sleep(retryDelay)
}
req, err := http.NewRequest(method, address, bytes.NewBuffer([]byte(body)))
if err != nil {
logger.Info("check failed",
zap.String("address", address),
zap.Error(err))
os.Exit(1)
}
ctx, cancel := context.WithTimeout(req.Context(), timeout)
resp, err := http.DefaultClient.Do(req.WithContext(ctx))
cancel()
if err != nil {
logger.Info("check failed",
zap.String("address", address),
zap.Error(err))
continue
}
if resp.Body != nil {
resp.Body.Close()
}
if resp.StatusCode >= 200 && resp.StatusCode < 400 {
logger.Info("check succeeded",
zap.String("address", address),
zap.Int("status code", resp.StatusCode),
zap.String("response size", fmtContentLength(resp.ContentLength)))
os.Exit(0)
} else {
logger.Info("check failed",
zap.String("address", address),
zap.Int("status code", resp.StatusCode))
continue
}
}
os.Exit(1)
return nil
}
func runCheckTCP(cmd *cobra.Command, args []string) error {
if retryCount < 0 {
return fmt.Errorf("--retry must be zero or greater")
}
if len(args) < 1 {
return fmt.Errorf("address is required! example: check tcp httpbin.org:80")
}
address := args[0]
for n := 0; n <= retryCount; n++ {
if n > 0 { // no delay before the first attempt
time.Sleep(retryDelay)
}
conn, err := net.DialTimeout("tcp", address, timeout)
if err != nil {
logger.Info("check failed",
zap.String("address", address),
zap.Error(err))
continue
}
conn.Close()
logger.Info("check succeeded", zap.String("address", address))
os.Exit(0)
}
os.Exit(1)
return nil
}
func runCheckCert(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("address is required! example: check cert httpbin.org")
}
host := args[0]
if !strings.HasPrefix(host, "https://") {
host = "https://" + host
}
u, err := url.Parse(host)
if err != nil {
logger.Info("check failed",
zap.String("address", host),
zap.Error(err))
os.Exit(1)
}
address := u.Hostname() + ":443"
ipConn, err := net.DialTimeout("tcp", address, 5*time.Second)
if err != nil {
logger.Info("check failed",
zap.String("address", address),
zap.Error(err))
os.Exit(1)
}
defer ipConn.Close()
conn := tls.Client(ipConn, &tls.Config{
InsecureSkipVerify: true,
ServerName: u.Hostname(),
})
if err = conn.Handshake(); err != nil {
logger.Info("check failed",
zap.String("address", address),
zap.Error(err))
os.Exit(1)
}
defer conn.Close()
addr := conn.RemoteAddr()
_, _, err = net.SplitHostPort(addr.String())
if err != nil {
logger.Info("check failed",
zap.String("address", address),
zap.Error(err))
os.Exit(1)
}
cert := conn.ConnectionState().PeerCertificates[0]
timeNow := time.Now()
if timeNow.After(cert.NotAfter) {
logger.Info("check failed",
zap.String("address", address),
zap.String("issuer", cert.Issuer.CommonName),
zap.String("subject", cert.Subject.CommonName),
zap.Time("expired", cert.NotAfter))
os.Exit(1)
}
logger.Info("check succeeded",
zap.String("address", address),
zap.String("issuer", cert.Issuer.CommonName),
zap.String("subject", cert.Subject.CommonName),
zap.Time("notAfter", cert.NotAfter),
zap.Time("notBefore", cert.NotBefore))
return nil
}
func fmtContentLength(b int64) string {
const unit = 1000
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "kMGTPE"[exp])
}
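The `fmtContentLength` helper above formats the response size using SI (1000-based) units rather than binary (1024-based) ones. It is self-contained, so it can be exercised as a standalone sketch (the sample values below are illustrative):

```go
package main

import "fmt"

// fmtContentLength mirrors the helper from cmd/podcli/check.go:
// it renders a byte count with SI (1000-based) unit prefixes.
func fmtContentLength(b int64) string {
	const unit = 1000
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "kMGTPE"[exp])
}

func main() {
	fmt.Println(fmtContentLength(512))     // 512 B
	fmt.Println(fmtContentLength(1500))    // 1.5 kB
	fmt.Println(fmtContentLength(3200000)) // 3.2 MB
}
```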

cmd/podcli/main.go

@@ -0,0 +1,39 @@
package main
import (
"fmt"
"log"
"os"
"strings"
"github.com/spf13/cobra"
"go.uber.org/zap"
)
var rootCmd = &cobra.Command{
Use: "podcli",
Short: "podinfo command line",
Long: `
podinfo command line utilities`,
}
var (
logger *zap.Logger
)
func main() {
var err error
logger, err = zap.NewDevelopment()
if err != nil {
log.Fatalf("can't initialize zap logger: %v", err)
}
defer logger.Sync()
rootCmd.SetArgs(os.Args[1:])
if err := rootCmd.Execute(); err != nil {
e := err.Error()
fmt.Println(strings.ToUpper(e[:1]) + e[1:])
os.Exit(1)
}
}

cmd/podcli/version.go

@@ -0,0 +1,21 @@
package main
import (
"fmt"
"github.com/spf13/cobra"
"github.com/stefanprodan/k8s-podinfo/pkg/version"
)
func init() {
rootCmd.AddCommand(versionCmd)
}
var versionCmd = &cobra.Command{
Use: `version`,
Short: "Prints podcli version",
RunE: func(cmd *cobra.Command, args []string) error {
fmt.Println(version.VERSION)
return nil
},
}

cmd/podcli/ws.go

@@ -0,0 +1,143 @@
package main
import (
"encoding/hex"
"fmt"
"net/http"
"net/url"
"os"
"regexp"
"strings"
"github.com/chzyer/readline"
"github.com/fatih/color"
"github.com/gorilla/websocket"
"github.com/spf13/cobra"
"go.uber.org/zap"
)
var origin string
func init() {
wsCmd.Flags().StringVarP(&origin, "origin", "o", "", "websocket origin")
rootCmd.AddCommand(wsCmd)
}
var wsCmd = &cobra.Command{
Use: `ws [address]`,
Short: "Websocket client",
Example: ` ws localhost:9898/ws/echo`,
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("address is required")
}
address := args[0]
if !strings.HasPrefix(address, "ws://") && !strings.HasPrefix(address, "wss://") {
address = fmt.Sprintf("ws://%s", address)
}
dest, err := url.Parse(address)
if err != nil {
return err
}
if origin == "" {
originURL := *dest
if dest.Scheme == "wss" {
originURL.Scheme = "https"
} else {
originURL.Scheme = "http"
}
origin = originURL.String()
}
err = connect(dest.String(), origin, &readline.Config{
Prompt: "> ",
})
if err != nil {
logger.Info("websocket closed", zap.Error(err))
}
return nil
},
}
type session struct {
ws *websocket.Conn
rl *readline.Instance
errChan chan error
}
func connect(url, origin string, rlConf *readline.Config) error {
headers := make(http.Header)
headers.Add("Origin", origin)
ws, _, err := websocket.DefaultDialer.Dial(url, headers)
if err != nil {
return err
}
rl, err := readline.NewEx(rlConf)
if err != nil {
return err
}
defer rl.Close()
sess := &session{
ws: ws,
rl: rl,
errChan: make(chan error),
}
go sess.readConsole()
go sess.readWebsocket()
return <-sess.errChan
}
func (s *session) readConsole() {
for {
line, err := s.rl.Readline()
if err != nil {
s.errChan <- err
return
}
err = s.ws.WriteMessage(websocket.TextMessage, []byte(line))
if err != nil {
s.errChan <- err
return
}
}
}
func bytesToFormattedHex(bytes []byte) string {
text := hex.EncodeToString(bytes)
return regexp.MustCompile("(..)").ReplaceAllString(text, "$1 ")
}
func (s *session) readWebsocket() {
rxSprintf := color.New(color.FgGreen).SprintfFunc()
for {
msgType, buf, err := s.ws.ReadMessage()
if err != nil {
fmt.Fprint(s.rl.Stdout(), rxSprintf("< %s\n", err.Error()))
os.Exit(1)
return
}
var text string
switch msgType {
case websocket.TextMessage:
text = string(buf)
case websocket.BinaryMessage:
text = bytesToFormattedHex(buf)
default:
s.errChan <- fmt.Errorf("unknown websocket frame type: %d", msgType)
return
}
fmt.Fprint(s.rl.Stdout(), rxSprintf("< %s\n", text))
}
}
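When `--origin` is not supplied, `wsCmd` derives the Origin header from the websocket URL by swapping the scheme (wss becomes https, anything else http). Pulled out as a standalone sketch (`deriveOrigin` is a hypothetical name used here for illustration only):

```go
package main

import (
	"fmt"
	"net/url"
)

// deriveOrigin mirrors the fallback in cmd/podcli/ws.go: the Origin
// header is the websocket URL with the scheme swapped to HTTP(S).
func deriveOrigin(address string) (string, error) {
	dest, err := url.Parse(address)
	if err != nil {
		return "", err
	}
	originURL := *dest
	if dest.Scheme == "wss" {
		originURL.Scheme = "https"
	} else {
		originURL.Scheme = "http"
	}
	return originURL.String(), nil
}

func main() {
	o, _ := deriveOrigin("wss://example.com/ws/echo")
	fmt.Println(o) // https://example.com/ws/echo
}
```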

@@ -1,66 +1,192 @@
package main
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"time"
"github.com/spf13/pflag"
"github.com/spf13/viper"
"github.com/stefanprodan/k8s-podinfo/pkg/api"
"github.com/stefanprodan/k8s-podinfo/pkg/signals"
"github.com/stefanprodan/k8s-podinfo/pkg/version"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
func main() {
// flags definition
fs := pflag.NewFlagSet("default", pflag.ContinueOnError)
fs.Int("port", 9898, "port")
fs.String("level", "info", "log level debug, info, warn, error, fatal or panic")
fs.String("backend-url", "", "backend service URL")
fs.Duration("http-client-timeout", 2*time.Minute, "client timeout duration")
fs.Duration("http-server-timeout", 30*time.Second, "server read and write timeout duration")
fs.Duration("http-server-shutdown-timeout", 5*time.Second, "server graceful shutdown timeout duration")
fs.String("data-path", "/data", "data local path")
fs.String("config-path", "", "config dir path")
fs.String("config", "config.yaml", "config file name")
fs.String("ui-path", "./ui", "UI local path")
fs.String("ui-color", "blue", "UI color")
fs.String("ui-message", fmt.Sprintf("greetings from podinfo v%v", version.VERSION), "UI message")
fs.Bool("random-delay", false, "between 0 and 5 seconds random delay")
fs.Bool("random-error", false, "1/3 chances of a random response error")
fs.Int("stress-cpu", 0, "number of CPU cores with 100% load")
fs.Int("stress-memory", 0, "MB of data to load into memory")
versionFlag := fs.BoolP("version", "v", false, "get version number")
// parse flags
err := fs.Parse(os.Args[1:])
switch {
case err == pflag.ErrHelp:
os.Exit(0)
case err != nil:
fmt.Fprintf(os.Stderr, "Error: %s\n\n", err.Error())
fs.PrintDefaults()
os.Exit(2)
case *versionFlag:
fmt.Println(version.VERSION)
os.Exit(0)
}
// bind flags and environment variables
viper.BindPFlags(fs)
viper.RegisterAlias("backendUrl", "backend-url")
hostname, _ := os.Hostname()
viper.SetDefault("jwt-secret", "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9")
viper.Set("hostname", hostname)
viper.Set("version", version.VERSION)
viper.Set("revision", version.REVISION)
viper.SetEnvPrefix("PODINFO")
viper.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
viper.AutomaticEnv()
// load config from file
if _, err := os.Stat(filepath.Join(viper.GetString("config-path"), viper.GetString("config"))); err == nil {
viper.SetConfigName(strings.Split(viper.GetString("config"), ".")[0])
viper.AddConfigPath(viper.GetString("config-path"))
if err := viper.ReadInConfig(); err != nil {
fmt.Printf("Error reading config file, %v\n", err)
}
}
// configure logging
logger, _ := initZap(viper.GetString("level"))
defer logger.Sync()
stdLog := zap.RedirectStdLog(logger)
defer stdLog()
// start stress tests if any
beginStressTest(viper.GetInt("stress-cpu"), viper.GetInt("stress-memory"), logger)
// load HTTP server config
var srvCfg api.Config
if err := viper.Unmarshal(&srvCfg); err != nil {
logger.Panic("config unmarshal failed", zap.Error(err))
}
// log version and port
logger.Info("Starting podinfo",
zap.String("version", viper.GetString("version")),
zap.String("revision", viper.GetString("revision")),
zap.String("port", viper.GetString("port")),
)
// start HTTP server
srv, _ := api.NewServer(&srvCfg, logger)
stopCh := signals.SetupSignalHandler()
srv.ListenAndServe(stopCh)
}
func initZap(logLevel string) (*zap.Logger, error) {
level := zap.NewAtomicLevelAt(zapcore.InfoLevel)
switch logLevel {
case "debug":
level = zap.NewAtomicLevelAt(zapcore.DebugLevel)
case "info":
level = zap.NewAtomicLevelAt(zapcore.InfoLevel)
case "warn":
level = zap.NewAtomicLevelAt(zapcore.WarnLevel)
case "error":
level = zap.NewAtomicLevelAt(zapcore.ErrorLevel)
case "fatal":
level = zap.NewAtomicLevelAt(zapcore.FatalLevel)
case "panic":
level = zap.NewAtomicLevelAt(zapcore.PanicLevel)
}
zapEncoderConfig := zapcore.EncoderConfig{
TimeKey: "ts",
LevelKey: "level",
NameKey: "logger",
CallerKey: "caller",
MessageKey: "msg",
StacktraceKey: "stacktrace",
LineEnding: zapcore.DefaultLineEnding,
EncodeLevel: zapcore.LowercaseLevelEncoder,
EncodeTime: zapcore.ISO8601TimeEncoder,
EncodeDuration: zapcore.SecondsDurationEncoder,
EncodeCaller: zapcore.ShortCallerEncoder,
}
zapConfig := zap.Config{
Level: level,
Development: false,
Sampling: &zap.SamplingConfig{
Initial: 100,
Thereafter: 100,
},
Encoding: "json",
EncoderConfig: zapEncoderConfig,
OutputPaths: []string{"stderr"},
ErrorOutputPaths: []string{"stderr"},
}
return zapConfig.Build()
}
var stressMemoryPayload []byte
func beginStressTest(cpus int, mem int, logger *zap.Logger) {
done := make(chan int)
if cpus > 0 {
logger.Info("starting CPU stress", zap.Int("cores", cpus))
for i := 0; i < cpus; i++ {
go func() {
for {
select {
case <-done:
return
default:
}
}
}()
}
}
if mem > 0 {
path := "/tmp/podinfo.data"
f, err := os.Create(path)
if err != nil {
logger.Error("memory stress failed", zap.Error(err))
}
if err := f.Truncate(1000000 * int64(mem)); err != nil {
logger.Error("memory stress failed", zap.Error(err))
}
stressMemoryPayload, err = ioutil.ReadFile(path)
f.Close()
os.Remove(path)
if err != nil {
logger.Error("memory stress failed", zap.Error(err))
}
logger.Info("starting memory stress", zap.Int("memory", len(stressMemoryPayload)))
}
}

@@ -9,75 +9,41 @@ spec:
metadata:
labels:
app: podinfo
#role: openfaas-system
annotations:
prometheus.io/scrape: 'true'
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:1.0.1
command:
- ./podinfo
- --port=9898
- --level=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
livenessProbe:
exec:
command:
- /bin/sh
- -c
- wget --quiet --tries=1 --spider http://localhost:9898/healthz || exit 1
readinessProbe:
exec:
command:
- /bin/sh
- -c
- wget --quiet --tries=1 --spider http://localhost:9898/readyz || exit 1
resources:
requests:
memory: "32Mi"
cpu: "10m"
limits:
memory: "256Mi"
cpu: "100m"
env:
- name: color
value: "blue"
- name: message
value: "Greetings from podinfo blue"
- name: backendURL
value: "http://podinfo-backend:9898/backend"

@@ -2,13 +2,14 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
labels:
app: podinfo
spec:
type: ClusterIP
ports:
- name: http
port: 9898
targetPort: 9898
protocol: TCP
selector:

@@ -0,0 +1,28 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt

@@ -0,0 +1,17 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafana
namespace: istio-system
spec:
hosts:
- "grafana.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
- route:
- destination:
host: grafana
timeout: 30s

@@ -0,0 +1,17 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafanax
namespace: istio-system
spec:
hosts:
- "grafanax.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
- route:
- destination:
host: grafanax
timeout: 30s

@@ -0,0 +1,17 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: jaeger
namespace: istio-system
spec:
hosts:
- "jaeger.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
- route:
- destination:
host: jaeger-query
timeout: 30s

@@ -0,0 +1,19 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: env
namespace: openfaas-fn
spec:
hosts:
- env
http:
- route:
- destination:
host: env
weight: 90
- destination:
host: env-canary
weight: 10
timeout: 30s

@@ -0,0 +1,51 @@
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: default
namespace: openfaas
spec:
peers:
- mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: default
namespace: openfaas
spec:
host: "*.openfaas.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: default
namespace: openfaas-fn
spec:
peers:
- mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: default
namespace: openfaas-fn
spec:
host: "*.openfaas-fn.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: openfaas-permissive
namespace: openfaas
spec:
targets:
- name: gateway
peers:
- mtls:
mode: PERMISSIVE

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
istio-injection: enabled
name: openfaas
---
apiVersion: v1
kind: Namespace
metadata:
labels:
istio-injection: enabled
name: openfaas-fn

@@ -0,0 +1,55 @@
apiVersion: config.istio.io/v1alpha2
kind: denier
metadata:
name: denyhandler
namespace: openfaas
spec:
status:
code: 7
message: Not allowed
---
apiVersion: config.istio.io/v1alpha2
kind: checknothing
metadata:
name: denyrequest
namespace: openfaas
spec:
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: denyopenfaasfn
namespace: openfaas
spec:
match: destination.namespace == "openfaas" && source.namespace == "openfaas-fn" && source.labels["role"] != "openfaas-system"
actions:
- handler: denyhandler.denier
instances: [ denyrequest.checknothing ]
---
apiVersion: config.istio.io/v1alpha2
kind: denier
metadata:
name: denyhandler
namespace: openfaas-fn
spec:
status:
code: 7
message: Not allowed
---
apiVersion: config.istio.io/v1alpha2
kind: checknothing
metadata:
name: denyrequest
namespace: openfaas-fn
spec:
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: denyopenfaasfn
namespace: openfaas-fn
spec:
match: destination.namespace == "openfaas-fn" && source.namespace != "openfaas" && source.labels["role"] != "openfaas-system"
actions:
- handler: denyhandler.denier
instances: [ denyrequest.checknothing ]

@@ -0,0 +1,17 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: gateway
namespace: openfaas
spec:
hosts:
- "openfaas.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
- route:
- destination:
host: gateway
timeout: 30s

@@ -0,0 +1,14 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo-backend
spec:
host: podinfo-backend
subsets:
- name: grey
labels:
color: grey
- name: orange
labels:
color: orange

@@ -0,0 +1,64 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-backend-grey
labels:
app: podinfo-backend
color: grey
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo-backend
color: grey
template:
metadata:
labels:
app: podinfo-backend
color: grey
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "grey"
- name: message
value: "Greetings from backend grey"
- name: backendURL
value: "http://podinfo-store:9898/echo" #"https://httpbin.org/anything"

@@ -0,0 +1,64 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-backend-orange
labels:
app: podinfo-backend
color: orange
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo-backend
color: orange
template:
metadata:
labels:
app: podinfo-backend
color: orange
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "orange"
- name: message
value: "Greetings from backend orange"
- name: backendURL
value: "http://podinfo-store:9898/echo" #"https://httpbin.org/anything"

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-backend
labels:
app: podinfo-backend
spec:
type: ClusterIP
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: podinfo-backend

@@ -0,0 +1,29 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo-backend
spec:
hosts:
- podinfo-backend
http:
# new version
# forward 100% of the traffic to orange
- match:
# - headers:
# x-api-version:
# regex: "^(v{0,1})0\\.6\\.([0-9]{1,3}).*"
- sourceLabels:
color: blue
route:
- destination:
host: podinfo-backend
subset: orange
timeout: 20s
# default route
# forward 100% of the traffic to grey
- route:
- destination:
host: podinfo-backend
subset: grey
timeout: 20s

@@ -0,0 +1,65 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-blue
labels:
app: podinfo
color: blue
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo
color: blue
template:
metadata:
labels:
app: podinfo
color: blue
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 10
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "blue"
- name: message
value: "Greetings from podinfo blue"
- name: backendURL
value: "http://podinfo-backend:9898/backend"

@@ -0,0 +1,14 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo
spec:
host: podinfo
subsets:
- name: blue
labels:
color: blue
- name: green
labels:
color: green

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
labels:
app: podinfo
spec:
type: ClusterIP
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: podinfo

@@ -0,0 +1,82 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
spec:
hosts:
- "podinfo.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
# Opera: forward 100% of the traffic to green
- match:
- headers:
user-agent:
regex: ".*OPR.*"
route:
- destination:
host: podinfo
subset: green
timeout: 30s
# Chrome: 50/50 load balancing between blue and green
- match:
- headers:
user-agent:
regex: ".*Chrome.*"
route:
- destination:
host: podinfo
subset: blue
weight: 50
- destination:
host: podinfo
subset: green
weight: 50
timeout: 30s
# Safari (excluding Chrome): weighted routing between blue (100) and green (0)
- match:
- headers:
user-agent:
regex: "^(?!.*Chrome).*Safari.*"
route:
- destination:
host: podinfo
subset: blue
weight: 100
- destination:
host: podinfo
subset: green
weight: 0
timeout: 30s
# Route based on color header
- match:
- headers:
x-color:
exact: "blue"
route:
- destination:
host: podinfo
subset: blue
timeout: 30s
retries:
attempts: 3
perTryTimeout: 3s
- match:
- headers:
x-color:
exact: "green"
route:
- destination:
host: podinfo
subset: green
timeout: 30s
retries:
attempts: 3
perTryTimeout: 3s
# Any other browser: forward 100% of the traffic to blue
- route:
- destination:
host: podinfo
subset: blue
timeout: 35s

@@ -0,0 +1,68 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-green
labels:
app: podinfo
color: green
spec:
replicas: 3
minReadySeconds: 15
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo
color: green
template:
metadata:
labels:
app: podinfo
color: green
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 4
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 2
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "green"
- name: message
value: "Greetings from podinfo green"
- name: backendURL
value: "http://podinfo-backend:9898/backend"

@@ -0,0 +1,15 @@
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: httpbin
spec:
hosts:
- httpbin.org
ports:
- number: 80
name: http
protocol: HTTP
- number: 443
name: https
protocol: HTTPS
resolution: DNS

@@ -0,0 +1,62 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-store
labels:
app: podinfo-store
version: "0.6"
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo-store
version: "0.6"
template:
metadata:
labels:
app: podinfo-store
version: "0.6"
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "yellow"
- name: message
value: "Greetings from store yellow"

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-store
labels:
app: podinfo-store
spec:
type: ClusterIP
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: podinfo-store

@@ -0,0 +1,27 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo-store
spec:
hosts:
- podinfo-store
http:
- match:
- sourceLabels:
color: orange
route:
- destination:
host: podinfo-store
timeout: 15s
fault:
delay:
percent: 50
fixedDelay: 500ms
abort:
percent: 50
httpStatus: 500
- route:
- destination:
host: podinfo-store
timeout: 15s

@@ -0,0 +1,8 @@
apiVersion: v1
data:
basic_auth_password: ODM4NzIwYTUxMjgxNDlkMzJmMTIxYTViMWQ4N2FjMzUwNzAxZThmZQ==
  basic_auth_user: YWRtaW4=
kind: Secret
metadata:
name: basic-auth
type: Opaque
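The values in this Secret are base64-encoded, not encrypted; anyone who can read the Secret can recover them. For example, the second value decodes to `admin`:

```bash
# Kubernetes Secret values are base64-encoded, not encrypted.
echo 'YWRtaW4=' | base64 --decode   # prints: admin
```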

@@ -6,7 +6,13 @@ metadata:
labels:
app: podinfo
spec:
replicas: 1
replicas: 3
minReadySeconds: 15
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
selector:
matchLabels:
app: podinfo
@@ -17,13 +23,21 @@ spec:
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.5.0-alpha6
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -debug=true
- -logLevel=debug
ports:
- name: http
containerPort: 9898
@@ -33,7 +47,7 @@ spec:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 5
periodSeconds: 2
failureThreshold: 1
livenessProbe:
httpGet:
@@ -47,6 +61,12 @@ spec:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "blue"
- name: message
value: "Greetings from podinfo blue"
- name: backendURL
value: "http://podinfo-backend:9898/echo"
- name: configPath
value: "/var/secrets"
volumeMounts:

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
labels:
app: podinfo
spec:
type: LoadBalancer
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: podinfo

docs/8-istio-openfaas.md Normal file

@@ -0,0 +1,478 @@
# OpenFaaS + Istio
### Install Istio
Download the latest Istio release:
```bash
curl -L https://git.io/getLatestIstio | sh -
```
Configure Istio with Prometheus, Jaeger and cert-manager:
```yaml
global:
nodePort: false
proxy:
includeIPRanges: "10.28.0.0/14,10.7.240.0/20"
ingress:
enabled: false
sidecarInjectorWebhook:
enabled: true
enableNamespacesByDefault: false
gateways:
enabled: true
grafana:
enabled: true
prometheus:
enabled: true
servicegraph:
enabled: true
tracing:
enabled: true
certmanager:
enabled: true
```
Save the above file as `istio-of.yaml` and install Istio with Helm:
```bash
helm upgrade --install istio ./install/kubernetes/helm/istio \
--namespace=istio-system \
-f ./istio-of.yaml
```
### Configure Istio Gateway with Let's Encrypt certs
Istio Gateway:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
```
Find the gateway public IP:
```bash
IP=$(kubectl -n istio-system describe svc/istio-ingressgateway | grep 'Ingress' | awk '{print $NF}')
```
Create a zone in GCP Cloud DNS with the following records:
```bash
istio.example.com. A $IP
*.istio.example.com. A $IP
```
Create a service account with Cloud DNS admin role (replace `my-gcp-project` with your project ID):
```bash
GCP_PROJECT=my-gcp-project
gcloud iam service-accounts create dns-admin \
--display-name=dns-admin \
--project=${GCP_PROJECT}
gcloud iam service-accounts keys create ./gcp-dns-admin.json \
--iam-account=dns-admin@${GCP_PROJECT}.iam.gserviceaccount.com \
--project=${GCP_PROJECT}
gcloud projects add-iam-policy-binding ${GCP_PROJECT} \
--member=serviceAccount:dns-admin@${GCP_PROJECT}.iam.gserviceaccount.com \
--role=roles/dns.admin
```
Create a Kubernetes secret with the GCP Cloud DNS admin key:
```bash
kubectl create secret generic cert-manager-credentials \
--from-file=./gcp-dns-admin.json \
--namespace=istio-system
```
Let's Encrypt issuer for GCP Cloud DNS:
```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: letsencrypt-prod
namespace: istio-system
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: email@example.com
privateKeySecretRef:
name: letsencrypt-prod
dns01:
providers:
- name: cloud-dns
clouddns:
serviceAccountSecretRef:
name: cert-manager-credentials
key: gcp-dns-admin.json
project: my-gcp-project
```
Wildcard cert:
```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: istio-gateway
namespace: istio-system
spec:
  secretName: istio-ingressgateway-certs
issuerRef:
name: letsencrypt-prod
commonName: "*.istio.example.com"
dnsNames:
- istio.example.com
acme:
config:
- dns01:
provider: cloud-dns
domains:
- "*.istio.example.com"
- "istio.example.com"
```
### Configure OpenFaaS mTLS and access policies
Create the OpenFaaS namespaces:
```yaml
apiVersion: v1
kind: Namespace
metadata:
labels:
istio-injection: enabled
name: openfaas
---
apiVersion: v1
kind: Namespace
metadata:
labels:
istio-injection: enabled
name: openfaas-fn
```
Create an Istio virtual service for the OpenFaaS Gateway:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: gateway
namespace: openfaas
spec:
hosts:
- "openfaas.istio.example.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
- route:
- destination:
host: gateway
timeout: 30s
```
Enable mTLS on `openfaas` namespace:
```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: default
namespace: openfaas
spec:
peers:
- mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: default
namespace: openfaas
spec:
host: "*.openfaas.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
```
Allow plaintext traffic to OpenFaaS Gateway:
```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: permissive
namespace: openfaas
spec:
targets:
- name: gateway
peers:
- mtls:
mode: PERMISSIVE
```
Enable mTLS on `openfaas-fn` namespace:
```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: default
namespace: openfaas-fn
spec:
peers:
- mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: default
namespace: openfaas-fn
spec:
host: "*.openfaas-fn.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
```
Deny access to OpenFaaS core services from the `openfaas-fn` namespace except for system functions:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: denier
metadata:
name: denyhandler
namespace: openfaas
spec:
status:
code: 7
message: Not allowed
---
apiVersion: config.istio.io/v1alpha2
kind: checknothing
metadata:
name: denyrequest
namespace: openfaas
spec:
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: denyopenfaasfn
namespace: openfaas
spec:
match: destination.namespace == "openfaas" && source.namespace == "openfaas-fn" && source.labels["role"] != "openfaas-system"
actions:
- handler: denyhandler.denier
instances: [ denyrequest.checknothing ]
```
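The `match` field is a Mixer attribute expression evaluated per request. Its boolean logic can be sketched as a small shell function (an illustration with positional arguments; real Mixer attribute lookup is richer than this):

```bash
# denied DEST_NAMESPACE SRC_NAMESPACE SRC_ROLE_LABEL
# Mirrors the rule: deny calls into openfaas that originate in
# openfaas-fn, unless the caller carries role=openfaas-system.
denied() {
  [ "$1" = "openfaas" ] && [ "$2" = "openfaas-fn" ] && [ "$3" != "openfaas-system" ]
}

denied openfaas openfaas-fn "" && echo "plain function -> denied"
denied openfaas openfaas-fn openfaas-system || echo "system function -> allowed"
```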
Deny access to functions except for OpenFaaS core services:
```yaml
apiVersion: config.istio.io/v1alpha2
kind: denier
metadata:
name: denyhandler
namespace: openfaas-fn
spec:
status:
code: 7
message: Not allowed
---
apiVersion: config.istio.io/v1alpha2
kind: checknothing
metadata:
name: denyrequest
namespace: openfaas-fn
spec:
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: denyopenfaasfn
namespace: openfaas-fn
spec:
match: destination.namespace == "openfaas-fn" && source.namespace != "openfaas" && source.labels["role"] != "openfaas-system"
actions:
- handler: denyhandler.denier
instances: [ denyrequest.checknothing ]
```
### Install OpenFaaS
Add the OpenFaaS Helm chart repository:
```bash
$ helm repo add openfaas https://openfaas.github.io/faas-netes/
```
Create a secret named `basic-auth` in the `openfaas` namespace:
```bash
# generate a random password
password=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)
kubectl -n openfaas create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password=$password
```
Install OpenFaaS with Helm:
```bash
helm upgrade --install openfaas ./chart/openfaas \
--namespace openfaas \
--set functionNamespace=openfaas-fn \
--set operator.create=true \
--set securityContext=true \
--set basic_auth=true \
--set exposeServices=false \
--set operator.createCRD=true
```
Wait for OpenFaaS Gateway to come online:
```bash
watch curl -v http://openfaas.istio.example.com/healthz
```
Save your credentials in the faas-cli store:
```bash
echo $password | faas-cli login -g https://openfaas.istio.example.com -u admin --password-stdin
```
### Canary deployments for OpenFaaS functions
Create a generally available release for the `env` function, version 1.0.0:
```yaml
apiVersion: openfaas.com/v1alpha2
kind: Function
metadata:
name: env
namespace: openfaas-fn
spec:
name: env
image: stefanprodan/of-env:1.0.0
resources:
requests:
memory: "32Mi"
cpu: "10m"
limits:
memory: "64Mi"
cpu: "100m"
```
Create a canary release for version 1.1.0:
```yaml
apiVersion: openfaas.com/v1alpha2
kind: Function
metadata:
  name: env-canary
namespace: openfaas-fn
spec:
  name: env-canary
image: stefanprodan/of-env:1.1.0
resources:
requests:
memory: "32Mi"
cpu: "10m"
limits:
memory: "64Mi"
cpu: "100m"
```
Create an Istio virtual service with 10% traffic going to canary:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: env
namespace: openfaas-fn
spec:
hosts:
- env
http:
- route:
- destination:
host: env
weight: 90
- destination:
host: env-canary
weight: 10
timeout: 30s
```
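With these weights, each request independently lands on the canary with roughly 10% probability. A rough simulation of the split (illustration only; Envoy's weighted routing is not literally a per-request coin flip in every load-balancer mode):

```bash
# Simulate 2000 requests against a 90/10 weighted route.
canary=0
for _ in $(seq 1 2000); do
  if [ $((RANDOM % 100)) -lt 10 ]; then canary=$((canary + 1)); fi
done
echo "canary hits: $canary/2000"   # roughly 200
```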
Test traffic routing (one in ten calls should hit the canary release):
```bash
while true; do sleep 1; curl -sS https://openfaas.istio.example.com/function/env | grep HOSTNAME; done
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-59bf48fb9d-cjsjw
HOSTNAME=env-canary-5dffdf4458-4vnn2
```
Tracing the generally available release with Jaeger:
![ga-trace](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/openfaas-istio-ga-trace.png)
Tracing the canary release:
![canary-trace](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/openfaas-istio-canary-trace.png)
Monitor GA vs canary success rate and latency with Prometheus and Grafana:
![canary-prom](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/openfaas-istio-canary-prom.png)


@@ -3,9 +3,9 @@ entries:
ambassador:
- apiVersion: v1
appVersion: 0.29.0
created: 2018-08-04T02:00:00.9435927+03:00
created: 2018-09-11T22:17:53.254801678+03:00
description: A Helm chart for Datawire Ambassador
digest: a30c8cb38e696b09fda8269ad8465ce6fec6100cfc108ca85ecbc85913ca5c7f
digest: 5d8b6d4f129fed12c005a0f8ff022d4a1aead489498c244f4764e0b0c3104477
engine: gotpl
maintainers:
- email: stefanprodan@users.noreply.github.com
@@ -19,27 +19,117 @@ entries:
grafana:
- apiVersion: v1
appVersion: "1.0"
created: 2018-08-04T02:00:00.944250297+03:00
created: 2018-09-11T22:17:53.255623303+03:00
description: A Helm chart for Kubernetes
digest: abdcadc5cddcb7c015aa5bb64e59bfa246774ad9243b3eb3c2a814abb38f2776
digest: e5c4c01b886ca55c85adf3ff95cef5f6dd603833d7dae02e2c6da4235728da75
name: grafana
urls:
- https://stefanprodan.github.io/k8s-podinfo/grafana-0.1.0.tgz
version: 0.1.0
loadtest:
- apiVersion: v1
appVersion: "1.0"
created: 2018-09-11T22:17:53.255836039+03:00
description: Hey load test Helm chart for Kubernetes
digest: 5a1bf6cba24ada79e10af0b3780dacf64e4c10283fff47165010ee868ed02482
name: loadtest
urls:
- https://stefanprodan.github.io/k8s-podinfo/loadtest-0.1.0.tgz
version: 0.1.0
ngrok:
- apiVersion: v1
appVersion: "1.0"
created: 2018-08-04T02:00:00.944555634+03:00
created: 2018-09-11T22:17:53.256149474+03:00
description: A Ngrok Helm chart for Kubernetes
digest: 7bf5ed2ef63ccd5efb76bcd9a086b04816a162c51d6ab592bccf58c283acd2ea
digest: 989439d77088b2d3092b0ac0c65a4835335ec8194b9dbf36b2347c9a3d28944f
name: ngrok
urls:
- https://stefanprodan.github.io/k8s-podinfo/ngrok-0.1.0.tgz
version: 0.1.0
podinfo:
- apiVersion: v1
appVersion: 1.2.0
created: 2018-09-11T22:17:53.262775711+03:00
description: Podinfo Helm chart for Kubernetes
digest: 106e5c22922b6953381f9c73929d320128536c1a68c972323b860af5aa9182fe
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-1.2.0.tgz
version: 1.2.0
- apiVersion: v1
appVersion: 1.1.0
created: 2018-09-11T22:17:53.261768578+03:00
description: Podinfo Helm chart for Kubernetes
digest: 973d60c629d7ae476776098ad6b654d78d91f91b3faa8bc016b94c73c42be094
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-1.1.0.tgz
version: 1.1.0
- apiVersion: v1
appVersion: 1.0.0
created: 2018-09-11T22:17:53.260888709+03:00
description: Podinfo Helm chart for Kubernetes
digest: 82068727ba5b552341b14a980e954e27a8517f0ef76aab314c160b0f075e6de4
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-1.0.0.tgz
version: 1.0.0
- apiVersion: v1
appVersion: 0.6.0
created: 2018-09-11T22:17:53.259861051+03:00
description: Podinfo Helm chart for Kubernetes
digest: bd25a710eddb3985d3bd921a11022b5c68a04d37cf93a1a4aab17eeda35aa2f8
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-0.2.2.tgz
version: 0.2.2
- apiVersion: v1
appVersion: 0.5.1
created: 2018-09-11T22:17:53.259045852+03:00
description: Podinfo Helm chart for Kubernetes
digest: 631ca3e2db5553541a50b625f538e6a1f2a103c13aa8148fdd38baf2519e5235
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-0.2.1.tgz
version: 0.2.1
- apiVersion: v1
appVersion: 0.5.0
created: 2018-08-04T02:00:00.946091506+03:00
created: 2018-09-11T22:17:53.258116794+03:00
description: Podinfo Helm chart for Kubernetes
digest: dfe7cf44aef0d170549918b00966422a07e7611f9d0081fb34f5b5beb0641c00
engine: gotpl
@@ -55,7 +145,7 @@ entries:
version: 0.2.0
- apiVersion: v1
appVersion: 0.3.0
created: 2018-08-04T02:00:00.945649351+03:00
created: 2018-09-11T22:17:53.257068714+03:00
description: Podinfo Helm chart for Kubernetes
digest: 4865a2d8b269cf453935cda9661c2efb82c16411471f8c11221a6d03d9bb58b1
engine: gotpl
@@ -69,41 +159,53 @@ entries:
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-0.1.0.tgz
version: 0.1.0
weave-flux:
podinfo-istio:
- apiVersion: v1
appVersion: 1.3.0
created: 2018-08-04T02:00:00.947870112+03:00
description: Flux is a tool that automatically ensures that the state of a cluster
matches what is specified in version control
digest: 1f52e427bb1d728641405f5ad9c514e8861905c110c14db95516629d24443b7d
appVersion: 1.2.0
created: 2018-09-11T22:17:53.265466221+03:00
description: Podinfo Helm chart for Istio
digest: 8115e72f232f82eb3e6da1965364cfede7c069f95a627dddac45cfbe6cb90dc4
engine: gotpl
home: https://weave.works
icon: https://landscape.cncf.io/logos/weave-flux.svg
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefan@weave.works
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: weave-flux
name: podinfo-istio
sources:
- https://github.com/weaveworks/flux
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/weave-flux-0.2.0.tgz
- https://stefanprodan.github.io/k8s-podinfo/podinfo-istio-1.2.0.tgz
version: 1.2.0
- apiVersion: v1
appVersion: 1.1.0
created: 2018-09-11T22:17:53.264848234+03:00
description: Podinfo Helm chart for Istio
digest: bcceb63ff780a8f0ba0b30997040e4e82170f9cce17c26ec817648ed024c83f5
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo-istio
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-istio-0.2.0.tgz
version: 0.2.0
- apiVersion: v1
appVersion: 1.2.5
created: 2018-08-04T02:00:00.947340919+03:00
description: Flux is a tool that automatically ensures that the state of a cluster
matches what is specified in version control
digest: 9e18fb8d175f4fac3b054905c7110d18b6d18f884011df9e9d010c66337da7ec
appVersion: 0.6.0
created: 2018-09-11T22:17:53.263900568+03:00
description: Podinfo Helm chart for Istio
digest: f12f8aa1eca1328e9eaa30bd757f6ed3ff97205e2bf016a47265bc2de6a63d8f
engine: gotpl
home: https://weave.works
icon: https://www.weave.works/assets/images/bltd108e8f850ae9e7c/weave-logo-512.png
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefan@weave.works
name: Stefan Prodan
name: weave-flux
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo-istio
sources:
- https://github.com/weaveworks/flux
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/weave-flux-0.1.0.tgz
- https://stefanprodan.github.io/k8s-podinfo/podinfo-istio-0.1.0.tgz
version: 0.1.0
generated: 2018-08-04T02:00:00.942959149+03:00
generated: 2018-09-11T22:17:53.253846309+03:00

BIN docs/loadtest-0.1.0.tgz Normal file (binary file not shown)

Binary file not shown.

BIN docs/podinfo-0.2.1.tgz Normal file (binary file not shown)

Some files were not shown because too many files have changed in this diff.