Compare commits


117 Commits

Author SHA1 Message Date
Stefan Prodan
ecd204b15e Release v1.0.0 2018-08-22 00:57:53 +03:00
Stefan Prodan
979fd669df Use gorilla mux route name as Prometheus path label 2018-08-21 15:19:21 +03:00
Stefan Prodan
feac686e60 Release v1.0.0-beta.1 2018-08-21 12:03:34 +03:00
Stefan Prodan
d362dc5f81 Set env var prefix to PODINFO 2018-08-21 11:58:37 +03:00
Stefan Prodan
593ccaa0cd Add random delay and errors middleware 2018-08-21 03:12:20 +03:00
Stefan Prodan
0f098cf0f1 Add config file support 2018-08-21 02:02:47 +03:00
Stefan Prodan
2ddbc03371 Replace zerolog with zap 2018-08-21 02:01:26 +03:00
Stefan Prodan
f2d95bbf80 Add logging middleware and log level option 2018-08-20 17:03:07 +03:00
Stefan Prodan
7d18ec68b3 Use pflag, viper and zap 2018-08-20 11:30:18 +03:00
Stefan Prodan
774d34c1dd Rewrite HTTP server with gorilla mux 2018-08-20 11:29:11 +03:00
Stefan Prodan
f13d006993 Add Kubernetes probes handlers 2018-08-20 11:28:06 +03:00
Stefan Prodan
aeeb146c2a Add UI handler 2018-08-20 11:27:40 +03:00
Stefan Prodan
11bd74eff2 Add local storage read/write handler 2018-08-20 11:27:08 +03:00
Stefan Prodan
af6d11fd33 Add panic handler 2018-08-20 11:26:24 +03:00
Stefan Prodan
49746fe2fb Add fscache reader handler 2018-08-20 11:26:08 +03:00
Stefan Prodan
da24d729bb Add runtime info handler 2018-08-20 11:25:36 +03:00
Stefan Prodan
449fcca3a9 Add HTTP status code handler 2018-08-20 11:25:15 +03:00
Stefan Prodan
2b0a742974 Add echo headers handler 2018-08-20 11:24:49 +03:00
Stefan Prodan
153f4dce45 Add echo handler with backend propagation 2018-08-20 11:24:23 +03:00
Stefan Prodan
4c8d11cc3e Add delay handler 2018-08-20 11:23:45 +03:00
Stefan Prodan
08415ce2ce Add version handler 2018-08-20 11:23:13 +03:00
Stefan Prodan
d26b7a96d9 Add UI index handler 2018-08-20 11:22:48 +03:00
Stefan Prodan
3c897b8bd7 Rename git commit to revision 2018-08-20 11:21:51 +03:00
Stefan Prodan
511ab87a18 Update deps for v1.0 2018-08-20 11:20:56 +03:00
Stefan Prodan
21922197b5 Add resource usage to blue/green dashboard 2018-08-18 14:22:35 +03:00
Stefan Prodan
7ea943525f Add Helm chart for load testing 2018-08-17 18:45:52 +03:00
Stefan Prodan
57ff4465cd Add Istio Blue/Green Grafana dashboard 2018-08-17 17:25:04 +03:00
Stefan Prodan
a86ef1fdb6 Add frontend, backend and store chart values
- add Istio virtual service weight for blue/green
2018-08-17 15:41:23 +03:00
Stefan Prodan
ddf1b80e1b Log backend errors 2018-08-17 15:38:35 +03:00
Stefan Prodan
896aceb240 Add Helm chart for Istio canary deployments and A/B testing 2018-08-16 15:24:04 +03:00
Stefan Prodan
7996f76e71 Release v0.6.1
- update page title when hostname changes
2018-08-16 15:21:26 +03:00
Stefan Prodan
8b04a8f502 Remove old charts 2018-08-16 15:20:21 +03:00
Stefan Prodan
8a6a4e8901 Release v0.6
- Helm chart: use quay image, add color env var, rename backend env var, adjust deployment strategy and set liveness probe to 2s
2018-08-16 00:09:02 +03:00
Stefan Prodan
cf8531c224 Move ping to api/echo 2018-08-16 00:05:32 +03:00
Stefan Prodan
d1574a6601 Decrease Istio HTTP 503 errors with preStop 2018-08-15 19:42:08 +03:00
Stefan Prodan
75d93e0c54 Inject delay and failures for the orange backend 2018-08-15 13:37:40 +03:00
Stefan Prodan
7622dfb74f Add store service 2018-08-15 12:28:03 +03:00
Stefan Prodan
85a26ed71e Add X-Api-Version header
- inject version header for backend calls
- route frontend calls to backend based on API version
2018-08-15 11:16:20 +03:00
Stefan Prodan
81b22f08f8 Add instrumentation list 2018-08-15 11:14:59 +03:00
Stefan Prodan
7d9e3afde7 Beta release v0.6.0-beta.10 2018-08-14 16:41:58 +03:00
Stefan Prodan
3d2028a124 Display hostname as title 2018-08-14 16:41:14 +03:00
Stefan Prodan
1b56648f5b Enable HTTPS redirect in Istio gateway 2018-08-14 16:04:44 +03:00
Stefan Prodan
3a704215a4 Move the public gateway to istio-system ns
- expose Jaeger and Grafana
2018-08-14 15:57:07 +03:00
Stefan Prodan
25aaeff13c Ignore DS_Store 2018-08-14 13:33:36 +03:00
Stefan Prodan
3b93a3445e Make message and color configurable via env vars 2018-08-14 13:21:35 +03:00
Stefan Prodan
a6cc3d2ef9 Reload page when version changes and use fetch API for backend calls 2018-08-14 13:20:05 +03:00
Stefan Prodan
718d8ba4e0 Get external IP from httpbin.org 2018-08-14 11:24:22 +03:00
Stefan Prodan
24ceb25930 Beta release v0.6.0-beta.2 2018-08-13 14:56:13 +03:00
Stefan Prodan
fc8dfc7678 Add Istio Gateway manifests 2018-08-13 14:55:27 +03:00
Stefan Prodan
8e656fdfd0 Add UI/API response and forward OpenTracing headers to backend 2018-08-13 14:54:46 +03:00
Stefan Prodan
a945842e9b Add VueJS UI 2018-08-13 14:52:49 +03:00
Stefan Prodan
09a743f5c2 Add CPU and Memory stress test flags 2018-08-10 11:48:12 +03:00
Stefan Prodan
c44a58602e Release v0.5.1 2018-08-08 12:17:05 +03:00
Stefan Prodan
2ee11bf6b2 Remove deleted files from cache instead of clearing the whole cache 2018-08-08 12:14:26 +03:00
Stefan Prodan
70b0e92555 Release v0.5 2018-08-04 02:04:07 +03:00
Stefan Prodan
7a78c93a49 Set log level flag and update zerolog pkg 2018-08-04 02:02:47 +03:00
Stefan Prodan
be915d44cc Reload configmaps and secrets when kubelet updates them 2018-08-01 03:22:39 +03:00
Weave Flux
82f2f9ecf9 Automated: default:deployment/podinfo
[ci skip]
2018-07-05 16:20:52 +00:00
Weave Flux
035f78edc1 Deautomated: default:deployment/podinfo
[ci skip]
2018-07-05 16:18:53 +00:00
Weave Flux
91c61d4fa5 Automated: default:deployment/podinfo
[ci skip]
2018-07-05 16:18:37 +00:00
Weave Flux
e673dae20d Release all latest to default:deployment/podinfo
[ci skip]
2018-07-05 16:04:30 +00:00
Weave Flux
adfff4a923 Release stefanprodan/podinfo:62fa684 to default:deployment/podinfo
[ci skip]
2018-07-05 16:01:53 +00:00
Weave Flux
4db9d5a1ed Release stefanprodan/podinfo:92114c0 to default:deployment/podinfo
[ci skip]
2018-07-05 16:01:21 +00:00
Ilya Dmitrichenko
92114c05c9 Change hash algorithm 2018-07-05 16:20:48 +01:00
Stefan Prodan
62fa684440 Release v4.0 2018-06-14 16:03:16 -07:00
Stefan Prodan
2aba7a3ed2 Update release automation list 2018-05-23 14:01:25 +03:00
Stefan Prodan
fda68019ea Merge pull request #5 from errordeveloper/master
Fix deploy guard logic
2018-05-21 13:06:28 +03:00
Ilya Dmitrichenko
39dde13700 Fix deploy guard logic, use multiple lines 2018-05-21 09:55:02 +01:00
Stefan Prodan
2485a10189 Merge pull request #3 from errordeveloper/master 2018-05-19 16:42:50 +03:00
Ilya Dmitrichenko
6c3569e131 Skip deploy on PR 2018-05-18 16:54:42 +01:00
Ilya Dmitrichenko
9b3a033845 Production deployment manifest for skaffold blog 2018-05-18 16:29:22 +01:00
Stefan Prodan
f02ebc267a Merge pull request #2 from errordeveloper/master
Add CircleCI
2018-05-17 14:55:31 +03:00
Ilya Dmitrichenko
01631a0a43 Add CircleCI 2018-05-17 12:51:52 +01:00
Stefan Prodan
a1e5cb77fd Merge pull request #1 from errordeveloper/master
Add Skaffold config files
2018-05-15 16:17:23 +03:00
Ilya Dmitrichenko
cdc6765b51 Add skaffold 2018-05-15 12:38:53 +01:00
Ilya Dmitrichenko
ff9cf93b14 Add .dockerignore 2018-05-11 14:53:17 +01:00
Stefan Prodan
5665149191 Set default port to 9898 2018-05-11 16:13:47 +03:00
Stefan Prodan
5a1f009200 Add Weave Flux Helm Operator diagram 2018-05-10 13:49:05 +03:00
Stefan Prodan
b6be95ee77 Bump podinfo Helm chart app version to v0.3 2018-05-10 11:56:47 +03:00
Stefan Prodan
ad22fdb933 Add canary deployments docs for Istio and Ambassador 2018-05-10 11:50:19 +03:00
Stefan Prodan
9b287dbf5c Add git poll interval option to Flux chart 2018-05-07 12:23:26 +03:00
Stefan Prodan
e81277f217 Release Flux chart 1.3.0 and Helm Operator v1alpha2 2018-05-07 11:16:00 +03:00
Stefan Prodan
e24c83525a Use http_request_duration_seconds for RED metrics 2018-05-07 11:03:02 +03:00
Stefan Prodan
65d03a557b Fix Quay push 2018-05-07 11:00:02 +03:00
Stefan Prodan
e93d0682fb bump version to 0.3.0 2018-05-02 23:08:43 +02:00
Stefan Prodan
a1bedc8c43 Update Weave Flux chart (Weave Cloud token option) 2018-04-25 15:01:19 +03:00
Stefan Prodan
07d3192afb Add Weave Cloud service token option 2018-04-25 15:00:27 +03:00
Stefan Prodan
ee10c878a0 Istio Canary GitOps mention the cluster config repo 2018-04-25 14:52:26 +03:00
Stefan Prodan
db9bf53e4f Istio Canary GitOps pipeline 2018-04-25 14:22:56 +03:00
Stefan Prodan
53d2609d8f Weave Scope Istio canary observability 2018-04-25 11:39:31 +03:00
Stefan Prodan
b34653912d Weave Cloud Istio canary observability 2018-04-25 11:17:43 +03:00
Stefan Prodan
1a2029f74d Use Weave Flux Helm Operator master-d5c374c 2018-04-24 00:09:51 +03:00
Stefan Prodan
68babf42e1 Add Weave Flux Helm Operator 2018-04-22 13:39:29 +03:00
Stefan Prodan
1330decdaa Add Weave Flux to podinfo Helm repo 2018-04-22 12:17:13 +03:00
Stefan Prodan
1682f79478 Add Weave Flux Git deploy setup docs 2018-04-22 12:16:49 +03:00
Stefan Prodan
93dee060dc Add Weave Flux OSS Helm chart 2018-04-21 21:34:06 +03:00
Stefan Prodan
797a4200dd Add Weave Cloud chart 2018-04-20 12:26:39 +03:00
Stefan Prodan
0c84164b65 Add Istio ingress to v1alpha3, remove v1alpha2 2018-04-20 01:03:16 +03:00
Stefan Prodan
b104769f20 Istio use Hey to generate load 2018-04-20 00:56:36 +03:00
Stefan Prodan
4acfdba296 Add ClusterIP service definition 2018-04-19 10:21:05 +03:00
Stefan Prodan
b5719fea3f Use test namespace 2018-04-17 14:58:40 +03:00
Stefan Prodan
00106faf8d Istio install steps 2018-04-17 14:13:08 +03:00
Stefan Prodan
88f417ee1c split Istio configs 2018-04-17 13:56:23 +03:00
Stefan Prodan
94441ef933 Istio - using same hosts has no effect 2018-04-17 11:47:44 +03:00
Stefan Prodan
b1871f827b Istio broken 2018-04-12 11:37:15 +01:00
Stefan Prodan
753799812a Fix istio.io/v1alpha3 definitions 2018-04-12 11:08:45 +01:00
Stefan Prodan
6aa5cbbaee Canary istio.io/v1alpha3 2018-04-12 10:15:13 +01:00
Stefan Prodan
4efde133e5 fix Travis Quay login 2018-04-12 10:14:51 +01:00
Stefan Prodan
60c0601128 try fix Travis Docker login 2018-04-11 18:05:02 +01:00
Stefan Prodan
d4882b4212 Istio canary deployments 2018-04-11 15:27:41 +01:00
Stefan Prodan
e4c765160a All namespace ops from K9 pod 2018-04-10 16:44:46 +01:00
Stefan Prodan
130e1dac8e Clone GCP Git repo on K9 IDE startup 2018-04-09 14:49:27 +01:00
Stefan Prodan
510864654f Add GCP Git support 2018-04-09 11:52:05 +01:00
Stefan Prodan
310643b0df Add Flux to the k9 setup 2018-04-08 01:24:17 +03:00
Stefan Prodan
6de537a315 Use Cloud9 golang image 2018-04-07 02:46:47 +03:00
Stefan Prodan
5d992a92bb Clone git repo at startup
- mount known_hosts from the ssh secret
2018-04-07 02:15:55 +03:00
Stefan Prodan
0aade8c049 Automate Git server repo seeding 2018-04-07 01:40:13 +03:00
721 changed files with 241113 additions and 7851 deletions

.circleci/config.yml (new file, 38 lines)

@@ -0,0 +1,38 @@
version: 2
jobs:
build:
docker:
- image: errordeveloper/skaffold:66cc263ef18f107adce245b8fc622a8ea46385f2
steps:
- checkout
- setup_remote_docker: {docker_layer_caching: true}
- run:
name: Run unit tests and build the image with Skaffold
command: skaffold build --profile=test
deploy:
docker:
- image: errordeveloper/skaffold:66cc263ef18f107adce245b8fc622a8ea46385f2
steps:
- checkout
- setup_remote_docker: {docker_layer_caching: true}
- run:
name: Build and push the image to the registry with Skaffold
command: |
if [[ -z "${CIRCLE_PULL_REQUEST}" ]] && [[ "${CIRCLE_PROJECT_USERNAME}" = "stefanprodan" ]] ; then
echo $REGISTRY_PASSWORD | docker login --username $REGISTRY_USERNAME --password-stdin
skaffold build --profile=production
else
echo "Do not push image"
fi
workflows:
version: 2
main:
jobs:
- build
- deploy:
requires: [build]
filters:
branches: {only: [master]}
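The push guard in the `deploy` job can be sketched as a small predicate. This is an illustrative Python model of the shell `if` above (the real check runs in the CI shell): push only when the build is not a pull request and the project owner is `stefanprodan`.

```python
def should_push(pull_request_url, project_username):
    """Mirror of the CI shell guard: push the image only for non-PR builds
    on the upstream repo (CIRCLE_PULL_REQUEST unset, owner matches)."""
    return not pull_request_url and project_username == "stefanprodan"

# Forks and pull requests never push to the registry.
print(should_push("", "stefanprodan"))                         # upstream branch build
print(should_push("https://github.com/.../pull/1", "stefanprodan"))  # PR build
print(should_push("", "someone-else"))                         # fork build
```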

.dockerignore (new file, 9 lines)

@@ -0,0 +1,9 @@
docs
deploy
charts
cloudbuild.yaml
skaffold.yaml
.gitignore
.travis.yml
LICENSE
README.md

.gitignore (3 changed lines)

@@ -10,8 +10,11 @@
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
.DS_Store
# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
.glide/
.idea/
release/
build/
gcloud/


@@ -32,12 +32,12 @@ after_success:
- if [ -z "$DOCKER_USER" ]; then
echo "PR build, skipping Docker Hub push";
else
docker login -u $DOCKER_USER -p $DOCKER_PASS;
echo $DOCKER_PASS | docker login -u $DOCKER_USER --password-stdin;
make docker-push;
fi
- if [ -z "$QUAY_USER" ]; then
echo "PR build, skipping Quay push";
else
docker login -u $QUAY_USER -p $QUAY_PASS quay.io;
echo $QUAY_PASS | docker login -u $QUAY_USER --password-stdin quay.io;
make quay-push;
fi


@@ -6,7 +6,7 @@ RUN addgroup -S app \
curl openssl netcat-openbsd
WORKDIR /home/app
COPY ./ui ./ui
ADD podinfo .
RUN chown -R app:app ./


@@ -1,5 +1,6 @@
FROM alpine:3.7
COPY ./ui ./ui
ADD podinfo /podinfo
CMD ["./podinfo"]


@@ -11,7 +11,7 @@ RUN go test $(go list ./... | grep -v integration | grep -v /vendor/ | grep -v /
RUN gofmt -l -d $(find . -type f -name '*.go' -not -path "./vendor/*") && \
GIT_COMMIT=$(git rev-list -1 HEAD) && \
CGO_ENABLED=0 GOOS=linux go build -ldflags "-s -w \
-X github.com/stefanprodan/k8s-podinfo/pkg/version.GITCOMMIT=${GIT_COMMIT}" \
-X github.com/stefanprodan/k8s-podinfo/pkg/version.REVISION=${GIT_COMMIT}" \
-a -installsuffix cgo -o podinfo ./cmd/podinfo
FROM alpine:3.7
@@ -24,7 +24,7 @@ RUN addgroup -S app \
WORKDIR /home/app
COPY --from=builder /go/src/github.com/stefanprodan/k8s-podinfo/podinfo .
COPY ./ui ./ui
RUN chown -R app:app ./
USER app

Gopkg.lock (generated, 152 changed lines)

@@ -5,25 +5,72 @@
branch = "master"
name = "github.com/beorn7/perks"
packages = ["quantile"]
revision = "4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9"
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
[[projects]]
name = "github.com/fsnotify/fsnotify"
packages = ["."]
revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
version = "v1.4.7"
[[projects]]
name = "github.com/golang/protobuf"
packages = ["proto"]
revision = "925541529c1fa6821df4e44ce2723319eb2be768"
version = "v1.0.0"
revision = "aa810b61a9c79d51363740d207bb46cf8e620ed5"
version = "v1.2.0"
[[projects]]
name = "github.com/gorilla/context"
packages = ["."]
revision = "08b5f424b9271eedf6f9f0ce86cb9396ed337a42"
version = "v1.1.1"
[[projects]]
name = "github.com/gorilla/mux"
packages = ["."]
revision = "e3702bed27f0d39777b0b37b664b6280e8ef8fbf"
version = "v1.6.2"
[[projects]]
branch = "master"
name = "github.com/hashicorp/hcl"
packages = [
".",
"hcl/ast",
"hcl/parser",
"hcl/printer",
"hcl/scanner",
"hcl/strconv",
"hcl/token",
"json/parser",
"json/scanner",
"json/token"
]
revision = "ef8a98b0bbce4a65b5aa4c368430a80ddc533168"
[[projects]]
name = "github.com/magiconair/properties"
packages = ["."]
revision = "c2353362d570a7bfa228149c62842019201cfb71"
version = "v1.8.0"
[[projects]]
name = "github.com/matttproud/golang_protobuf_extensions"
packages = ["pbutil"]
revision = "3247c84500bff8d9fb6d579d800f20b3e091582c"
version = "v1.0.0"
revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
version = "v1.0.1"
[[projects]]
name = "github.com/pkg/errors"
branch = "master"
name = "github.com/mitchellh/mapstructure"
packages = ["."]
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
revision = "f15292f7a699fcc1a38a80977f80a046874ba8ac"
[[projects]]
name = "github.com/pelletier/go-toml"
packages = ["."]
revision = "c01d1270ff3e442a8a57cddc1c92dc1138598194"
version = "v1.2.0"
[[projects]]
name = "github.com/prometheus/client_golang"
@@ -38,7 +85,7 @@
branch = "master"
name = "github.com/prometheus/client_model"
packages = ["go"]
revision = "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c"
revision = "5c3871d89910bfb32f5fcab2aa4b9ec68e65a99f"
[[projects]]
branch = "master"
@@ -48,7 +95,7 @@
"internal/bitbucket.org/ww/goautoneg",
"model"
]
revision = "e4aa40a9169a88835b849a6efb71e05dc04b88f0"
revision = "c7de2306084e37d54b8be01f3541a8464345e9a5"
[[projects]]
branch = "master"
@@ -59,27 +106,94 @@
"nfs",
"xfs"
]
revision = "54d17b57dd7d4a3aa092476596b3f8a933bde349"
revision = "05ee40e3a273f7245e8777337fc7b46e533a9a92"
[[projects]]
name = "github.com/rs/zerolog"
name = "github.com/spf13/afero"
packages = [
".",
"internal/json",
"log"
"mem"
]
revision = "56a970de510213e50dbaa39ad73ac07c9ec75606"
version = "v1.5.0"
revision = "787d034dfe70e44075ccc060d346146ef53270ad"
version = "v1.1.1"
[[projects]]
name = "github.com/spf13/cast"
packages = ["."]
revision = "8965335b8c7107321228e3e3702cab9832751bac"
version = "v1.2.0"
[[projects]]
branch = "master"
name = "github.com/spf13/jwalterweatherman"
packages = ["."]
revision = "14d3d4c518341bea657dd8a226f5121c0ff8c9f2"
[[projects]]
name = "github.com/spf13/pflag"
packages = ["."]
revision = "9a97c102cda95a86cec2345a6f09f55a939babf5"
version = "v1.0.2"
[[projects]]
name = "github.com/spf13/viper"
packages = ["."]
revision = "907c19d40d9a6c9bb55f040ff4ae45271a4754b9"
version = "v1.1.0"
[[projects]]
name = "go.uber.org/atomic"
packages = ["."]
revision = "1ea20fb1cbb1cc08cbd0d913a96dead89aa18289"
version = "v1.3.2"
[[projects]]
name = "go.uber.org/multierr"
packages = ["."]
revision = "3c4937480c32f4c13a875a1829af76c98ca3d40a"
version = "v1.1.0"
[[projects]]
name = "go.uber.org/zap"
packages = [
".",
"buffer",
"internal/bufferpool",
"internal/color",
"internal/exit",
"zapcore"
]
revision = "ff33455a0e382e8a81d14dd7c922020b6b5e7982"
version = "v1.9.1"
[[projects]]
branch = "master"
name = "golang.org/x/sys"
packages = ["unix"]
revision = "1a700e749ce29638d0bbcb531cce1094ea096bd3"
[[projects]]
name = "golang.org/x/text"
packages = [
"internal/gen",
"internal/triegen",
"internal/ucd",
"transform",
"unicode/cldr",
"unicode/norm"
]
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
name = "gopkg.in/yaml.v2"
packages = ["."]
revision = "7f97868eec74b32b0982dd158a51a446d1da7eb5"
version = "v2.1.1"
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
version = "v2.2.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "4f1e9200a330a22000fc47075b59e68e57c94bcb3d9f444f3ce85cab77e07fde"
inputs-digest = "95fe64936946a78f8261f1054187eb37c9766694640a68915416d8ba9f192b6a"
solver-name = "gps-cdcl"
solver-version = 1


@@ -1,19 +1,26 @@
[[constraint]]
name = "github.com/pkg/errors"
version = "0.8.0"
[[constraint]]
name = "github.com/prometheus/client_golang"
version = "0.8.0"
[[constraint]]
name = "github.com/rs/zerolog"
version = "1.5.0"
name = "github.com/gorilla/mux"
version = "v1.6.2"
[[constraint]]
name = "gopkg.in/yaml.v2"
version = "2.1.1"
name = "go.uber.org/zap"
version = "v1.9.1"
[[override]]
name = "github.com/fsnotify/fsnotify"
version = "1.2.9"
[[constraint]]
name = "github.com/spf13/pflag"
version = "v1.0.2"
[[constraint]]
name = "github.com/spf13/viper"
version = "v1.1.0"
[prune]
go-tests = true


@@ -20,7 +20,8 @@ build:
@rm -rf build && mkdir build
@echo Building: linux/$(LINUX_ARCH) $(VERSION) ;\
for arch in $(LINUX_ARCH); do \
mkdir -p build/linux/$$arch && CGO_ENABLED=0 GOOS=linux GOARCH=$$arch go build -ldflags="-s -w -X $(GITREPO)/pkg/version.GITCOMMIT=$(GITCOMMIT)" -o build/linux/$$arch/$(NAME) ./cmd/$(NAME) ;\
mkdir -p build/linux/$$arch && CGO_ENABLED=0 GOOS=linux GOARCH=$$arch go build -ldflags="-s -w -X $(GITREPO)/pkg/version.REVISION=$(GITCOMMIT)" -o build/linux/$$arch/$(NAME) ./cmd/$(NAME) ;\
cp -r ui/ build/linux/$$arch/ui;\
done
.PHONY: tar
@@ -45,6 +46,7 @@ docker-build: tar
@for arch in $(LINUX_ARCH); do \
mkdir -p build/docker/linux/$$arch ;\
tar -xzf release/$(NAME)_$(VERSION)_linux_$$arch.tgz -C build/docker/linux/$$arch ;\
cp -r ui/ build/docker/linux/$$arch/ui;\
if [ $$arch == amd64 ]; then \
cp Dockerfile build/docker/linux/$$arch ;\
cp Dockerfile build/docker/linux/$$arch/Dockerfile.in ;\
@@ -71,7 +73,7 @@ docker-build: tar
.PHONY: docker-push
docker-push:
@echo Pushing: $(VERSION) to $(DOCKER_IMAGE_NAME)
for arch in $(LINUX_ARCH); do \
for arch in $(LINUX_ARCH); do \
docker push $(DOCKER_IMAGE_NAME):$(NAME)-$$arch ;\
done
manifest-tool push from-args --platforms $(PLATFORMS) --template $(DOCKER_IMAGE_NAME):podinfo-ARCH --target $(DOCKER_IMAGE_NAME):$(VERSION)
@@ -80,7 +82,7 @@ docker-push:
.PHONY: quay-push
quay-push:
@echo Pushing: $(VERSION) to quay.io/$(DOCKER_IMAGE_NAME):$(VERSION)
@cd build/docker/linux/amd64/ ; docker build -t quay.io/$(DOCKER_IMAGE_NAME):$(VERSION) . ; docker push quay.io/$(DOCKER_IMAGE_NAME):$(VERSION)
@docker build -t quay.io/$(DOCKER_IMAGE_NAME):$(VERSION) -f Dockerfile.ci . ; docker push quay.io/$(DOCKER_IMAGE_NAME):$(VERSION)
.PHONY: clean
clean:
@@ -103,11 +105,10 @@ dep:
.PHONY: charts
charts:
cd charts/ && helm package podinfo/
mv charts/podinfo-0.1.0.tgz docs/
cd charts/ && helm package podinfo-istio/
cd charts/ && helm package loadtest/
cd charts/ && helm package ambassador/
mv charts/ambassador-0.1.0.tgz docs/
cd charts/ && helm package grafana/
mv charts/grafana-0.1.0.tgz docs/
cd charts/ && helm package ngrok/
mv charts/ngrok-0.1.0.tgz docs/
mv charts/*.tgz docs/
helm repo index docs --url https://stefanprodan.github.io/k8s-podinfo --merge ./docs/index.yaml


@@ -5,33 +5,35 @@ that showcases best practices of running microservices in Kubernetes.
Specifications:
* Multi-arch build and release automation (Make/TravisCI)
* Release automation (Make/TravisCI/CircleCI/Quay.io/Google Cloud Container Builder/Skaffold/Weave Flux)
* Multi-platform Docker image (amd64/arm/arm64/ppc64le/s390x)
* Health checks (readiness and liveness)
* Graceful shutdown on interrupt signals
* Prometheus instrumentation (RED metrics)
* Dependency management with golang/dep
* Structured logging with zerolog
* Error handling with pkg/errors
* File watcher for secrets and configmaps
* Instrumented with Prometheus
* Tracing with Istio and Jaeger
* Structured logging with zap
* 12-factor app with viper
* Fault injection (random errors and latency)
* Helm chart
Web API:
* `GET /` prints runtime information, environment variables, labels and annotations
* `GET /` prints runtime information
* `GET /version` prints podinfo version and git commit hash
* `GET /metrics` http requests duration and Go runtime metrics
* `GET /metrics` returns HTTP request duration and Go runtime metrics
* `GET /healthz` used by Kubernetes liveness probe
* `GET /readyz` used by Kubernetes readiness probe
* `POST /readyz/enable` signals the Kubernetes LB that this instance is ready to receive traffic
* `POST /readyz/disable` signals the Kubernetes LB to stop sending requests to this instance
* `GET /error` returns code 500 and logs the error
* `GET /status/{code}` returns the status code
* `GET /panic` crashes the process with exit code 255
* `POST /echo` echoes the posted content and logs the SHA1 hash of the content
* `GET /echoheaders` prints the request HTTP headers
* `POST /job` long-running job, json body: `{"wait":2}`
* `POST /echo` forwards the call to the backend service and echoes the posted content
* `GET /headers` returns a JSON with the request HTTP headers
* `GET /delay/{seconds}` waits for the specified period
* `GET /configs` returns a JSON with configmaps and/or secrets mounted in the `config` volume
* `POST /write` writes the posted content to disk at /data/hash and returns the SHA1 hash of the content
* `POST /read` receives a SHA1 hash and returns the content of the file /data/hash if it exists
* `POST /backend` forwards the call to the backend service on `http://backend-podinfo:9898/echo`
* `GET /read/{hash}` returns the content of the file /data/hash if it exists
### Guides
@@ -39,5 +41,6 @@ Web API:
* [Horizontal Pod Auto-scaling](docs/2-autoscaling.md)
* [Monitoring and alerting with Prometheus](docs/3-monitoring.md)
* [StatefulSets with local persistent volumes](docs/4-statefulsets.md)
* [Canary Deployments and A/B Testing](docs/5-canary.md)
* [Expose Kubernetes services over HTTPS with Ngrok](docs/6-ngrok.md)
* [A/B Testing with Ambassador API Gateway](docs/5-canary.md)
* [Canary Deployments with Istio](docs/7-istio.md)
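The `/write` and `/read` endpoints described above form a small content-addressed store keyed by SHA1. A minimal in-memory sketch in Python (illustrative only — the service itself is Go and writes files under `/data/<hash>`):

```python
import hashlib

store = {}  # stands in for files under /data/<hash>

def write(content):
    """POST /write: persist the content and return its SHA1 hex digest."""
    digest = hashlib.sha1(content).hexdigest()
    store[digest] = content
    return digest

def read(digest):
    """GET /read/{hash}: return the content for a digest, or None if missing."""
    return store.get(digest)

h = write(b"hello")
print(h)        # SHA1 of the posted content
print(read(h))  # the original content, looked up by hash
```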


@@ -3,7 +3,7 @@ kind: Deployment
metadata:
name: {{ template "grafana.fullname" . }}
labels:
app: {{ template "grafana.name" . }}
app: {{ template "grafana.fullname" . }}
chart: {{ template "grafana.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
@@ -11,12 +11,12 @@ spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "grafana.name" . }}
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "grafana.name" . }}
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}
annotations:
prometheus.io/scrape: 'false'


@@ -15,5 +15,5 @@ spec:
protocol: TCP
name: http
selector:
app: {{ template "grafana.name" . }}
app: {{ template "grafana.fullname" . }}
release: {{ .Release.Name }}


@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@@ -0,0 +1,5 @@
apiVersion: v1
appVersion: "1.0"
description: Hey load test Helm chart for Kubernetes
name: loadtest
version: 0.1.0


@@ -0,0 +1 @@
{{ template "loadtest.fullname" . }} has been deployed successfully!


@@ -0,0 +1,32 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "loadtest.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "loadtest.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "loadtest.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
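The `fullname` helper above encodes a common Helm convention: reuse the release name when it already contains the chart name, otherwise join them, and keep the result within Kubernetes' 63-character DNS label limit. A rough Python model of that logic (illustrative, ignoring `fullnameOverride`):

```python
def fullname(release, chart):
    # If the release name already contains the chart name, use it as-is;
    # otherwise join release and chart. Truncate to 63 characters (the
    # DNS label limit) and strip any trailing "-" left by truncation.
    name = release if chart in release else "%s-%s" % (release, chart)
    return name[:63].rstrip("-")

print(fullname("loadtest-prod", "loadtest"))  # release already contains chart name
print(fullname("prod", "loadtest"))           # joined form
```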


@@ -0,0 +1,31 @@
{{- $fullname := include "loadtest.fullname" . -}}
{{- $name := include "loadtest.name" . -}}
{{- $chart := include "loadtest.chart" . -}}
{{- $image := .Values.image -}}
{{- range $test := .Values.tests }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ $fullname }}-{{ .name }}
labels:
app: {{ $name }}
chart: {{ $chart }}
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 3
jobTemplate:
spec:
template:
spec:
containers:
- name: loadtest
image: {{ $image }}
args:
- /bin/sh
- -c
- "hey -z 58s {{ $test.cmd }} {{ $test.url }}"
restartPolicy: OnFailure
{{- end -}}


@@ -0,0 +1,11 @@
# Default values for loadtest.
image: stefanprodan/loadtest:latest
tests:
- name: "blue"
url: "https://canary.istio.weavedx.com/api/echo"
cmd: "-h2 -m POST -d '{test: 1}' -H 'X-API-Version: 0.6.0' -c 50 -q 5"
- name: "green"
url: "https://canary.istio.weavedx.com/api/echo"
cmd: "-h2 -m POST -d '{test: 2}' -H 'X-API-Version: 0.6.1' -c 10 -q 5"
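With `hey`, `-c` is the number of concurrent workers and `-q` a per-worker rate limit in queries per second, so the "blue" test above drives roughly 250 rps and the "green" test roughly 50 rps over the 58-second window set by the chart's `-z 58s`. A quick sanity check of the expected request volume (assuming the target keeps up with the rate limit):

```python
def expected_requests(workers, qps_per_worker, duration_s):
    # hey: -c workers, -q per-worker QPS cap, -z run duration
    return workers * qps_per_worker * duration_s

print(expected_requests(50, 5, 58))  # "blue" test: 50 workers * 5 qps * 58 s
print(expected_requests(10, 5, 58))  # "green" test: 10 workers * 5 qps * 58 s
```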


@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@@ -0,0 +1,12 @@
apiVersion: v1
appVersion: "0.6.0"
description: Podinfo Helm chart for Istio
name: podinfo-istio
version: 0.1.0
home: https://github.com/stefanprodan/k8s-podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
maintainers:
- name: stefanprodan
email: stefanprodan@users.noreply.github.com
engine: gotpl


@@ -0,0 +1,80 @@
# Podinfo Istio
Podinfo is a tiny web application made with Go
that showcases best practices of running microservices in Kubernetes.
## Installing the Chart
Create an Istio enabled namespace:
```console
kubectl create namespace demo
kubectl label namespace demo istio-injection=enabled
```
Create an Istio Gateway in the `istio-system` namespace named `public-gateway`:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
```
Create the `frontend` release by specifying the external domain name:
```console
helm upgrade frontend --install ./charts/podinfo-istio \
--namespace=demo \
--set host=podinfo.example.com \
--set gateway.name=public-gateway \
--set gateway.create=false \
-f ./charts/podinfo-istio/frontend.yaml
```
Create the `backend` release:
```console
helm upgrade backend --install ./charts/podinfo-istio \
--namespace=demo \
-f ./charts/podinfo-istio/backend.yaml
```
Create the `store` release:
```console
helm upgrade store --install ./charts/podinfo-istio \
--namespace=demo \
-f ./charts/podinfo-istio/store.yaml
```
Start load test:
```console
helm upgrade --install loadtest ./charts/loadtest \
--namespace=loadtesting
```

charts/podinfo-istio/apply.sh (new executable file, 34 lines)

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
#Usage: fswatch -o ./podinfo-istio/ | xargs -n1 ./podinfo-istio/apply.sh
set -e
MARK='\033[0;32m'
NC='\033[0m'
log (){
echo -e "$(date +%Y-%m-%dT%H:%M:%S%z) ${MARK}${1}${NC}"
}
log "installing frontend"
helm upgrade frontend --install ./podinfo-istio \
--namespace=demo \
--set host=canary.istio.weavedx.com \
--set gateway.name=public-gateway \
--set gateway.create=false \
-f ./podinfo-istio/frontend.yaml
log "installing backend"
helm upgrade backend --install ./podinfo-istio \
--namespace=demo \
-f ./podinfo-istio/backend.yaml
log "installing store"
helm upgrade store --install ./podinfo-istio \
--namespace=demo \
-f ./podinfo-istio/store.yaml
log "finished installing frontend, backend and store"


@@ -0,0 +1,21 @@
# Default values for backend demo.
# expose the blue/green deployments inside the cluster
host: backend
# stable release
blue:
replicas: 2
tag: "1.0.0"
backend: http://store:9898/api/echo
# canary release
green:
replicas: 2
tag: "1.0.0"
routing:
# target green callers
- match:
- sourceLabels:
color: green
backend: http://store:9898/api/echo


@@ -0,0 +1,39 @@
# Default values for frontend demo.
# external domain
host:
exposeHost: true
# no more than one Gateway can be created on a cluster
# if TLS is enabled the istio-ingressgateway-certs secret must exist in istio-system ns
# if you have a Gateway running you can set the name to your own gateway and turn off create
gateway:
name: public-gateway
create: false
tls: true
httpsRedirect: true
# stable release
blue:
replicas: 2
tag: "1.0.0"
message: "Greetings human! Gabi is THE BEST!!!!"
backend: http://backend:9898/api/echo
# canary release
green:
replicas: 2
tag: "1.0.0"
routing:
# target Safari
- match:
- headers:
user-agent:
regex: "^(?!.*Chrome).*Safari.*"
# target API clients by version
- match:
- headers:
x-api-version:
regex: "^(v{0,1})0\\.6\\.([1-9]).*"
message: "Greetings from the green frontend"
backend: http://backend:9898/api/echo
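Both routing rules above lean on regular expressions, and the Safari rule needs a negative lookahead because Chrome's user agent also contains the word "Safari" — plain substring matching would misroute Chrome users to green. These are illustrative checks in Python, whose `re` module supports the same lookahead syntax (whether the mesh honors lookaheads depends on the regex engine the Istio/Envoy version uses); the user-agent strings are made-up samples:

```python
import re

safari_only = re.compile(r"^(?!.*Chrome).*Safari.*")
api_v06_canary = re.compile(r"^(v{0,1})0\.6\.([1-9]).*")

chrome_ua = "Mozilla/5.0 (X11; Linux x86_64) Chrome/67.0.3396.99 Safari/537.36"
safari_ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13) Version/11.1.2 Safari/605.1.15"

print(bool(safari_only.match(safari_ua)))      # Safari -> green
print(bool(safari_only.match(chrome_ua)))      # Chrome (contains "Safari") -> blue
print(bool(api_v06_canary.match("v0.6.2")))    # 0.6.1+ API clients -> green
print(bool(api_v06_canary.match("0.6.0")))     # 0.6.0 stays on blue
```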

File diff suppressed because it is too large.


@@ -0,0 +1,19 @@
# Default values for backend demo.
# expose the store deployment inside the cluster
host: store
# load balance 80/20 between blue and green
blue:
replicas: 2
tag: "1.0.0"
backend: https://httpbin.org/anything
weight: 80
green:
replicas: 2
tag: "1.0.0"
backend: https://httpbin.org/anything
externalServices:
- httpbin.org


@@ -0,0 +1 @@
{{ template "podinfo-istio.fullname" . }} has been deployed successfully!


@@ -0,0 +1,36 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "podinfo-istio.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
The release name is used as a full name.
*/}}
{{- define "podinfo-istio.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- define "podinfo-istio.blue" -}}
{{- printf "%s-%s" .Release.Name "blue" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "podinfo-istio.green" -}}
{{- printf "%s-%s" .Release.Name "green" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "podinfo-istio.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@@ -0,0 +1,78 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "podinfo-istio.blue" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
color: blue
version: {{ .Values.blue.tag }}
spec:
replicas: {{ .Values.blue.replicas }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ template "podinfo-istio.fullname" . }}
color: blue
template:
metadata:
labels:
app: {{ template "podinfo-istio.fullname" . }}
color: blue
version: {{ .Values.blue.tag }}
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: "{{ .Values.blue.repository }}:{{ .Values.blue.tag }}"
imagePullPolicy: {{ .Values.imagePullPolicy }}
command:
- ./podinfo
- --port={{ .Values.containerPort }}
- --level={{ .Values.logLevel }}
- --random-delay={{ .Values.blue.faults.delay }}
- --random-error={{ .Values.blue.faults.error }}
env:
- name: PODINFO_UI_COLOR
value: blue
{{- if .Values.blue.backend }}
- name: PODINFO_BACKEND_URL
value: {{ .Values.blue.backend }}
{{- end }}
{{- if .Values.blue.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.blue.message }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.containerPort }}
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 2
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 1
periodSeconds: 10
failureThreshold: 2
volumeMounts:
- name: data
mountPath: /data
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: data
emptyDir: {}


@@ -0,0 +1,20 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: {{ template "podinfo-istio.fullname" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
host: {{ template "podinfo-istio.fullname" . }}
subsets:
- name: blue
labels:
color: blue
{{- if gt .Values.green.replicas 0.0 }}
- name: green
labels:
color: green
{{- end }}


@@ -0,0 +1,22 @@
{{- if .Values.externalServices -}}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: {{ template "podinfo-istio.fullname" . }}-external-svcs
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
hosts:
{{- range .Values.externalServices }}
- {{ . }}
{{- end }}
location: MESH_EXTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
resolution: DNS
{{- end }}


@@ -0,0 +1,31 @@
{{- if .Values.gateway.create -}}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: {{ .Values.gateway.name }}
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: {{ .Values.gateway.httpsRedirect }}
{{- if .Values.gateway.tls }}
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
{{- end }}
{{- end }}


@@ -0,0 +1,80 @@
{{- if gt .Values.green.replicas 0.0 -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "podinfo-istio.green" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
color: green
version: {{ .Values.green.tag }}
spec:
replicas: {{ .Values.green.replicas }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ template "podinfo-istio.fullname" . }}
color: green
template:
metadata:
labels:
app: {{ template "podinfo-istio.fullname" . }}
color: green
version: {{ .Values.green.tag }}
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: "{{ .Values.green.repository }}:{{ .Values.green.tag }}"
imagePullPolicy: {{ .Values.imagePullPolicy }}
command:
- ./podinfo
- --port={{ .Values.containerPort }}
- --level={{ .Values.logLevel }}
- --random-delay={{ .Values.green.faults.delay }}
- --random-error={{ .Values.green.faults.error }}
env:
- name: PODINFO_UI_COLOR
value: green
{{- if .Values.green.backend }}
- name: PODINFO_BACKEND_URL
value: {{ .Values.green.backend }}
{{- end }}
{{- if .Values.green.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.green.message }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.containerPort }}
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 2
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 1
periodSeconds: 10
failureThreshold: 2
volumeMounts:
- name: data
mountPath: /data
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: data
emptyDir: {}
{{- end }}


@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "podinfo-istio.fullname" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: ClusterIP
ports:
- port: {{ .Values.containerPort }}
targetPort: http
protocol: TCP
name: http
selector:
app: {{ template "podinfo-istio.fullname" . }}


@@ -0,0 +1,43 @@
{{- $host := .Release.Name -}}
{{- $timeout := .Values.timeout -}}
{{- $greenWeight := (sub 100 (.Values.blue.weight|int)) | int -}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ template "podinfo-istio.fullname" . }}
labels:
app: {{ template "podinfo-istio.fullname" . }}
chart: {{ template "podinfo-istio.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
hosts:
- {{ .Values.host }}
{{- if .Values.exposeHost }}
gateways:
- {{ .Values.gateway.name }}.istio-system.svc.cluster.local
{{- end }}
http:
{{- if gt .Values.green.replicas 0.0 }}
{{- range .Values.green.routing }}
- match:
{{ toYaml .match | indent 6 }}
route:
- destination:
host: {{ $host }}
subset: green
timeout: {{ $timeout }}
{{- end }}
{{- end }}
- route:
- destination:
host: {{ template "podinfo-istio.fullname" . }}
subset: blue
weight: {{ .Values.blue.weight }}
{{- if gt .Values.green.replicas 0.0 }}
- destination:
host: {{ template "podinfo-istio.fullname" . }}
subset: green
weight: {{ $greenWeight }}
{{- end }}
timeout: {{ $timeout }}
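The template above computes the green weight as the complement of `.Values.blue.weight` (`sub 100 (.Values.blue.weight|int)`), so an 80/20 split only needs one value in the chart. A small simulation sketch of the resulting traffic split, assuming the store's `blue.weight: 80`:

```python
import random
from collections import Counter

blue_weight = 80                  # .Values.blue.weight from the store values
green_weight = 100 - blue_weight  # mirrors {{ sub 100 (.Values.blue.weight|int) }}
assert green_weight == 20

# Simulate weighted routing for 10,000 requests.
random.seed(1)
hits = Counter(
    random.choices(["blue", "green"], weights=[blue_weight, green_weight])[0]
    for _ in range(10_000)
)
assert hits["blue"] + hits["green"] == 10_000
assert hits["blue"] > hits["green"]  # roughly 80/20
```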


@@ -0,0 +1,60 @@
# Default values for podinfo-istio.
# host can be an external domain or a local one
host: podinfo
# if the host is an external domain it must be exposed via the Gateway
exposeHost: false
timeout: 30s
# creates public-gateway.istio-system.svc.cluster.local
# no more than one Gateway can be created on a cluster
# if TLS is enabled the istio-ingressgateway-certs secret must exist in istio-system ns
# if you have a Gateway running you can set the name to your own gateway and turn off create
gateway:
name: public-gateway
create: false
tls: false
httpsRedirect: false
# authorise external https services
#externalServices:
# - api.github.com
# - apis.google.com
# - googleapis.com
# stable release
# by default all traffic goes to blue
blue:
replicas: 2
repository: quay.io/stefanprodan/podinfo
tag: "1.0.0"
# green must have at least one replica to set weight under 100
weight: 100
message:
backend:
faults:
delay: false
error: false
# canary release
# disabled with 0 replicas
green:
replicas: 0
repository: quay.io/stefanprodan/podinfo
tag: "1.0.0"
message:
backend:
routing:
faults:
delay: false
error: false
# blue/green common settings
logLevel: info
containerPort: 9898
imagePullPolicy: IfNotPresent
resources:
limits:
requests:
cpu: 1m
memory: 16Mi


@@ -1,8 +1,8 @@
apiVersion: v1
appVersion: "0.2.1"
appVersion: "1.0.0"
description: Podinfo Helm chart for Kubernetes
name: podinfo
version: 0.1.0
version: 1.0.0
home: https://github.com/stefanprodan/k8s-podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo


@@ -8,7 +8,8 @@ that showcases best practices of running microservices in Kubernetes.
To install the chart with the release name `my-release`:
```console
$ helm install stable/podinfo --name my-release
$ helm repo add sp https://stefanprodan.github.io/k8s-podinfo
$ helm upgrade my-release --install sp/podinfo
```
The command deploys podinfo on the Kubernetes cluster in the default namespace.
@@ -31,23 +32,27 @@ The following table lists the configurable parameters of the podinfo chart and
Parameter | Description | Default
--- | --- | ---
`affinity` | node/pod affinities | None
`hpa.enabled` | Enables HPA | `false`
`hpa.cpu` | Target CPU usage per pod | None
`hpa.memory` | Target memory usage per pod | None
`hpa.requests` | Target requests per second per pod | None
`hpa.maxReplicas` | Maximum pod replicas | `10`
`ingress.hosts` | Ingress accepted hostnames | None
`ingress.tls` | Ingress TLS configuration | None
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`image.repository` | Image repository | `stefanprodan/podinfo`
`image.tag` | Image tag | `0.0.1`
`ingress.enabled` | Enables Ingress | `false`
`ingress.annotations` | Ingress annotations | None
`ingress.hosts` | Ingress accepted hostnames | None
`ingress.tls` | Ingress TLS configuration | None
`color` | UI color | blue
`backend` | echo backend URL | None
`faults.delay` | random HTTP response delays between 0 and 5 seconds | `false`
`faults.error` | 1/3 chances of a random HTTP response error | `false`
`hpa.enabled` | enables HPA | `false`
`hpa.cpu` | target CPU usage per pod | None
`hpa.memory` | target memory usage per pod | None
`hpa.requests` | target requests per second per pod | None
`hpa.maxReplicas` | maximum pod replicas | `10`
`ingress.hosts` | ingress accepted hostnames | None
`ingress.tls` | ingress TLS configuration | None
`image.pullPolicy` | image pull policy | `IfNotPresent`
`image.repository` | image repository | `stefanprodan/podinfo`
`image.tag` | image tag | `0.0.1`
`ingress.enabled` | enables ingress | `false`
`ingress.annotations` | ingress annotations | None
`ingress.hosts` | ingress accepted hostnames | None
`ingress.tls` | ingress TLS configuration | None
`message` | UI greetings message | None
`nodeSelector` | node labels for pod assignment | `{}`
`podAnnotations` | annotations to add to each pod | `{}`
`replicaCount` | desired number of pods | `1`
`replicaCount` | desired number of pods | `2`
`resources.requests/cpu` | pod CPU request | `1m`
`resources.requests/memory` | pod memory request | `16Mi`
`resources.limits/cpu` | pod CPU limit | None
@@ -56,7 +61,7 @@ Parameter | Description | Default
`service.internalPort` | internal port for the service | `9898`
`service.nodePort` | node port for the service | `31198`
`service.type` | type of service | `ClusterIP`
`tolerations` | List of node taints to tolerate | `[]`
`tolerations` | list of node taints to tolerate | `[]`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,


@@ -7,45 +7,70 @@ metadata:
chart: {{ template "podinfo.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
color: {{ .Values.color }}
version: {{ .Values.image.tag }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ template "podinfo.name" . }}
color: {{ .Values.color }}
version: {{ .Values.image.tag }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "podinfo.name" . }}
color: {{ .Values.color }}
version: {{ .Values.image.tag }}
release: {{ .Release.Name }}
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- ./podinfo
- -port={{ .Values.service.containerPort }}
{{- if .Values.logLevel }}
- -debug=true
{{- end }}
- --port={{ .Values.service.containerPort }}
- --level={{ .Values.logLevel }}
- --random-delay={{ .Values.faults.delay }}
- --random-error={{ .Values.faults.error }}
env:
- name: backend_url
- name: PODINFO_UI_COLOR
value: {{ .Values.color }}
{{- if .Values.message }}
- name: PODINFO_UI_MESSAGE
value: {{ .Values.message }}
{{- end }}
{{- if .Values.backend }}
- name: PODINFO_BACKEND_URL
value: {{ .Values.backend }}
{{- end }}
ports:
- name: http
containerPort: {{ .Values.service.containerPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: http
readinessProbe:
httpGet:
path: /readyz
port: http
port: 9898
initialDelaySeconds: 1
periodSeconds: 2
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 1
periodSeconds: 10
failureThreshold: 2
volumeMounts:
- name: data
mountPath: /data


@@ -1,11 +1,18 @@
# Default values for podinfo.
replicaCount: 1
backend: http://backend-podinfo:9898/echo
replicaCount: 2
logLevel: info
color: blue
backend: #http://backend-podinfo:9898/echo
message: #UI greetings
faults:
delay: false
error: false
image:
repository: stefanprodan/podinfo
tag: 0.2.1
repository: quay.io/stefanprodan/podinfo
tag: 1.0.0
pullPolicy: IfNotPresent
service:
@@ -14,7 +21,7 @@ service:
containerPort: 9898
nodePort: 31198
# Heapster or metrics-server add-on required
# metrics-server add-on required
hpa:
enabled: false
maxReplicas: 10
@@ -50,4 +57,3 @@ tolerations: []
affinity: {}
logLevel: debug


@@ -1,37 +1,190 @@
package main
import (
"flag"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"time"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/stefanprodan/k8s-podinfo/pkg/server"
"github.com/spf13/pflag"
"github.com/spf13/viper"
"github.com/stefanprodan/k8s-podinfo/pkg/api"
"github.com/stefanprodan/k8s-podinfo/pkg/signals"
"github.com/stefanprodan/k8s-podinfo/pkg/version"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
var (
port string
debug bool
)
func init() {
flag.StringVar(&port, "port", "8989", "Port to listen on.")
flag.BoolVar(&debug, "debug", false, "sets log level to debug")
}
func main() {
flag.Parse()
// flags definition
fs := pflag.NewFlagSet("default", pflag.ContinueOnError)
fs.Int("port", 9898, "port")
fs.String("level", "info", "log level debug, info, warn, error, fatal or panic")
fs.String("backend-url", "", "backend service URL")
fs.Duration("http-client-timeout", 2*time.Minute, "client timeout duration")
fs.Duration("http-server-timeout", 30*time.Second, "server read and write timeout duration")
fs.Duration("http-server-shutdown-timeout", 5*time.Second, "server graceful shutdown timeout duration")
fs.String("data-path", "/data", "data local path")
fs.String("config-path", "", "config dir path")
fs.String("config", "config.yaml", "config file name")
fs.String("ui-path", "./ui", "UI local path")
fs.String("ui-color", "blue", "UI color")
fs.String("ui-message", fmt.Sprintf("greetings from podinfo v%v", version.VERSION), "UI message")
fs.Bool("random-delay", false, "between 0 and 5 seconds random delay")
fs.Bool("random-error", false, "1/3 chances of a random response error")
fs.Int("stress-cpu", 0, "Number of CPU cores with 100% load")
fs.Int("stress-memory", 0, "MB of data to load into memory")
versionFlag := fs.Bool("version", false, "get version number")
zerolog.SetGlobalLevel(zerolog.InfoLevel)
if debug {
zerolog.SetGlobalLevel(zerolog.DebugLevel)
// parse flags
err := fs.Parse(os.Args[1:])
switch {
case err == pflag.ErrHelp:
os.Exit(0)
case err != nil:
fmt.Fprintf(os.Stderr, "Error: %s\n\n", err.Error())
fs.PrintDefaults()
os.Exit(2)
case *versionFlag:
fmt.Println(version.VERSION)
os.Exit(0)
}
log.Info().Msgf("Starting podinfo version %s commit %s", version.VERSION, version.GITCOMMIT)
log.Debug().Msgf("Starting HTTP server on port %v", port)
// bind flags and environment variables
viper.BindPFlags(fs)
viper.RegisterAlias("backendUrl", "backend-url")
hostname, _ := os.Hostname()
viper.Set("hostname", hostname)
viper.Set("version", version.VERSION)
viper.Set("revision", version.REVISION)
viper.SetEnvPrefix("PODINFO")
viper.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
viper.AutomaticEnv()
// load config from file
if _, err := os.Stat(filepath.Join(viper.GetString("config-path"), viper.GetString("config"))); err == nil {
viper.SetConfigName(strings.Split(viper.GetString("config"), ".")[0])
viper.AddConfigPath(viper.GetString("config-path"))
if err := viper.ReadInConfig(); err != nil {
fmt.Printf("Error reading config file, %v\n", err)
}
}
// configure logging
logger, _ := initZap(viper.GetString("level"))
defer logger.Sync()
stdLog := zap.RedirectStdLog(logger)
defer stdLog()
// start stress tests if any
beginStressTest(viper.GetInt("stress-cpu"), viper.GetInt("stress-memory"), logger)
// load HTTP server config
var srvCfg api.Config
if err := viper.Unmarshal(&srvCfg); err != nil {
logger.Panic("config unmarshal failed", zap.Error(err))
}
// log version and port
logger.Info("Starting podinfo",
zap.String("version", viper.GetString("version")),
zap.String("revision", viper.GetString("revision")),
zap.String("port", viper.GetString("port")),
)
// start HTTP server
srv, _ := api.NewServer(&srvCfg, logger)
stopCh := signals.SetupSignalHandler()
server.ListenAndServe(port, 5*time.Second, stopCh)
srv.ListenAndServe(stopCh)
}
func initZap(logLevel string) (*zap.Logger, error) {
level := zap.NewAtomicLevelAt(zapcore.InfoLevel)
switch logLevel {
case "debug":
level = zap.NewAtomicLevelAt(zapcore.DebugLevel)
case "info":
level = zap.NewAtomicLevelAt(zapcore.InfoLevel)
case "warn":
level = zap.NewAtomicLevelAt(zapcore.WarnLevel)
case "error":
level = zap.NewAtomicLevelAt(zapcore.ErrorLevel)
case "fatal":
level = zap.NewAtomicLevelAt(zapcore.FatalLevel)
case "panic":
level = zap.NewAtomicLevelAt(zapcore.PanicLevel)
}
zapEncoderConfig := zapcore.EncoderConfig{
TimeKey: "ts",
LevelKey: "level",
NameKey: "logger",
CallerKey: "caller",
MessageKey: "msg",
StacktraceKey: "stacktrace",
LineEnding: zapcore.DefaultLineEnding,
EncodeLevel: zapcore.LowercaseLevelEncoder,
EncodeTime: zapcore.ISO8601TimeEncoder,
EncodeDuration: zapcore.SecondsDurationEncoder,
EncodeCaller: zapcore.ShortCallerEncoder,
}
zapConfig := zap.Config{
Level: level,
Development: false,
Sampling: &zap.SamplingConfig{
Initial: 100,
Thereafter: 100,
},
Encoding: "json",
EncoderConfig: zapEncoderConfig,
OutputPaths: []string{"stderr"},
ErrorOutputPaths: []string{"stderr"},
}
return zapConfig.Build()
}
var stressMemoryPayload []byte
func beginStressTest(cpus int, mem int, logger *zap.Logger) {
done := make(chan int)
if cpus > 0 {
logger.Info("starting CPU stress", zap.Int("cores", cpus))
for i := 0; i < cpus; i++ {
go func() {
for {
select {
case <-done:
return
default:
}
}
}()
}
}
if mem > 0 {
path := "/tmp/podinfo.data"
f, err := os.Create(path)
if err != nil {
logger.Error("memory stress failed", zap.Error(err))
}
if err := f.Truncate(1000000 * int64(mem)); err != nil {
logger.Error("memory stress failed", zap.Error(err))
}
stressMemoryPayload, err = ioutil.ReadFile(path)
f.Close()
os.Remove(path)
if err != nil {
logger.Error("memory stress failed", zap.Error(err))
}
logger.Info("starting memory stress", zap.Int("memory", len(stressMemoryPayload)))
}
}
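With `SetEnvPrefix("PODINFO")` and the `-` → `_` key replacer above, every flag is also settable through an environment variable. A sketch of the resulting name mapping (the helper function is illustrative, not part of podinfo):

```python
def env_var_for(flag: str, prefix: str = "PODINFO") -> str:
    """Mirror viper's SetEnvPrefix + SetEnvKeyReplacer("-", "_") lookup."""
    return f"{prefix}_{flag.upper().replace('-', '_')}"

# These line up with the env vars the Helm charts set (PODINFO_UI_COLOR, etc.).
assert env_var_for("backend-url") == "PODINFO_BACKEND_URL"
assert env_var_for("ui-color") == "PODINFO_UI_COLOR"
assert env_var_for("random-delay") == "PODINFO_RANDOM_DELAY"
```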


@@ -0,0 +1,28 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt


@@ -0,0 +1,17 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafana
namespace: istio-system
spec:
hosts:
- "grafana.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
- route:
- destination:
host: grafana
timeout: 30s


@@ -0,0 +1,17 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: grafanax
namespace: istio-system
spec:
hosts:
- "grafanax.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
- route:
- destination:
host: grafanax
timeout: 30s


@@ -0,0 +1,17 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: jaeger
namespace: istio-system
spec:
hosts:
- "jaeger.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
- route:
- destination:
host: jaeger-query
timeout: 30s


@@ -0,0 +1,53 @@
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: podinfo-canary
namespace: test
labels:
app: podinfo
release: canary
spec:
replicas: 1
selector:
matchLabels:
app: podinfo
release: canary
template:
metadata:
labels:
app: podinfo
release: canary
annotations:
prometheus.io/scrape: 'true'
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.2.2
imagePullPolicy: Always
command:
- ./podinfo
- -port=9898
- -debug=true
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 5
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 1
periodSeconds: 10
failureThreshold: 2
resources:
requests:
memory: "32Mi"
cpu: "10m"


@@ -0,0 +1,18 @@
---
apiVersion: v1
kind: Service
metadata:
name: podinfo-canary
namespace: test
labels:
app: podinfo-canary
spec:
type: ClusterIP
ports:
- name: http
port: 9898
targetPort: http
protocol: TCP
selector:
app: podinfo
release: canary


@@ -0,0 +1,53 @@
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: podinfo-ga
namespace: test
labels:
app: podinfo
release: ga
spec:
replicas: 1
selector:
matchLabels:
app: podinfo
release: ga
template:
metadata:
labels:
app: podinfo
release: ga
annotations:
prometheus.io/scrape: 'true'
spec:
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.2.1
imagePullPolicy: Always
command:
- ./podinfo
- -port=9898
- -debug=true
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 5
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 1
periodSeconds: 10
failureThreshold: 2
resources:
requests:
memory: "32Mi"
cpu: "10m"


@@ -0,0 +1,18 @@
---
apiVersion: v1
kind: Service
metadata:
name: podinfo-ga
namespace: test
labels:
app: podinfo-ga
spec:
type: ClusterIP
ports:
- name: http
port: 9898
targetPort: http
protocol: TCP
selector:
app: podinfo
release: ga


@@ -0,0 +1,17 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: podinfo
namespace: test
annotations:
kubernetes.io/ingress.class: istio
spec:
rules:
- host: podinfo.co.uk
http:
paths:
- path: /.*
backend:
serviceName: podinfo
servicePort: 9898


@@ -0,0 +1,15 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo
namespace: test
spec:
name: podinfo.test
subsets:
- name: ga
labels:
release: ga
- name: canary
labels:
release: canary


@@ -0,0 +1,19 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: podinfo-gateway
namespace: test
spec:
selector:
app: podinfo
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- podinfo.co.uk
- podinfo.test.svc.cluster.local


@@ -0,0 +1,40 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
hosts:
- podinfo
- podinfo.co.uk
gateways:
- mesh
- podinfo-gateway
http:
- route:
- destination:
name: podinfo.test
subset: canary
weight: 20
- destination:
name: podinfo.test
subset: ga
weight: 80
# http:
# - match:
# - headers:
# x-user:
# exact: insider
# source_labels:
# release: ga
# route:
# - destination:
# name: podinfo.test
# subset: canary
# weight: 100
# - route:
# - destination:
# name: podinfo.test
# subset: ga
# weight: 100


@@ -0,0 +1,17 @@
---
apiVersion: v1
kind: Service
metadata:
name: podinfo
namespace: test
labels:
app: podinfo
spec:
type: ClusterIP
ports:
- name: http
port: 9898
targetPort: http
protocol: TCP
selector:
app: podinfo


@@ -0,0 +1,14 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo-backend
spec:
host: podinfo-backend
subsets:
- name: grey
labels:
color: grey
- name: orange
labels:
color: orange


@@ -0,0 +1,64 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-backend-grey
labels:
app: podinfo-backend
color: grey
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo-backend
color: grey
template:
metadata:
labels:
app: podinfo-backend
color: grey
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "grey"
- name: message
value: "Greetings from backend grey"
- name: backendURL
value: "http://podinfo-store:9898/echo" #"https://httpbin.org/anything"


@@ -0,0 +1,64 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-backend-orange
labels:
app: podinfo-backend
color: orange
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo-backend
color: orange
template:
metadata:
labels:
app: podinfo-backend
color: orange
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "orange"
- name: message
value: "Greetings from backend orange"
- name: backendURL
value: "http://podinfo-store:9898/echo" #"https://httpbin.org/anything"


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-backend
labels:
app: podinfo-backend
spec:
type: ClusterIP
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: podinfo-backend


@@ -0,0 +1,29 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo-backend
spec:
hosts:
- podinfo-backend
http:
# new version
# forward 100% of the traffic to orange
- match:
# - headers:
# x-api-version:
# regex: "^(v{0,1})0\\.6\\.([0-9]{1,3}).*"
- sourceLabels:
color: blue
route:
- destination:
host: podinfo-backend
subset: orange
timeout: 20s
# default route
# forward 100% of the traffic to grey
- route:
- destination:
host: podinfo-backend
subset: grey
timeout: 20s


@@ -0,0 +1,65 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-blue
labels:
app: podinfo
color: blue
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo
color: blue
template:
metadata:
labels:
app: podinfo
color: blue
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 10
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "blue"
- name: message
value: "Greetings from podinfo blue"
- name: backendURL
value: "http://podinfo-backend:9898/backend"


@@ -0,0 +1,14 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: podinfo
spec:
host: podinfo
subsets:
- name: blue
labels:
color: blue
- name: green
labels:
color: green


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
labels:
app: podinfo
spec:
type: ClusterIP
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: podinfo


@@ -0,0 +1,82 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
spec:
hosts:
- "podinfo.istio.weavedx.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
http:
# Opera: forward 100% of the traffic to green
- match:
- headers:
user-agent:
regex: ".*OPR.*"
route:
- destination:
host: podinfo
subset: green
timeout: 30s
# Chrome: 50/50 load balancing between blue and green
- match:
- headers:
user-agent:
regex: ".*Chrome.*"
route:
- destination:
host: podinfo
subset: blue
weight: 50
- destination:
host: podinfo
subset: green
weight: 50
timeout: 30s
# Safari: 70/30 load balancing between blue and green
- match:
- headers:
user-agent:
regex: "^(?!.*Chrome).*Safari.*"
route:
- destination:
host: podinfo
subset: blue
weight: 100
- destination:
host: podinfo
subset: green
weight: 0
timeout: 30s
# Route based on color header
- match:
- headers:
x-color:
exact: "blue"
route:
- destination:
host: podinfo
subset: blue
timeout: 30s
retries:
attempts: 3
perTryTimeout: 3s
- match:
- headers:
x-color:
exact: "green"
route:
- destination:
host: podinfo
subset: green
timeout: 30s
retries:
attempts: 3
perTryTimeout: 3s
# Any other browser: forward 100% of the traffic to blue
- route:
- destination:
host: podinfo
subset: blue
timeout: 35s
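Rule order matters in the VirtualService above: Opera's user agent contains the `Chrome` and `Safari` tokens as well as `OPR`, so the Opera match must come before the Chrome one, and anything unmatched falls through to the final blue route. A sketch of that first-match-wins evaluation (UA strings are samples, and the lookahead in the Safari rule is an assumption about the mesh's regex engine):

```python
import re

# Rules in the order they appear in the VirtualService; first match wins.
rules = [
    ("green (Opera)", r".*OPR.*"),
    ("blue/green 50-50 (Chrome)", r".*Chrome.*"),
    ("blue 70-30 (Safari)", r"^(?!.*Chrome).*Safari.*"),
]

def route(user_agent: str) -> str:
    for subset, pattern in rules:
        if re.match(pattern, user_agent):
            return subset
    return "blue (default)"

opera_ua = "Mozilla/5.0 AppleWebKit/537.36 Chrome/67.0.3396.87 Safari/537.36 OPR/54.0.2952.64"
assert route(opera_ua) == "green (Opera)"  # OPR rule shadows the Chrome/Safari tokens
assert route("Mozilla/5.0 Gecko/20100101 Firefox/61.0") == "blue (default)"
```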


@@ -0,0 +1,68 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-green
labels:
app: podinfo
color: green
spec:
replicas: 3
minReadySeconds: 15
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo
color: green
template:
metadata:
labels:
app: podinfo
color: green
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 4
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 2
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "green"
- name: message
value: "Greetings from podinfo green"
- name: backendURL
value: "http://podinfo-backend:9898/backend"


@@ -0,0 +1,15 @@
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: httpbin
spec:
hosts:
- httpbin.org
ports:
- number: 80
name: http
protocol: HTTP
- number: 443
name: https
protocol: HTTPS
resolution: DNS


@@ -0,0 +1,62 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo-store
labels:
app: podinfo-store
version: "0.6"
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: podinfo-store
version: "0.6"
template:
metadata:
labels:
app: podinfo-store
version: "0.6"
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
livenessProbe:
httpGet:
path: /healthz
port: 9898
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "yellow"
- name: message
value: "Greetings from store yellow"


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo-store
labels:
app: podinfo-store
spec:
type: ClusterIP
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: podinfo-store


@@ -0,0 +1,27 @@
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo-store
spec:
hosts:
- podinfo-store
http:
- match:
- sourceLabels:
color: orange
route:
- destination:
host: podinfo-store
timeout: 15s
fault:
delay:
percent: 50
fixedDelay: 500ms
abort:
percent: 50
httpStatus: 500
- route:
- destination:
host: podinfo-store
timeout: 15s


@@ -12,13 +12,22 @@ Create a secret with the Git ssh key:
kubectl apply -f ./deploy/k9/ssh-key.yaml
```
Create the Git Server deploy and service:
Create the Git Server deployment and service:
```bash
kubectl apply -f ./deploy/k9/git-dep.yaml
kubectl apply -f ./deploy/k9/git-svc.yaml
```
Deploy Flux (modify flux-dep.yaml and add your Weave Cloud token):
```bash
kubectl apply -f ./deploy/k9/memcache-dep.yaml
kubectl apply -f ./deploy/k9/memcache-svc.yaml
kubectl apply -f ./deploy/k9/flux-rbac.yaml
kubectl apply -f ./deploy/k9/flux-dep.yaml
```
Create the Cloud9 IDE deployment:
```bash
@@ -31,35 +40,21 @@ Find the public IP:
kubectl -n ide get svc --selector=name=ide
```
Open Cloud9 IDE in your browser, login with `username/password` and run the following commands:
Open Cloud9 IDE in your browser, login with `username/password` and config git:
```bash
ssh-keyscan gitsrv >> ~/.ssh/known_hosts
git config --global user.email "user@weavedx.com"
git config --global user.name "User"
```
Exec into the Git server and create a repo:
Commit a change to podinfo repo:
```bash
kubectl -n ide exec -it gitsrv-69b4cd5fc-dd6rf -- sh
/git-server # cd repos
/git-server # mkdir myrepo.git
/git-server # cd myrepo.git
/git-server # git init --shared=true
/git-server # git add .
/git-server # git config --global user.email "user@weavedx.com"
/git-server # git config --global user.name "User"
/git-server # git commit -m "init"
/git-server # git checkout -b dummy
```
Go back to the Cloud9 IDE and clone the repo:
```bash
git clone ssh://git@gitsrv/git-server/repos/myrepo.git
cd k8s-podinfo
rm Dockerfile.build
git add .
git commit -m "test"
git push origin master
```

deploy/k9/flux-dep.yaml Executable file

@@ -0,0 +1,52 @@
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
name: flux
name: flux
namespace: ide
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
name: flux
template:
metadata:
labels:
name: flux
spec:
serviceAccount: flux
volumes:
- name: ssh-git
secret:
defaultMode: 0400
secretName: ssh-git
- name: git-keygen
emptyDir:
medium: Memory
containers:
- name: flux
image: quay.io/weaveworks/flux:1.2.5
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3030
volumeMounts:
- name: ssh-git
mountPath: /root/.ssh
readOnly: true
- name: ssh-git
mountPath: /etc/fluxd/ssh
readOnly: true
- name: git-keygen
mountPath: /var/fluxd/keygen
args:
- --ssh-keygen-dir=/var/fluxd/keygen
- --k8s-secret-name=ssh-git
- --git-url=ssh://git@gitsrv/git-server/repos/cluster.git
- --git-branch=master
#- --git-path=deploy/canary
#- --connect=wss://cloud.weave.works/api/flux
#- --token=yghrfcs5berdqp68z7wfndcea93rq6nx

deploy/k9/flux-rbac.yaml Executable file

@@ -0,0 +1,36 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
name: flux
name: flux
namespace: ide
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
name: flux
name: flux
rules:
- apiGroups: ['*']
resources: ['*']
verbs: ['*']
- nonResourceURLs: ['*']
verbs: ['*']
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
name: flux
name: flux
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flux
subjects:
- kind: ServiceAccount
name: flux
namespace: ide


@@ -17,8 +17,13 @@ spec:
name: gitsrv
spec:
containers:
- image: jkarlos/git-server-docker
- image: stefanprodan/gitsrv:0.0.5
name: git
env:
- name: REPO
value: "cluster.git"
- name: TAR_URL
value: "https://github.com/stefanprodan/kubecon-cluster/archive/0.0.1.tar.gz"
ports:
- containerPort: 22
name: ssh
@@ -33,5 +38,7 @@ spec:
secret:
secretName: ssh-git
- name: git-server-data
persistentVolumeClaim:
claimName: git-server-data
emptyDir: {}
# - name: git-server-data
# persistentVolumeClaim:
# claimName: git-server-data

deploy/k9/k9-cfg.yaml Normal file

@@ -0,0 +1,16 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: k9-cfg
namespace: ide
data:
gcp-clone.sh: |
#!/usr/bin/env sh
export PATH=/google-cloud-sdk/bin:$PATH
git config --global credential.helper gcloud.sh
git config --global user.email "dx+training@weave.works"
git config --global user.name "k8s fan"
git clone -b master $1


@@ -21,23 +21,37 @@ spec:
serviceAccount: ide
serviceAccountName: ide
initContainers:
- command:
- name: git-clone-cluster
command:
- /bin/sh
- -c
- test -d /workspace/k8s-podinfo || git clone https://github.com/stefanprodan/k8s-podinfo
k8s-podinfo
image: stefanprodan/k9c:0.1.0
- test -d /workspace/cluster || git clone -b master ssh://git@gitsrv/git-server/repos/cluster.git
image: stefanprodan/k9c:v2-gcloud
imagePullPolicy: IfNotPresent
name: git-clone
volumeMounts:
- mountPath: /workspace
name: data
name: ide-workspace-data
- mountPath: /root/.ssh
name: ssh-git
- name: git-clone-podinfo
command:
- /bin/bash
- -c
- /root/gcp-clone.sh https://source.developers.google.com/p/dx-general/r/podinfo
image: stefanprodan/k9c:v2-gcloud
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /workspace
name: ide-workspace-data
- mountPath: /root/gcp-clone.sh
subPath: gcp-clone.sh
name: git-init
containers:
- name: ide
args:
- --auth
- username:password
image: stefanprodan/k9c:0.1.0
image: stefanprodan/k9c:v2-gcloud
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
@@ -71,7 +85,7 @@ spec:
timeoutSeconds: 1
volumeMounts:
- mountPath: /workspace
name: data
name: ide-workspace-data
- mountPath: /var/run/docker.sock
name: dockersocket
- mountPath: /root/.ssh
@@ -81,12 +95,16 @@ spec:
secret:
defaultMode: 0600
secretName: ssh-git
- name: data
- name: git-init
configMap:
defaultMode: 0744
name: k9-cfg
- name: dockersocket
hostPath:
path: /var/run/docker.sock
type: ""
- name: ide-workspace-data
emptyDir: {}
# - name: ide-workspace-data
# persistentVolumeClaim:
# claimName: ide-workspace-data
- hostPath:
path: /var/run/docker.sock
type: ""
name: dockersocket


@@ -19,15 +19,11 @@ rules:
resources:
- '*'
verbs:
- get
- list
- watch
- '*'
- nonResourceURLs:
- '*'
verbs:
- get
- list
- watch
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding

deploy/k9/memcache-dep.yaml Executable file

@@ -0,0 +1,31 @@
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
name: memcached
name: memcached
namespace: ide
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
name: memcached
template:
metadata:
labels:
name: memcached
spec:
containers:
- name: memcached
image: memcached:1.4.25
imagePullPolicy: IfNotPresent
args:
- -m 64 # Maximum memory to use, in megabytes. 64MB is default.
- -p 11211 # Default port, but being explicit is nice.
- -vv # This gets us to the level of request logs.
ports:
- name: clients
containerPort: 11211

deploy/k9/memcache-svc.yaml Executable file

@@ -0,0 +1,15 @@
---
apiVersion: v1
kind: Service
metadata:
name: memcached
namespace: ide
spec:
# The memcache client uses DNS to get a list of memcached servers and then
# uses a consistent hash of the key to determine which server to pick.
clusterIP: None
ports:
- name: memcached
port: 11211
selector:
name: memcached


@@ -3,6 +3,8 @@
apiVersion: v1
kind: Secret
data:
known_hosts: Z2l0c3J2IHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQURBUUFCQUFBQkFRRHQ2NU0wc2FGQTZFd3NqYzhtUTQwNWJJNDA2QnNGU0pyYkd6OUdGcmJXQm4wUnMzTG9pNFM2QURXZ1RXbGNzcEh2YmZtZjd2WFc5b2lUanhla2U4b0hQQ2ZGWXJIRmRQSzI2QmlnMkoxa0UyRHpST05pelpXY2R3OGRwcVJodnhsdDIrL0VKdXVheThDR1h6M1ZMQ1Y4TmdKYzVBWW1Bd05Qa25VaFdLeGFBemp3dlJkLzBjeVhyNHZ2Y1REY213UjYzb2lXY1JQa0hDWjVMQ2xGdVpFMDY1VWxtMm82Q2dJdGwrZTZNNW91RFNKV1pEcFlXV21tSkpKdjFEUW9ScnVOYmFmNWY0YmdXVmtLanJRLzBjQTRpV1dsa3dKTWxBV1FncDlzYUQwRzJGODNocmYyWGFwTS9jbFdURnlia3pQcVBxYXcyQkV3WFA2dldwNkExaVVSCmdpdHNydiBlY2RzYS1zaGEyLW5pc3RwMjU2IEFBQUFFMlZqWkhOaExYTm9ZVEl0Ym1semRIQXlOVFlBQUFBSWJtbHpkSEF5TlRZQUFBQkJCTTZXeVV4UVJQdFpoWUx4akc2WVJRT0hEL1NYLyt1STRYQm80NFVUU3UyMXVxbWYvbEc4Y0xXVGRNVnpEbFVEWTkvRHg0dEZ6OTZMVDk3a1VDMXBMSnM9CmdpdHNydiBzc2gtZWQyNTUxOSBBQUFBQzNOemFDMWxaREkxTlRFNUFBQUFJQU12MXoyWW5EdWc1TTRLbHAzRk1iQnZ3OU5kRnJ4N09tNXVFS0ZRczA3dAo=
identity: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBczNuS2xrdkhLYlBEVlZCZTJLdGJSQ0dIcmphYU5MVkI3bVRHMjEwSjZpRXg4RnJYCldkR1YvZDJOUWJhYVFDQ1JDaHc3THdqSFFkeU5lVHJoam0xbGgzY0xzRk04RTJxRUEwZ0hMdVVyL2dlTkx6K0kKN2pzc0xTakJ0MTB6NWVYVzFjNUJaWWRBdElOK3dOTmpPTFptR21uV0NCQjBBUmx2M3d6RFNjOXE4NXZ2UXRjRwpxWjNnY0tReS9ZVDd3TXM0Y052ZW9CWDlBNVZOcnZrTjlrT3VBYmlCTFpUMWtCMWxxNVVsQ3FRVUpBNFQ4bVN5Clh5eXNsejhGcWxTb2xac1FKQmtSZmlDYlNQcENPcEVmc1h4L3ltTzlCaGVzMTVBOW42cDlVc253dDRjRzhwYjYKaEV6K0ZYQ0M5N3QrUXJ6L2dIV2w1bGI5SFViWFJPQ3FSaGxZWFFJREFRQUJBb0lCQURPNmNhcDU4dEJSTUlhQgpZS1RnVnJDL1hVVFlGZ0FGRWhpczBTZmVuQUh3U1YxMlFVVnhBK01rblJjSWsxTFNVQnR5NFpmLzFyWmo1QjdCCjJzcmJPdjBkVWJBamZWZzNSZzlRRGtuMDRZWVpGUmMrSDdCU083eFVWK2tLb3UrckhBMkpvZzJxT3ZzTVAxZEMKVmdHOWlKWXFWUGNFRHZ0V0lvbE9PMmNsc2pTK0REZEd4b0J2amFRbzhHVlZkaUZrekdOVmdTQkZnM1dPZnRsOApKMjZyNndzYVVXZDYrYyt1aTRFdUtJTjg2SEhNT0t0bDExbjd6QjNYVWllTEZVSW1GUjhHNHRHQlZwZXhzeVFpCmxjc2dUM1NnWEZDOUp6Q0VSUGRrTXVLSHVWMDBad01maU1ZK3N4Z3RsdXJRMnNhbVhXTmV4eGRlanpDR1RIcWcKV2xRWkJtRUNnWUVBNE5CN3VwMjZURFlUV0tyZ014aGJueHJaT0wxY3VhT1lWQUJnWWh5b2JnMWVabE10Mk1pVgpVaWxaamFkRnZ5emZjR1BTVEpjSjVmMmkvQ0dzS243RXFxUXdoZnd1S3ZRZTQyazgwbk5BcUczdTkvdkl4bklCCnFGZW5kTTE3SlN2WkU3NXFCVE9uTXVVZ1NuNFJoTXpzOEg3UTFmZFQ4UGMvTVRmRVVKcTQzcGtDZ1lFQXpGOUMKd1g0Z0UvUnZlbWZRc0ZVZ29TQ0lQdkR1bFhNNzNsK09XRnByOE00MWxpeGcvd0lvQ2NFZlpGcFdLUkpqSmwvUwpOVFh3YVhnOGg4RGl3a3d3dzJmcmNvWTl2TGNIcGxvWVRkN1ZjUVk4UGRKdjNJeGFReld6SHpMR3N0M29hZ08rCmJDbStsMEY5TnY0VUdWRHUrT0RSQjJyRWo2b1ZGRmh0SUQxbmRtVUNnWUJHS3V3alQrMkFzZlFSM2F1Q1p4eloKcVFDWmhBajM3QWEwV1RXOENhUE1UYUhrSUJ3VUtHN3FxUHRKaWlicngyNnAzbzRaMTU2QVNVemdrd1h3Y1lhaQptQUtKSHkrdHVtb1ZvcGdZTzE2Mzh5LzkrSGt1N3hCellZQmpwV3JGTEUxaHF6SGVFOFFnejREbm56ZUtrb2QxCmZLOWp5UUZMR1hDQXhSNGg1bGpES1FLQmdRQytqUjlmNjZvYkVQQ1Q3NUhicHpPS0tCd0FtNEhJWkszd2M2WHoKNlRMMVRqOFdhd0J4SStDUzM3YldTWWhHT1RlckF2S3EzRVR4QWNObVM4amhva3BoRjFhbTdGVkp6Rm5jbCtwTApTTFkzOExsZ1p3SVhYK0dWQXMrbENpSExpaTMyRXRHTVpndW5XYzlXNCtWM2lVZVhVMzV4N1BHaWhkR3JxNXJyCjBYVFRKUUtCZ0FReUF0RlloVHRONktCSER2NFdiTD
QxcnBtcUlVcUlpV0R6a3FPT1ZXcHgzYkpTdWVNeDEyUjQKWHVVaGkwL2ZqbGFvMmYwWTBqbTBDUlQ5ZmlhQW56WHNMRXNzN2JYQ0ZZcGt3V3ZrNnNqV1BCWGdPUnBZbklHNQpRRWNFeklzRDFKQm1EY0RxdWxpZ0dnUzNIdGhiWTl5WW4vU3l4d0owcU5ob3BDS1d2OWNOCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
id_rsa: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBczNuS2xrdkhLYlBEVlZCZTJLdGJSQ0dIcmphYU5MVkI3bVRHMjEwSjZpRXg4RnJYCldkR1YvZDJOUWJhYVFDQ1JDaHc3THdqSFFkeU5lVHJoam0xbGgzY0xzRk04RTJxRUEwZ0hMdVVyL2dlTkx6K0kKN2pzc0xTakJ0MTB6NWVYVzFjNUJaWWRBdElOK3dOTmpPTFptR21uV0NCQjBBUmx2M3d6RFNjOXE4NXZ2UXRjRwpxWjNnY0tReS9ZVDd3TXM0Y052ZW9CWDlBNVZOcnZrTjlrT3VBYmlCTFpUMWtCMWxxNVVsQ3FRVUpBNFQ4bVN5Clh5eXNsejhGcWxTb2xac1FKQmtSZmlDYlNQcENPcEVmc1h4L3ltTzlCaGVzMTVBOW42cDlVc253dDRjRzhwYjYKaEV6K0ZYQ0M5N3QrUXJ6L2dIV2w1bGI5SFViWFJPQ3FSaGxZWFFJREFRQUJBb0lCQURPNmNhcDU4dEJSTUlhQgpZS1RnVnJDL1hVVFlGZ0FGRWhpczBTZmVuQUh3U1YxMlFVVnhBK01rblJjSWsxTFNVQnR5NFpmLzFyWmo1QjdCCjJzcmJPdjBkVWJBamZWZzNSZzlRRGtuMDRZWVpGUmMrSDdCU083eFVWK2tLb3UrckhBMkpvZzJxT3ZzTVAxZEMKVmdHOWlKWXFWUGNFRHZ0V0lvbE9PMmNsc2pTK0REZEd4b0J2amFRbzhHVlZkaUZrekdOVmdTQkZnM1dPZnRsOApKMjZyNndzYVVXZDYrYyt1aTRFdUtJTjg2SEhNT0t0bDExbjd6QjNYVWllTEZVSW1GUjhHNHRHQlZwZXhzeVFpCmxjc2dUM1NnWEZDOUp6Q0VSUGRrTXVLSHVWMDBad01maU1ZK3N4Z3RsdXJRMnNhbVhXTmV4eGRlanpDR1RIcWcKV2xRWkJtRUNnWUVBNE5CN3VwMjZURFlUV0tyZ014aGJueHJaT0wxY3VhT1lWQUJnWWh5b2JnMWVabE10Mk1pVgpVaWxaamFkRnZ5emZjR1BTVEpjSjVmMmkvQ0dzS243RXFxUXdoZnd1S3ZRZTQyazgwbk5BcUczdTkvdkl4bklCCnFGZW5kTTE3SlN2WkU3NXFCVE9uTXVVZ1NuNFJoTXpzOEg3UTFmZFQ4UGMvTVRmRVVKcTQzcGtDZ1lFQXpGOUMKd1g0Z0UvUnZlbWZRc0ZVZ29TQ0lQdkR1bFhNNzNsK09XRnByOE00MWxpeGcvd0lvQ2NFZlpGcFdLUkpqSmwvUwpOVFh3YVhnOGg4RGl3a3d3dzJmcmNvWTl2TGNIcGxvWVRkN1ZjUVk4UGRKdjNJeGFReld6SHpMR3N0M29hZ08rCmJDbStsMEY5TnY0VUdWRHUrT0RSQjJyRWo2b1ZGRmh0SUQxbmRtVUNnWUJHS3V3alQrMkFzZlFSM2F1Q1p4eloKcVFDWmhBajM3QWEwV1RXOENhUE1UYUhrSUJ3VUtHN3FxUHRKaWlicngyNnAzbzRaMTU2QVNVemdrd1h3Y1lhaQptQUtKSHkrdHVtb1ZvcGdZTzE2Mzh5LzkrSGt1N3hCellZQmpwV3JGTEUxaHF6SGVFOFFnejREbm56ZUtrb2QxCmZLOWp5UUZMR1hDQXhSNGg1bGpES1FLQmdRQytqUjlmNjZvYkVQQ1Q3NUhicHpPS0tCd0FtNEhJWkszd2M2WHoKNlRMMVRqOFdhd0J4SStDUzM3YldTWWhHT1RlckF2S3EzRVR4QWNObVM4amhva3BoRjFhbTdGVkp6Rm5jbCtwTApTTFkzOExsZ1p3SVhYK0dWQXMrbENpSExpaTMyRXRHTVpndW5XYzlXNCtWM2lVZVhVMzV4N1BHaWhkR3JxNXJyCjBYVFRKUUtCZ0FReUF0RlloVHRONktCSER2NFdiTDQx
cnBtcUlVcUlpV0R6a3FPT1ZXcHgzYkpTdWVNeDEyUjQKWHVVaGkwL2ZqbGFvMmYwWTBqbTBDUlQ5ZmlhQW56WHNMRXNzN2JYQ0ZZcGt3V3ZrNnNqV1BCWGdPUnBZbklHNQpRRWNFeklzRDFKQm1EY0RxdWxpZ0dnUzNIdGhiWTl5WW4vU3l4d0owcU5ob3BDS1d2OWNOCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
id_rsa.pub: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDemVjcVdTOGNwczhOVlVGN1lxMXRFSVlldU5wbzB0VUh1Wk1iYlhRbnFJVEh3V3RkWjBaWDkzWTFCdHBwQUlKRUtIRHN2Q01kQjNJMTVPdUdPYldXSGR3dXdVendUYW9RRFNBY3U1U3YrQjQwdlA0anVPeXd0S01HM1hUUGw1ZGJWemtGbGgwQzBnMzdBMDJNNHRtWWFhZFlJRUhRQkdXL2ZETU5KejJyem0rOUMxd2FwbmVCd3BETDloUHZBeXpodzI5NmdGZjBEbFUydStRMzJRNjRCdUlFdGxQV1FIV1dybFNVS3BCUWtEaFB5WkxKZkxLeVhQd1dxVktpVm14QWtHUkYrSUp0SStrSTZrUit4ZkgvS1k3MEdGNnpYa0QyZnFuMVN5ZkMzaHdieWx2cUVUUDRWY0lMM3UzNUN2UCtBZGFYbVZ2MGRSdGRFNEtwR0dWaGQgdXNlckB3ZWF2ZWR4LmNvbQo=
metadata:


@@ -0,0 +1,48 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
labels:
app: podinfo
spec:
replicas: 1
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
annotations:
prometheus.io/scrape: 'true'
spec:
containers:
- name: podinfod
image: podinfo
command:
- ./podinfo
- -port=9898
- -debug=true
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 5
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 1
periodSeconds: 10
failureThreshold: 2
resources:
requests:
memory: "32Mi"
cpu: "10m"


@@ -0,0 +1,50 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
labels:
app: podinfo
annotations:
flux.weave.works/automated: 'true'
spec:
replicas: 1
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
annotations:
prometheus.io/scrape: 'true'
spec:
containers:
- name: podinfod
image: stefanprodan/podinfo:92114c0
command:
- ./podinfo
- -port=9898
- -debug=true
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 5
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 1
periodSeconds: 10
failureThreshold: 2
resources:
requests:
memory: "32Mi"
cpu: "10m"


@@ -0,0 +1,8 @@
apiVersion: v1
data:
basic_auth_password: ODM4NzIwYTUxMjgxNDlkMzJmMTIxYTViMWQ4N2FjMzUwNzAxZThmZQ==
basic_auth_test: YWRtaW4=
kind: Secret
metadata:
name: basic-auth
type: Opaque


@@ -0,0 +1,79 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
labels:
app: podinfo
spec:
replicas: 3
minReadySeconds: 15
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
annotations:
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: podinfod
image: quay.io/stefanprodan/podinfo:0.6.0
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- sleep 3
command:
- ./podinfo
- -port=9898
- -logLevel=debug
ports:
- name: http
containerPort: 9898
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: 9898
initialDelaySeconds: 1
periodSeconds: 2
failureThreshold: 1
livenessProbe:
httpGet:
path: /healthz
port: 9898
initialDelaySeconds: 1
periodSeconds: 10
failureThreshold: 2
resources:
requests:
memory: "32Mi"
cpu: "10m"
env:
- name: color
value: "blue"
- name: message
value: "Greetings from podinfo blue"
- name: backendURL
value: "http://podinfo-backend:9898/echo"
- name: configPath
value: "/var/secrets"
volumeMounts:
- name: auth
readOnly: true
mountPath: "/var/secrets"
volumes:
- name: auth
secret:
secretName: basic-auth


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: podinfo
labels:
app: podinfo
spec:
type: LoadBalancer
ports:
- port: 9898
targetPort: http
protocol: TCP
name: http
selector:
app: podinfo


@@ -7,26 +7,26 @@ Prometheus query examples of key metrics to measure and alert upon.
**Request Rate** - the number of requests per second by instance
```
sum(irate(http_requests_count{job=~".*podinfo"}[1m])) by (instance)
sum(irate(http_request_duration_seconds_count{job=~".*podinfo"}[1m])) by (instance)
```
**Request Errors** - the number of failed requests per second by URL path
```
sum(irate(http_requests_count{job=~".*podinfo", status=~"5.."}[1m])) by (path)
sum(irate(http_request_duration_seconds_count{job=~".*podinfo", status=~"5.."}[1m])) by (path)
```
**Request Duration** - average duration of each request over 10 minutes
```
sum(rate(http_requests_sum{job=~".*podinfo"}[10m])) /
sum(rate(http_requests_count{job=~".*podinfo"}[10m]))
sum(rate(http_request_duration_seconds_sum{job=~".*podinfo"}[10m])) /
sum(rate(http_request_duration_seconds_count{job=~".*podinfo"}[10m]))
```
**Request Latency** - 99th percentile request latency over 10 minutes
```
histogram_quantile(0.99, sum(rate(http_requests_bucket{job=~".*podinfo"}[10m])) by (le))
histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket{job=~".*podinfo"}[10m])) by (le))
```
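The histogram counter above can also drive an availability ratio. A hedged sketch (not part of the original set) that reuses the `http_request_duration_seconds_count` metric and `status` label from the queries above, giving the fraction of non-5xx requests over 10 minutes:
```
sum(rate(http_request_duration_seconds_count{job=~".*podinfo", status!~"5.."}[10m])) /
sum(rate(http_request_duration_seconds_count{job=~".*podinfo"}[10m]))
```
Multiply by 100 for a percentage and alert when it falls below your SLO target.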
**Goroutines Rate** - the number of running goroutines over 10 minutes


@@ -1,4 +1,4 @@
# Canary Deployments and A/B Testing
# A/B Testing and Canary Deployments
Canary Deployment and A/B testing with Ambassador's Envoy API Gateway.


@@ -1,13 +1,13 @@
# Expose Kubernetes services over HTTPS with Ngrok
Have you ever wanted to expose a Kubernetes service running on Minikube on the internet and have a
temporary HTTPS address for it? If so then Ngrok is the perfect solution to do that without any
temporary HTTPS address for it? If so then Ngrok is a great fit to do that without any
firewall, NAT or DNS configurations.
If you are developing an application that works with webhooks or OAuth callbacks,
Ngrok can create a tunnel between your Kubernetes service and the Ngrok cloud platform and provide you with
a unique HTTPS URL that you can use to test and debug your service.
For this purpose I've made a Helm chart that you can use to deploy Ngrok on Kubernetes by specifying
I've made a Helm chart that you can use to deploy Ngrok on Kubernetes by specifying
a ClusterIP service that will get exposed on the internet.
What follows is a step-by-step guide on how you can use Ngrok as a reverse proxy to
@@ -63,13 +63,36 @@ helm install sp/podinfo --name webhook
This deploys `podinfo` in the default namespace and
creates a ClusterIP service with the address `webhook-podinfo:9898`.
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: podinfo
chart: podinfo-0.1.0
heritage: Tiller
release: webhook
name: webhook-podinfo
namespace: default
spec:
ports:
- name: http
port: 9898
protocol: TCP
targetPort: http
selector:
app: podinfo
release: webhook
type: ClusterIP
```
### Deploy Ngrok
Before you begin, go to [ngrok.com](https://ngrok.com) and register for a free account.
Ngrok will create a token for you; use it when installing the Ngrok chart.
Install Ngrok:
Install Ngrok by specifying the ClusterIP address you want to expose:
```bash
$ helm install sp/ngrok --name tunnel \

docs/7-istio.md Normal file

@@ -0,0 +1,231 @@
# Canary Deployments with Istio
### Install Istio
Download the latest release:
```bash
curl -L https://git.io/getLatestIstio | sh -
```
Add the istioctl client to your PATH:
```bash
cd istio-0.7.1
export PATH=$PWD/bin:$PATH
```
Install Istio services without enabling mutual TLS authentication:
```bash
kubectl apply -f install/kubernetes/istio.yaml
```
### Set Istio automatic sidecar injection
Generate certs:
```bash
./install/kubernetes/webhook-create-signed-cert.sh \
--service istio-sidecar-injector \
--namespace istio-system \
--secret sidecar-injector-certs
```
Install the sidecar injection configmap:
```bash
kubectl apply -f install/kubernetes/istio-sidecar-injector-configmap-release.yaml
```
Set the caBundle in the webhook install YAML that the Kubernetes api-server uses to invoke the webhook:
```bash
cat install/kubernetes/istio-sidecar-injector.yaml | \
./install/kubernetes/webhook-patch-ca-bundle.sh > \
install/kubernetes/istio-sidecar-injector-with-ca-bundle.yaml
```
Install the sidecar injector webhook:
```bash
kubectl apply -f install/kubernetes/istio-sidecar-injector-with-ca-bundle.yaml
```
Create the `test` namespace:
```bash
kubectl create namespace test
```
Label the `test` namespace with `istio-injection=enabled`:
```bash
kubectl label namespace test istio-injection=enabled
```
### Run GA and Canary Deployments
Apply the podinfo GA and Canary deployments and service:
```bash
kubectl -n test apply -f ./deploy/istio-v1alpha3/ga-dep.yaml,./deploy/istio-v1alpha3/canary-dep.yaml,./deploy/istio-v1alpha3/svc.yaml
```
Apply the istio destination rule, virtual service and gateway:
```bash
kubectl -n test apply -f ./deploy/istio-v1alpha3/istio-destination-rule.yaml
kubectl -n test apply -f ./deploy/istio-v1alpha3/istio-virtual-service.yaml
kubectl -n test apply -f ./deploy/istio-v1alpha3/istio-gateway.yaml
```
Create a `loadtest` pod for testing:
```bash
kubectl -n test run -i --rm --tty loadtest --image=stefanprodan/loadtest --restart=Never -- sh
```
Start the load test:
```bash
hey -n 1000000 -c 2 -q 5 http://podinfo.test:9898/version
```
**Initial state**
All traffic is routed to the GA deployment:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: podinfo
namespace: test
spec:
hosts:
- podinfo
- podinfo.co.uk
gateways:
- mesh
- podinfo-gateway
http:
- route:
- destination:
name: podinfo.test
subset: canary
weight: 0
- destination:
name: podinfo.test
subset: ga
weight: 100
```
![s1](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/istio-c-s1.png)
**Canary warm-up**
Route 10% of the traffic to the canary deployment:
```yaml
http:
- route:
- destination:
name: podinfo.test
subset: canary
weight: 10
- destination:
name: podinfo.test
subset: ga
weight: 90
```
![s2](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/istio-c-s2.png)
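In a VirtualService the route weights are percentages applied per request. A minimal Python sketch (an illustration only, not Istio's implementation) of how the 10/90 split above behaves over many requests:

```python
import random

def pick_subset(weights, rng):
    """Pick a subset according to percentage weights that sum to 100."""
    r = rng.uniform(0, 100)
    acc = 0.0
    for subset, weight in weights.items():
        acc += weight
        if r < acc:
            return subset
    return subset  # floating-point guard for r == 100.0

rng = random.Random(42)
counts = {"ga": 0, "canary": 0}
for _ in range(10_000):
    counts[pick_subset({"canary": 10, "ga": 90}, rng)] += 1

print(counts)
```

With these weights roughly one request in ten reaches the canary subset.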
**Canary promotion**
Increase the canary traffic to 60%:
```yaml
http:
- route:
- destination:
name: podinfo.test
subset: canary
weight: 60
- destination:
name: podinfo.test
subset: ga
weight: 40
```
![s3](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/istio-c-s3.png)
Full promotion, 100% of the traffic to the canary:
```yaml
http:
- route:
- destination:
name: podinfo.test
subset: canary
weight: 100
- destination:
name: podinfo.test
subset: ga
weight: 0
```
![s4](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/istio-c-s4.png)
Measure request latency for each deployment:
![s5](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/istio-c-s5.png)
Observe the traffic shift with Scope:
![s0](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/screens/istio-c-s0.png)
### Applying GitOps
![gitops](https://github.com/stefanprodan/k8s-podinfo/blob/master/docs/diagrams/istio-gitops.png)
Prerequisites for automating Istio canary deployments:
* create a cluster config Git repo that contains the desired state of your cluster
* keep the GA and Canary deployment definitions in Git
* keep the Istio destination rule, virtual service and gateway definitions in Git
* any changes to the above resources are performed via `git commit` instead of `kubectl apply`
Assuming that the GA is version `0.1.0` and the Canary is at `0.2.0`, you would probably
want to automate the deployment of patches for 0.1.x and 0.2.x.
Using Weave Cloud you can define a GitOps pipeline that will continuously monitor for new patches
and will apply them on both GA and Canary deployments using Weave Flux filters:
* `0.1.*` for GA
* `0.2.*` for Canary
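The filters are glob-style patterns matched against image tags. A quick Python illustration of the intent (Flux's actual filter semantics may differ in detail):

```python
from fnmatch import fnmatch

tags = ["0.1.3", "0.1.4", "0.2.0", "0.2.1", "1.0.0"]

# Tags that would be rolled out to each deployment
ga_patches = [t for t in tags if fnmatch(t, "0.1.*")]
canary_patches = [t for t in tags if fnmatch(t, "0.2.*")]

print(ga_patches)      # ['0.1.3', '0.1.4']
print(canary_patches)  # ['0.2.0', '0.2.1']
```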
Let's assume that, while monitoring the request latency graph, you've found a performance issue on the Canary:
for some reason the Canary is responding slower than the GA.
CD GitOps pipeline steps:
* An engineer fixes the latency issue and cuts a new release by tagging the master branch as 0.2.1
* GitHub notifies GCP Container Builder that a new tag has been committed
* GCP Container Builder builds the Docker image, tags it as 0.2.1 and pushes it to Google Container Registry
* Weave Flux detects the new tag on GCR and updates the Canary deployment definition
* Weave Flux commits the Canary deployment definition to GitHub in the cluster repo
* Weave Flux triggers a rolling update of the Canary deployment
* Weave Cloud sends a Slack notification that the 0.2.1 patch has been released
Once the Canary is fixed you can keep shifting traffic away from the GA by modifying the weight settings
and committing the changes in Git. Weave Cloud will detect that the cluster state is out of sync with the
desired state described in Git and will apply the changes.
If you notice that the Canary doesn't behave well under load, you can revert the changes in Git and
Weave Flux will undo the weight settings by applying the desired state from Git to the cluster.
Keep iterating on the Canary code until the SLA is on a par with the GA release.

docs/diagrams/flux-helm.png Normal file

@@ -9,5 +9,6 @@ that showcases best practices of running microservices in Kubernetes.
* [Horizontal Pod Auto-scaling](2-autoscaling.md)
* [Monitoring and alerting with Prometheus](3-monitoring.md)
* [StatefulSets with local persistent volumes](4-statefulsets.md)
* [Canary Deployments and A/B Testing](5-canary.md)
* [Expose Kubernetes services over HTTPS with Ngrok](6-ngrok.md)
* [A/B Testing with Ambassador API Gateway](5-canary.md)
* [Canary Deployments with Istio](7-istio.md)


@@ -3,7 +3,7 @@ entries:
ambassador:
- apiVersion: v1
appVersion: 0.29.0
created: 2018-03-25T11:44:07.51723005+03:00
created: 2018-08-21T18:51:24.168305347+03:00
description: A Helm chart for Datawire Ambassador
digest: a30c8cb38e696b09fda8269ad8465ce6fec6100cfc108ca85ecbc85913ca5c7f
engine: gotpl
@@ -19,29 +19,103 @@ entries:
grafana:
- apiVersion: v1
appVersion: "1.0"
created: 2018-03-25T11:44:07.518148658+03:00
created: 2018-08-21T18:51:24.169038265+03:00
description: A Helm chart for Kubernetes
digest: abdcadc5cddcb7c015aa5bb64e59bfa246774ad9243b3eb3c2a814abb38f2776
digest: e5b37ccdb6c477e36448cb1c1e02f35bbad5c67d3bbff712736cbdf21b48dd8c
name: grafana
urls:
- https://stefanprodan.github.io/k8s-podinfo/grafana-0.1.0.tgz
version: 0.1.0
loadtest:
- apiVersion: v1
appVersion: "1.0"
created: 2018-08-21T18:51:24.169257472+03:00
description: Hey load test Helm chart for Kubernetes
digest: b9fc7ca83ae2c669a65d6cecf1ca4cf729b9db535179aab1d57c49bbeefd11b9
name: loadtest
urls:
- https://stefanprodan.github.io/k8s-podinfo/loadtest-0.1.0.tgz
version: 0.1.0
ngrok:
- apiVersion: v1
appVersion: "1.0"
created: 2018-03-25T11:44:07.518483193+03:00
created: 2018-08-21T18:51:24.169542029+03:00
description: A Ngrok Helm chart for Kubernetes
digest: 50036d831c06f55ef0f3a865613489d341f425b51f88bf25fe2005708ea24df5
digest: 7bf5ed2ef63ccd5efb76bcd9a086b04816a162c51d6ab592bccf58c283acd2ea
name: ngrok
urls:
- https://stefanprodan.github.io/k8s-podinfo/ngrok-0.1.0.tgz
version: 0.1.0
podinfo:
- apiVersion: v1
appVersion: 0.2.1
created: 2018-03-25T11:44:07.518906367+03:00
appVersion: 1.0.0
created: 2018-08-21T18:51:24.173785478+03:00
description: Podinfo Helm chart for Kubernetes
digest: f762207915a73faee72683b8ef8ac53da9db2dcbcf2107b2479d11beeb8f661f
digest: 82068727ba5b552341b14a980e954e27a8517f0ef76aab314c160b0f075e6de4
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-1.0.0.tgz
version: 1.0.0
- apiVersion: v1
appVersion: 0.6.0
created: 2018-08-21T18:51:24.173287096+03:00
description: Podinfo Helm chart for Kubernetes
digest: bd25a710eddb3985d3bd921a11022b5c68a04d37cf93a1a4aab17eeda35aa2f8
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-0.2.2.tgz
version: 0.2.2
- apiVersion: v1
appVersion: 0.5.1
created: 2018-08-21T18:51:24.17260811+03:00
description: Podinfo Helm chart for Kubernetes
digest: 631ca3e2db5553541a50b625f538e6a1f2a103c13aa8148fdd38baf2519e5235
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-0.2.1.tgz
version: 0.2.1
- apiVersion: v1
appVersion: 0.5.0
created: 2018-08-21T18:51:24.171814296+03:00
description: Podinfo Helm chart for Kubernetes
digest: dfe7cf44aef0d170549918b00966422a07e7611f9d0081fb34f5b5beb0641c00
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-0.2.0.tgz
version: 0.2.0
- apiVersion: v1
appVersion: 0.3.0
created: 2018-08-21T18:51:24.17094332+03:00
description: Podinfo Helm chart for Kubernetes
digest: 4865a2d8b269cf453935cda9661c2efb82c16411471f8c11221a6d03d9bb58b1
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
@@ -53,4 +127,21 @@ entries:
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-0.1.0.tgz
version: 0.1.0
generated: 2018-03-25T11:44:07.516193733+03:00
podinfo-istio:
- apiVersion: v1
appVersion: 0.6.0
created: 2018-08-21T18:51:24.174402168+03:00
description: Podinfo Helm chart for Istio
digest: f12f8aa1eca1328e9eaa30bd757f6ed3ff97205e2bf016a47265bc2de6a63d8f
engine: gotpl
home: https://github.com/stefanprodan/k8s-podinfo
maintainers:
- email: stefanprodan@users.noreply.github.com
name: stefanprodan
name: podinfo-istio
sources:
- https://github.com/stefanprodan/k8s-podinfo
urls:
- https://stefanprodan.github.io/k8s-podinfo/podinfo-istio-0.1.0.tgz
version: 0.1.0
generated: 2018-08-21T18:51:24.167544997+03:00

docs/loadtest-0.1.0.tgz Normal file
docs/podinfo-0.2.0.tgz Normal file
docs/podinfo-0.2.1.tgz Normal file
docs/podinfo-0.2.2.tgz Normal file
docs/podinfo-1.0.0.tgz Normal file
Some files were not shown because too many files have changed in this diff.