From c4d9e6b3e14b0eb826ffd9dafa6cc903940aa611 Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Fri, 17 Jan 2020 04:45:06 -0600 Subject: [PATCH 01/16] Update deployment scripts to install Helm 3 --- prepare-vms/lib/commands.sh | 12 +++--------- 1 file changed, 3 insertions(+), 9 deletions(-) diff --git a/prepare-vms/lib/commands.sh b/prepare-vms/lib/commands.sh index 68904fa5..71bcf643 100644 --- a/prepare-vms/lib/commands.sh +++ b/prepare-vms/lib/commands.sh @@ -242,7 +242,7 @@ EOF" # Install helm pssh " if [ ! -x /usr/local/bin/helm ]; then - curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash && + curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 | sudo bash && helm completion bash | sudo tee /etc/bash_completion.d/helm fi" @@ -533,14 +533,8 @@ _cmd_helmprom() { need_tag pssh " if i_am_first_node; then - kubectl -n kube-system get serviceaccount helm || - kubectl -n kube-system create serviceaccount helm - sudo -u docker -H helm init --service-account helm - kubectl get clusterrolebinding helm-can-do-everything || - kubectl create clusterrolebinding helm-can-do-everything \ - --clusterrole=cluster-admin \ - --serviceaccount=kube-system:helm - sudo -u docker -H helm upgrade --install prometheus stable/prometheus \ + sudo -u docker -H helm repo add stable https://kubernetes-charts.storage.googleapis.com/ + sudo -u docker -H helm install prometheus stable/prometheus \ --namespace kube-system \ --set server.service.type=NodePort \ --set server.service.nodePort=30090 \ From 52bafdb57e34acdd0f8e6b86848fc704e380cfa5 Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Fri, 17 Jan 2020 08:21:23 -0600 Subject: [PATCH 02/16] Update Helm chapter to Helm 3 --- slides/k8s/helm.md | 327 +++++++++++++++++++++++++++++++++++++++------ 1 file changed, 284 insertions(+), 43 deletions(-) diff --git a/slides/k8s/helm.md b/slides/k8s/helm.md index 5c895daa..253a23b6 100644 --- a/slides/k8s/helm.md +++ 
b/slides/k8s/helm.md
@@ -22,9 +22,9 @@

- `helm` is a CLI tool

-- `tiller` is its companion server-side component
+- It is used to find, install, and upgrade *charts*

-- A "chart" is an archive containing templatized YAML bundles
+- A chart is an archive containing templatized YAML bundles

- Charts are versioned

@@ -32,6 +32,90 @@

---

+## Differences between charts and packages
+
+- A package (deb, rpm...) contains binaries, libraries, etc.
+
+- A chart contains YAML manifests
+
+  (the binaries, libraries, etc. are in the images referenced by the chart)
+
+- On most distributions, a package can only be installed once
+
+  (installing another version replaces the installed one)
+
+- A chart can be installed multiple times
+
+- Each installation is called a *release*
+
+- This allows us to install e.g. 10 instances of MongoDB
+
+  (with potentially different versions and configurations)
+
+---
+
+class: extra-details
+
+## Wait a minute ...
+
+*But, on my Debian system, I have Python 2 **and** Python 3.
+Also, I have multiple versions of the Postgres database engine!* + +Yes! + +But they have different package names: + +- `python2.7`, `python3.8` + +- `postgresql-10`, `postgresql-11` + +Good to know: the Postgres package in Debian includes +provisions to deploy multiple Postgres servers on the +same system, but it's an exception (and it's a lot of +work done by the package maintainer, not by the `dpkg` +or `apt` tools). + +--- + +## Helm 2 vs Helm 3 + +- Helm 3 was released [November 13, 2019](https://helm.sh/blog/helm-3-released/) + +- Charts remain compatible between Helm 2 and Helm 3 + +- The CLI is very similar (with minor changes to some commands) + +- The main difference is that Helm 2 uses `tiller`, a server-side component + +- Helm 3 doesn't use `tiller` at all, making it simpler (yay!) + +--- + +class: extra-details + +## With or without `tiller` + +- With Helm 3: + + - the `helm` CLI communicates directly with the Kubernetes API + + - it creates resources (deployments, services...) with our credentials + +- With Helm 2: + + - the `helm` CLI communicates with `tiller`, telling `tiller` what to do + + - `tiller` then communicates with the Kubernetes API, using its own credentials + +- This indirect model caused significant permissions headaches + + (`tiller` required very broad permissions to function) + +- `tiller` was removed in Helm 3 to simplify the security aspects + +--- + ## Installing Helm - If the `helm` CLI is not installed in your environment, install it @@ -45,14 +129,21 @@ - If it's not installed, run the following command: ```bash - curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash + curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \ + | bash ``` ] +(To install Helm 2, replace `get-helm-3` with `get`.) + --- -## Installing Tiller +class: extra-details + +## Only if using Helm 2 ... 
+
+- We need to install Tiller and give it some permissions

- Tiller is composed of a *service* and a *deployment* in the `kube-system` namespace

@@ -67,8 +158,6 @@

]

-If Tiller was already installed, don't worry: this won't break it.
-
At the end of the install process, you will see:

```
@@ -77,9 +166,11 @@ Happy Helming!

---

-## Fix account permissions
+class: extra-details

-- Helm permission model requires us to tweak permissions
+## Only if using Helm 2 ...
+
+- Tiller needs permissions to create Kubernetes resources

- In a more realistic deployment, you might create per-user or per-team
service accounts, roles, and role bindings

@@ -92,6 +183,7 @@ Happy Helming!
     --clusterrole=cluster-admin --serviceaccount=kube-system:default
  ```
+
]

(Defining the exact roles and permissions on your cluster requires
@@ -100,79 +192,228 @@ fine for personal and development clusters.)

---

-## View available charts
+## Charts and repositories

-- A public repo is pre-configured when installing Helm
+- A *repository* (or repo for short) is a collection of charts

-- We can view available charts with `helm search` (and an optional keyword)
+- It's just a bunch of files
+
+  (they can be hosted by a static HTTP server, or in a local directory)
+
+- We can add "repos" to Helm, giving them a nickname
+
+- The nickname is used when referring to charts on that repo
+
+  (for instance, if we try to install `hello/world`, that
+  means the chart `world` on the repo `hello`; and that repo
+  `hello` might be something like https://blahblah.hello.io/charts/)
+
+---
+
+## Managing repositories
+
+- Let's check what repositories we have, and add the `stable` repo
+
+  (the `stable` repo contains a set of official-ish charts)

.exercise[

-- View all available charts:
+- List our repos:
  ```bash
-  helm search
+  helm repo list
  ```

-- View charts related to `prometheus`:
+- Add the `stable` repo:
  ```bash
-  helm search prometheus
+  helm repo add stable https://kubernetes-charts.storage.googleapis.com/
  ```

]
+Adding a repo can take a few seconds (it downloads the list of charts from the repo).
+
+It's OK to add a repo that already exists (it will merely update it).
+
---

-## Install a chart
+## Search available charts

-- Most charts use `LoadBalancer` service types by default
+- We can search available charts with `helm search`

-- Most charts require persistent volumes to store data
+- We need to specify where to search (only our repos, or Helm Hub)

-- We need to relax these requirements a bit
+- Let's search for all charts mentioning tomcat!

.exercise[

-- Install the Prometheus metrics collector on our cluster:
+- Search for tomcat in the repo that we added earlier:
  ```bash
-  helm install stable/prometheus \
-      --set server.service.type=NodePort \
-      --set server.persistentVolume.enabled=false
+  helm search repo tomcat
+  ```
+
+- Search for tomcat on the Helm Hub:
+  ```bash
+  helm search hub tomcat
  ```

]

-Where do these `--set` options come from?
+[Helm Hub](https://hub.helm.sh/) indexes many repos, using the [Monocular](https://github.com/helm/monocular) server.

---

-## Inspecting a chart
+## Charts and releases

-- `helm inspect` shows details about a chart (including available options)
+- "Installing a chart" means creating a *release*
+
+- We need to name that release
+
+  (or use the `--generate-name` flag to get Helm to generate one for us)

.exercise[

-- See the metadata and all available options for `stable/prometheus`:
+- Install the tomcat chart that we found earlier:
  ```bash
-  helm inspect stable/prometheus
+  helm install java4ever stable/tomcat
  ```

-]
-
-The chart's metadata includes a URL to the project's home page.
-
-(Sometimes it conveniently points to the documentation for the chart.)
- ---- - -## Viewing installed charts - -- Helm keeps track of what we've installed - -.exercise[ - -- List installed Helm charts: +- List the releases: ```bash helm list ``` ] + +--- + +class: extra-details + +## Searching and installing with Helm 2 + +- Helm 2 doesn't have support for the Helm Hub + +- The `helm search` command only takes a search string argument + + (e.g. `helm search tomcat`) + +- With Helm 2, the name is optional: + + `helm install stable/tomcat` will automatically generate a name + + `helm install --name java4ever stable/tomcat` will specify a name + +--- + +## Viewing resources of a release + +- This specific chart labels all its resources with a `release` label + +- We can use a selector to see these resources + +.exercise[ + +- List all the resources created by this release: + ```bash + kuectl get all --selector=release=java4ever + ``` + +] + +Note: this `release` label wasn't added automatically by Helm. +
+It is defined in that chart. In other words, not all charts will provide this label. + +--- + +## Configuring a release + +- By default, `stable/tomcat` creates a service of type `LoadBalancer` + +- We would like to change that to a `NodePort` + +- We could use `kubectl edit service java4ever-tomcat`, but ... + + ... our changes would get overwritten next time we update that chart! + +- Instead, we are going to *set a value* + +- Values are parameters that the chart can use to change its behavior + +- Values have default values + +- Each chart is free to define its own values and their defaults + +--- + +## Checking possible values + +- We can inspect a chart with `helm show` or `helm inspect` + +.exercise[ + +- Look at the README for tomcat: + ```bash + helm show readme stable/tomcat + ``` + +- Look at the values and their defaults: + ```bash + helm show values stable/tomcat + ``` + +] + +The `values` may or may not have useful comments. + +The `readme` may or may not have (accurate) explanations for the values. + +(If we're unlucky, there won't be any indication about how to use the values!) + +--- + +## Setting values + +- Values can be set when installing a chart, or when upgrading it + +- We are going to update `java4ever` to change the type of the service + +.exercise[ + +- Update `java4ever`: + ```bash + helm upgrade java4ever stable/tomcat --set service.type=NodePort + ``` + +] + +Note that we have to specify the chart that we use (`stable/tomcat`), +even if we just want to update some values. + +We can set multiple values. If we want to set many values, we can use `-f`/`--values` and pass a YAML file with all the values. + +All unspecified values will take the default values defined in the chart. 
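When several values need to change, setting them one by one with `--set` gets unwieldy; they can be grouped in a YAML file instead. A hypothetical sketch (`service.type` is used earlier in this chapter, but `service.nodePort` is an assumption — check `helm show values stable/tomcat` for the chart's real keys):

```yaml
# java4ever-values.yaml (hypothetical example)
# Key names are assumptions; verify them with `helm show values stable/tomcat`.
service:
  type: NodePort
  nodePort: 30080
```

The release would then be updated with `helm upgrade java4ever stable/tomcat -f java4ever-values.yaml`; any key not listed in the file falls back to the chart's default.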
+ +--- + +## Connecting to tomcat + +- Let's check the tomcat server that we just installed + +- Note: its readiness probe has a 60s delay + + (so it will take 60s after the initial deployment before the service works) + +.exercise[ + +- Check the node port allocated to the service: + ```bash + kubectl get service java4ever-tomcat + PORT=$(kubectl get service java4ever-tomcat -o jsonpath={..nodePort}) + ``` + +- Connect to it, checking the demo app on `/sample/`: + ```bash + curl localhost:$PORT/sample/ + ``` + +] \ No newline at end of file From 1c6c76162f7893a1802e49b14c1cfb9f17e6a57d Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Fri, 17 Jan 2020 10:11:12 -0600 Subject: [PATCH 03/16] Add link to zip file --- slides/markmaker.py | 11 +++++++++++ slides/shared/prereqs.md | 6 ++++++ 2 files changed, 17 insertions(+) diff --git a/slides/markmaker.py b/slides/markmaker.py index c0eb15b8..aad08012 100755 --- a/slides/markmaker.py +++ b/slides/markmaker.py @@ -89,6 +89,15 @@ def flatten(titles): def generatefromyaml(manifest, filename): manifest = yaml.safe_load(manifest) + if "zip" not in manifest: + if manifest["slides"].endswith('/'): + manifest["zip"] = manifest["slides"] + "slides.zip" + else: + manifest["zip"] = manifest["slides"] + "/slides.zip" + + if "html" not in manifest: + manifest["html"] = filename + ".html" + markdown, titles = processchapter(manifest["chapters"], filename) logging.debug("Found {} titles.".format(len(titles))) toc = gentoc(titles) @@ -117,6 +126,8 @@ def generatefromyaml(manifest, filename): html = html.replace("@@CHAT@@", manifest["chat"]) html = html.replace("@@GITREPO@@", manifest["gitrepo"]) html = html.replace("@@SLIDES@@", manifest["slides"]) + html = html.replace("@@ZIP@@", manifest["zip"]) + html = html.replace("@@HTML@@", manifest["html"]) html = html.replace("@@TITLE@@", manifest["title"].replace("\n", " ")) html = html.replace("@@SLIDENUMBERPREFIX@@", manifest.get("slidenumberprefix", "")) return html diff --git 
a/slides/shared/prereqs.md b/slides/shared/prereqs.md index 0c66c22c..533a80d4 100644 --- a/slides/shared/prereqs.md +++ b/slides/shared/prereqs.md @@ -72,6 +72,12 @@ Misattributed to Benjamin Franklin - Slides will remain online so you can review them later if needed +- You can download the slides using that URL: + + @@ZIP@@ + + (then open the file `@@HTML@@`) + --- class: in-person From 3ea6b730c8ec51b4167fe1052bf7d440104f588c Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Fri, 17 Jan 2020 11:46:58 -0600 Subject: [PATCH 04/16] Update the Prometheus install instructions --- slides/k8s/prometheus.md | 42 ++++++++++++++++++++++++++-------------- 1 file changed, 28 insertions(+), 14 deletions(-) diff --git a/slides/k8s/prometheus.md b/slides/k8s/prometheus.md index 049bea18..3052c61e 100644 --- a/slides/k8s/prometheus.md +++ b/slides/k8s/prometheus.md @@ -204,32 +204,46 @@ We need to: ## Step 1: install Helm -- If we already installed Helm earlier, these commands won't break anything +- If we already installed Helm earlier, this command won't break anything -.exercice[ +.exercise[ -- Install Tiller (Helm's server-side component) on our cluster: +- Install the Helm CLI: ```bash - helm init - ``` - -- Give Tiller permission to deploy things on our cluster: - ```bash - kubectl create clusterrolebinding add-on-cluster-admin \ - --clusterrole=cluster-admin --serviceaccount=kube-system:default + curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \ + | bash ``` ] --- -## Step 2: install Prometheus +## Step 2: add the `stable` repo -- Skip this if we already installed Prometheus earlier +- This will add the repository containing the chart for Prometheus - (in doubt, check with `helm list`) +- This command is idempotent -.exercice[ + (it won't break anything if the repository was already added) + +.exercise[ + +- Add the repository: + ```bash + helm repo add stable https://kubernetes-charts.storage.googleapis.com/ + ``` + +] + +--- + +## 
Step 3: install Prometheus + +- The following command, just like the previous ones, is idempotent + + (it won't error out if Prometheus is already installed) + +.exercise[ - Install Prometheus on our cluster: ```bash From cff9cbdfbb5e4351b9bb67eadd87830d8771152d Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Fri, 17 Jan 2020 12:01:20 -0600 Subject: [PATCH 05/16] Add slide about versioning and cadence --- slides/k8s/versions-k8s.md | 30 +++++++++++++++++++++++++++--- 1 file changed, 27 insertions(+), 3 deletions(-) diff --git a/slides/k8s/versions-k8s.md b/slides/k8s/versions-k8s.md index 8f28c203..9c856b68 100644 --- a/slides/k8s/versions-k8s.md +++ b/slides/k8s/versions-k8s.md @@ -1,7 +1,7 @@ ## Versions installed -- Kubernetes 1.15.3 -- Docker Engine 19.03.1 +- Kubernetes 1.17.1 +- Docker Engine 19.03.5 - Docker Compose 1.24.1 @@ -23,6 +23,10 @@ class: extra-details ## Kubernetes and Docker compatibility +- Kubernetes 1.17 validates Docker Engine version [up to 19.03](https://github.com/kubernetes/kubernetes/pull/84476) + + *however ...* + - Kubernetes 1.15 validates Docker Engine versions [up to 18.09](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#dependencies)
(the latest version when Kubernetes 1.14 was released) @@ -40,5 +44,25 @@ class: extra-details - "Validates" = continuous integration builds with very extensive (and expensive) testing - The Docker API is versioned, and offers strong backward-compatibility +
(if a client uses e.g. API v1.25, the Docker Engine will keep behaving the same way)

-  (If a client uses e.g. API v1.25, the Docker Engine will keep behaving the same way)
+---
+
+## Kubernetes versioning and cadence
+
+- Kubernetes versions are expressed using *semantic versioning*
+
+  (a Kubernetes version is expressed as MAJOR.MINOR.PATCH)
+
+- There is a new *patch* release whenever needed
+
+  (generally, there is about [2 to 4 weeks](https://github.com/kubernetes/sig-release/blob/master/release-engineering/role-handbooks/patch-release-team.md#release-timing) between patch releases,
+  except when a critical bug or vulnerability is found:
+  in that case, a patch release will follow as fast as possible)
+
+- There is a new *minor* release approximately every 3 months
+
+- At any given time, 3 *minor* releases are maintained
+
+  (in other words, a given *minor* release is maintained for about 9 months)

From 1f826d79935e9f74e251080d5ea54048abf84010 Mon Sep 17 00:00:00 2001
From: Jerome Petazzoni
Date: Fri, 17 Jan 2020 12:28:27 -0600
Subject: [PATCH 06/16] Add slide about version skew

---
 slides/k8s/versions-k8s.md | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/slides/k8s/versions-k8s.md b/slides/k8s/versions-k8s.md
index 9c856b68..2daf88f8 100644
--- a/slides/k8s/versions-k8s.md
+++ b/slides/k8s/versions-k8s.md
@@ -66,3 +66,25 @@ class: extra-details

- At any given time, 3 *minor* releases are maintained

  (in other words, a given *minor* release is maintained for about 9 months)
+
+---
+
+## Kubernetes version compatibility
+
+*Should my version of `kubectl` match exactly my cluster version?*
+
+- `kubectl` can be up to one minor version older or newer than the cluster
+
+  (if cluster version is 1.15.X, `kubectl` can be 1.14.Y, 1.15.Y, or 1.16.Y)
+
+- Things *might* work with larger version differences
+
+  (but they will probably fail randomly, so be careful)
+
+- This is an example of an error indicating version compatibility issues:
+  ```
error: SchemaError(io.k8s.api.autoscaling.v2beta1.ExternalMetricStatus):
+  invalid object doesn't have additional properties
+  ```
+
+- Check [the documentation](https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl) for the whole story about compatibility

From 328a2edaaf18129f18c0a9c1252df2117fe7acf9 Mon Sep 17 00:00:00 2001
From: Jerome Petazzoni
Date: Fri, 17 Jan 2020 14:17:18 -0600
Subject: [PATCH 07/16] Add slide about number of nodes in a cluster

---
 slides/k8s/concepts-k8s.md | 24 ++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/slides/k8s/concepts-k8s.md b/slides/k8s/concepts-k8s.md
index 994f7ce8..2d07da54 100644
--- a/slides/k8s/concepts-k8s.md
+++ b/slides/k8s/concepts-k8s.md
@@ -199,6 +199,30 @@ class: extra-details

class: extra-details

+## How many nodes should a cluster have?
+
+- There is no particular constraint
+
+  (no need to have an odd number of nodes for quorum)
+
+- A cluster can have zero nodes
+
+  (but then it won't be able to start any pods)
+
+- For testing and development, having a single node is fine
+
+- For production, make sure that you have extra capacity
+
+  (so that your workload still fits if you lose a node or a group of nodes)
+
+- Kubernetes is tested with [up to 5000 nodes](https://kubernetes.io/docs/setup/best-practices/cluster-large/)
+
+  (however, running a cluster of that size requires a lot of tuning)
+
+---
+
+class: extra-details
+
## Do we need to run Docker at all?

No!
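The "extra capacity" advice above can be made concrete with a back-of-the-envelope N+1 check. This is just a sketch with made-up numbers (real capacity planning would look at the allocatable resources reported by `kubectl describe node`):

```shell
# N+1 capacity check: does the workload still fit if we lose one node?
# All numbers below are illustrative, not taken from a real cluster.
nodes=5                # nodes in the cluster
node_capacity=32       # allocatable capacity per node (arbitrary units)
workload=100           # total capacity requested by the workload

surviving=$(( (nodes - 1) * node_capacity ))
if [ "$workload" -le "$surviving" ]; then
  echo "workload still fits after losing one node"
else
  echo "not enough headroom"
fi
```

The same arithmetic extends to losing a group of nodes (e.g. a whole availability zone): subtract the group's capacity instead of a single node's.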
From 3e9a93957859a8cdaa2ec82f18958cb038924c95 Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Fri, 17 Jan 2020 17:07:43 -0600 Subject: [PATCH 08/16] Add traffic split / canary for Traefik --- k8s/canary.yaml | 21 ++++++ slides/k8s/ingress.md | 161 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 182 insertions(+) create mode 100644 k8s/canary.yaml diff --git a/k8s/canary.yaml b/k8s/canary.yaml new file mode 100644 index 00000000..88045150 --- /dev/null +++ b/k8s/canary.yaml @@ -0,0 +1,21 @@ +apiVersion: networking.k8s.io/v1beta1 +kind: Ingress +metadata: + name: whatever + annotations: + traefik.ingress.kubernetes.io/service-weights: | + whatever: 90% + whatever-new: 10% +spec: + rules: + - host: whatever.A.B.C.D.nip.io + http: + paths: + - path: / + backend: + serviceName: whatever + servicePort: 80 + - path: / + backend: + serviceName: whatever-new + servicePort: 80 diff --git a/slides/k8s/ingress.md b/slides/k8s/ingress.md index 7b12100a..31511f53 100644 --- a/slides/k8s/ingress.md +++ b/slides/k8s/ingress.md @@ -524,3 +524,164 @@ spec: - This should eventually stabilize (remember that ingresses are currently `apiVersion: networking.k8s.io/v1beta1`) + +--- + +## A special feature in action + +- We're going to see how to implement *canary releases* with Traefik + +- This feature is available on multiple ingress controllers + +- ... 
But it is configured very differently on each of them

---

## Canary releases

- A *canary release* (or canary launch or canary deployment) is a release that will process only a small fraction of the workload

- Example 1: a canary release is deployed for a microservice

  - 1% of all requests (sampled randomly) are sent to the canary
  - the remaining 99% are sent to the normal release

- Example 2: a canary release is deployed for a web app

  - 1% of users see the canary release
  - the remaining 99% are sent to the normal release

- We're going to implement example 1 (per-request routing)

---

## Canary releases with Traefik

- We need to deploy the canary and expose it with a separate service

- Then, in the Ingress resource, we need:

  - multiple `paths` entries (one for each service, canary and normal)

  - an extra annotation indicating the weight of each service

- If we want, we can send requests to more than 2 services

- Let's send requests to our 3 cheesy services!

.exercise[

- Create the resource shown on the next slide

]

---

## The Ingress resource

.small[
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cheeseplate
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      cheddar: 50%
      wensleydale: 25%
      stilton: 25%
spec:
  rules:
  - host: cheeseplate.`A.B.C.D`.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: cheddar
          servicePort: 80
      - path: /
        backend:
          serviceName: wensleydale
          servicePort: 80
      - path: /
        backend:
          serviceName: stilton
          servicePort: 80
```
]

---

## Testing the canary

- Let's check the percentage of requests going to each service

.exercise[

- Continuously send HTTP requests to the new ingress:
  ```bash
  while sleep 0.1; do
    curl -s http://cheeseplate.A.B.C.D.nip.io/
  done
  ```

]

We should see a 50/25/25 request mix.
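Rather than eyeballing the output, the responses can be tallied. Here is a sketch of the counting pipeline — it assumes each service's response body identifies the cheese, and the input is simulated so the pipeline runs standalone:

```shell
# Tally which backend answered each request.
# In the real exercise, the input would be the output of the curl loop;
# we simulate a few responses here so the pipeline can run on its own.
responses='cheddar
wensleydale
cheddar
stilton
cheddar'
echo "$responses" | sort | uniq -c | sort -rn
```

With a long enough run against the real ingress, the counts should approach the configured 50/25/25 weights.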
+ +--- + +class: extra-details + +## Load balancing fairness + +Note: if we use odd request ratios, the load balancing algorithm might appear to be broken on a small scale (when sending a small number of requests), but on a large scale (with many requests) it will be fair. + +For instance, with a 11%/89% ratio, we can see 79 requests going to the 89%-weighted service, and then requests alternating between the two services; then 79 requests again, etc. + +--- + +class: extra-details + +## Other ingress controllers + +*Just to illustrate how different things are ...* + +- With the NGINX ingress controller: + + - define two ingress ressources +
+ (specifying rules with the same host+path) + + - add `nginx.ingress.kubernetes.io/canary` annotations on each + + +- With Linkerd2: + + - define two services + + - define an extra service for the weighted aggregate of the two + + - define a TrafficSplit (this is a CRD introduced by the SMI spec) + +--- + +class: extra-details + +## We need more than that + +What we saw is just one of the multiple building blocks that we need to achieve a canary release. + +We also need: + +- metrics (latency, performance ...) for our releases + +- automation to alter canary weights + + (increase canary weight if metrics look good; decrease otherwise) + +- a mechanism to manage the lifecycle of the canary releases + + (create them, promote them, delete them ...) + +For inspiration, check [flagger by Weave](https://github.com/weaveworks/flagger). From da9921d68abb3ee9877ba3fb10206c9a5e490fd4 Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Sat, 18 Jan 2020 02:36:41 -0600 Subject: [PATCH 09/16] Update explanations for canary --- slides/k8s/ingress.md | 25 ++++++++++++++++++++++--- 1 file changed, 22 insertions(+), 3 deletions(-) diff --git a/slides/k8s/ingress.md b/slides/k8s/ingress.md index 31511f53..a51c453b 100644 --- a/slides/k8s/ingress.md +++ b/slides/k8s/ingress.md @@ -541,16 +541,35 @@ spec: - A *canary release* (or canary launch or canary deployment) is a release that will process only a small fraction of the workload -- Example 1: a canary release is deployed for a microservice +- After deploying the canary, we compare its metrics to the normal release + +- If the metrics look good, the canary will progressively receive more traffic + + (until it gets 100% and becomes the new normal release) + +- If the metrics aren't good, the canary is automatically removed + +- When we deploy a bad release, only a tiny fraction of traffic is affected + +--- + +## Various ways to implement canary + +- Example 1: canary for a microservice - 1% of all requests (sampled randomly) are 
sent to the canary
  - the remaining 99% are sent to the normal release

-- Example 2: a canary release is deployed for a web app
+- Example 2: canary for a web app

-  - 1% of users see the canary release
+  - 1% of users are sent to the canary web site
   - the remaining 99% are sent to the normal release

+- Example 3: canary for shipping physical goods
+
+  - 1% of orders are shipped with the canary process
+  - the remaining 99% are shipped with the normal process
+
- We're going to implement example 1 (per-request routing)

---

From 7d6ab6974db10d746bdd2e16fae870c34093bf51 Mon Sep 17 00:00:00 2001
From: Jerome Petazzoni
Date: Sat, 18 Jan 2020 09:49:18 -0600
Subject: [PATCH 10/16] Big autopilot update

'keys' does not handle special keys (like ^J) anymore. Instead, we should use `key`, which will pass its entire argument to tmux, without any processing. It is therefore possible to do something like:

```key ^C```

Or

```key Escape```

Most (if not all) calls to special keys have been converted to use 'key' instead of 'keys'.

Action ```copypaste``` has been deprecated in favor of three separate actions:

```copy REGEX``` (searches the regex in the active pane, and if found, places it in an internal clipboard)

```paste``` (inserts the content of the clipboard as keystrokes)

```check``` (forces a status check)

Also, a 'tmux' command has been added.
It allows to do stuff like: ```tmux split-pane -v``` --- k8s/efk.yaml | 6 +++ slides/autopilot/autotest.py | 52 ++++++++++++++----- slides/containers/links.md | 13 ++++- slides/k8s/accessinternal.md | 4 +- slides/k8s/authn-authz.md | 2 +- slides/k8s/build-with-docker.md | 2 +- slides/k8s/build-with-kaniko.md | 2 +- slides/k8s/daemonset.md | 44 ++++++++++------ slides/k8s/dryrun.md | 2 + slides/k8s/horizontal-pod-autoscaler.md | 69 +++++++++++++++++++++++-- slides/k8s/kubectlexpose.md | 7 ++- slides/k8s/kubectlproxy.md | 2 +- slides/k8s/kubectlrun.md | 41 ++++++++++++++- slides/k8s/kubectlscale.md | 4 +- slides/k8s/kustomize.md | 7 +++ slides/k8s/logs-cli.md | 18 +++---- slides/k8s/netpol.md | 11 ++++ slides/k8s/ourapponkube.md | 2 +- slides/k8s/podsecuritypolicy.md | 33 ++++++++++++ slides/k8s/portworx.md | 12 ++--- slides/k8s/rollout.md | 12 ++--- slides/k8s/scalingdockercoins.md | 12 +++-- slides/k8s/shippingimages.md | 5 ++ slides/k8s/statefulsets.md | 2 +- slides/k8s/volumes.md | 25 +++++++++ slides/shared/composescale.md | 6 +-- slides/shared/sampleapp.md | 2 +- slides/swarm/creatingswarm.md | 2 +- slides/swarm/ipsec.md | 2 +- slides/swarm/logging.md | 4 +- 30 files changed, 323 insertions(+), 82 deletions(-) mode change 120000 => 100644 slides/containers/links.md diff --git a/k8s/efk.yaml b/k8s/efk.yaml index f2c9ea84..cb1e97d4 100644 --- a/k8s/efk.yaml +++ b/k8s/efk.yaml @@ -3,6 +3,7 @@ apiVersion: v1 kind: ServiceAccount metadata: name: fluentd + namespace: default --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole @@ -36,6 +37,7 @@ apiVersion: apps/v1 kind: DaemonSet metadata: name: fluentd + namespace: default labels: app: fluentd spec: @@ -95,6 +97,7 @@ metadata: labels: app: elasticsearch name: elasticsearch + namespace: default spec: selector: matchLabels: @@ -122,6 +125,7 @@ metadata: labels: app: elasticsearch name: elasticsearch + namespace: default spec: ports: - port: 9200 @@ -137,6 +141,7 @@ metadata: labels: app: kibana name: 
kibana + namespace: default spec: selector: matchLabels: @@ -160,6 +165,7 @@ metadata: labels: app: kibana name: kibana + namespace: default spec: ports: - port: 5601 diff --git a/slides/autopilot/autotest.py b/slides/autopilot/autotest.py index 760de79c..cdd3e603 100755 --- a/slides/autopilot/autotest.py +++ b/slides/autopilot/autotest.py @@ -26,6 +26,7 @@ IPADDR = None class State(object): def __init__(self): + self.clipboard = "" self.interactive = True self.verify_status = False self.simulate_type = True @@ -38,6 +39,7 @@ class State(object): def load(self): data = yaml.load(open("state.yaml")) + self.clipboard = str(data["clipboard"]) self.interactive = bool(data["interactive"]) self.verify_status = bool(data["verify_status"]) self.simulate_type = bool(data["simulate_type"]) @@ -51,6 +53,7 @@ class State(object): def save(self): with open("state.yaml", "w") as f: yaml.dump(dict( + clipboard=self.clipboard, interactive=self.interactive, verify_status=self.verify_status, simulate_type=self.simulate_type, @@ -85,9 +88,11 @@ class Snippet(object): # On single-line snippets, the data follows the method immediately if '\n' in content: self.method, self.data = content.split('\n', 1) - else: + self.data = self.data.strip() + elif ' ' in content: self.method, self.data = content.split(' ', 1) - self.data = self.data.strip() + else: + self.method, self.data = content, None self.next = None def __str__(self): @@ -186,7 +191,7 @@ def wait_for_prompt(): if last_line == "$": # This is a perfect opportunity to grab the node's IP address global IPADDR - IPADDR = re.findall("^\[(.*)\]", output, re.MULTILINE)[-1] + IPADDR = re.findall("\[(.*)\]", output, re.MULTILINE)[-1] return # When we are in an alpine container, the prompt will be "/ #" if last_line == "/ #": @@ -264,19 +269,31 @@ for slide in re.split("\n---?\n", content): slides.append(Slide(slide)) +# Send a single key. +# Useful for special keys, e.g. 
tmux interprets these strings: +# ^C (and all other sequences starting with a caret) +# Space +# ... and many others (check tmux manpage for details). +def send_key(data): + subprocess.check_call(["tmux", "send-keys", data]) + + +# Send multiple keys. +# If keystroke simulation is off, all keys are sent at once. +# If keystroke simulation is on, keys are sent one by one, with a delay between them. def send_keys(data): - if state.simulate_type and data[0] != '^': + if not state.simulate_type: + subprocess.check_call(["tmux", "send-keys", data]) + else: for key in data: if key == ";": key = "\\;" if key == "\n": if interruptible_sleep(1): return - subprocess.check_call(["tmux", "send-keys", key]) + send_key(key) if interruptible_sleep(0.15*random.random()): return if key == "\n": if interruptible_sleep(1): return - else: - subprocess.check_call(["tmux", "send-keys", data]) def capture_pane(): @@ -323,7 +340,10 @@ def check_bounds(): while True: state.save() slide = slides[state.slide] - snippet = slide.snippets[state.snippet-1] if state.snippet else None + if state.snippet and state.snippet <= len(slide.snippets): + snippet = slide.snippets[state.snippet-1] + else: + snippet = None click.clear() print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}] " "[switch_desktop:{}] [sync_slides:{}] [open_links:{}] [run_hidden:{}]" @@ -398,7 +418,9 @@ while True: continue method, data = snippet.method, snippet.data logging.info("Running with method {}: {}".format(method, data)) - if method == "keys": + if method == "key": + send_key(data) + elif method == "keys": send_keys(data) elif method == "bash" or (method == "hide" and state.run_hidden): # Make sure that we're ready @@ -421,7 +443,7 @@ while True: wait_for_prompt() # Verify return code check_exit_status() - elif method == "copypaste": + elif method == "copy": screen = capture_pane() matches = re.findall(data, screen, flags=re.DOTALL) if len(matches) == 0: @@ -430,8 +452,12 @@ while True: match = 
matches[-1] # Remove line breaks (like a screen copy paste would do) match = match.replace('\n', '') - send_keys(match + '\n') - # FIXME: we should factor out the "bash" method + logging.info("Copied {} to clipboard.".format(match)) + state.clipboard = match + elif method == "paste": + logging.info("Pasting {} from clipboard.".format(state.clipboard)) + send_keys(state.clipboard) + elif method == "check": wait_for_prompt() check_exit_status() elif method == "open": @@ -445,6 +471,8 @@ while True: if state.interactive: print("Press any key to continue to next step...") click.getchar() + elif method == "tmux": + subprocess.check_call(["tmux"] + data.split()) else: logging.warning("Unknown method {}: {!r}".format(method, data)) move_forward() diff --git a/slides/containers/links.md b/slides/containers/links.md deleted file mode 120000 index 3b5a5fbc..00000000 --- a/slides/containers/links.md +++ /dev/null @@ -1 +0,0 @@ -../swarm/links.md \ No newline at end of file diff --git a/slides/containers/links.md b/slides/containers/links.md new file mode 100644 index 00000000..f824ca98 --- /dev/null +++ b/slides/containers/links.md @@ -0,0 +1,12 @@ +# Links and resources + +- [Docker Community Slack](https://community.docker.com/registrations/groups/4316) +- [Docker Community Forums](https://forums.docker.com/) +- [Docker Hub](https://hub.docker.com) +- [Docker Blog](https://blog.docker.com/) +- [Docker documentation](https://docs.docker.com/) +- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) +- [Docker on Twitter](https://twitter.com/docker) +- [Play With Docker Hands-On Labs](https://training.play-with-docker.com/) + +.footnote[These slides (and future updates) are on → https://container.training/] diff --git a/slides/k8s/accessinternal.md b/slides/k8s/accessinternal.md index 437ffbf5..6767f44f 100644 --- a/slides/k8s/accessinternal.md +++ b/slides/k8s/accessinternal.md @@ -118,9 +118,9 @@ installed and set up `kubectl` to communicate with your 
cluster. - Terminate the port forwarder: diff --git a/slides/k8s/authn-authz.md b/slides/k8s/authn-authz.md index 57f2c965..a27b8424 100644 --- a/slides/k8s/authn-authz.md +++ b/slides/k8s/authn-authz.md @@ -547,7 +547,7 @@ It's important to note a couple of details in these flags... - Exit the container with `exit` or `^D` - + ] diff --git a/slides/k8s/build-with-docker.md b/slides/k8s/build-with-docker.md index ec2e9a20..8e73c042 100644 --- a/slides/k8s/build-with-docker.md +++ b/slides/k8s/build-with-docker.md @@ -109,7 +109,7 @@ spec: ] diff --git a/slides/k8s/build-with-kaniko.md b/slides/k8s/build-with-kaniko.md index be28a94a..6db7913b 100644 --- a/slides/k8s/build-with-kaniko.md +++ b/slides/k8s/build-with-kaniko.md @@ -174,7 +174,7 @@ spec: ] diff --git a/slides/k8s/daemonset.md b/slides/k8s/daemonset.md index 44e071f6..d6668748 100644 --- a/slides/k8s/daemonset.md +++ b/slides/k8s/daemonset.md @@ -110,20 +110,22 @@ ```bash vim rng.yml``` ```wait kind: Deployment``` ```keys /Deployment``` -```keys ^J``` +```key ^J``` ```keys cwDaemonSet``` -```keys ^[``` ] +```key ^[``` ] ```keys :wq``` -```keys ^J``` +```key ^J``` --> - Save, quit - Try to create our new resource: - ``` + ```bash kubectl apply -f rng.yml ``` + + ] -- @@ -501,11 +503,11 @@ be any interruption.* ] @@ -538,19 +540,18 @@ be any interruption.* .exercise[ -- Update the service to add `enabled: "yes"` to its selector: - ```bash - kubectl edit service rng - ``` +- Update the YAML manifest of the service + +- Add `enabled: "yes"` to its selector ] @@ -589,16 +590,25 @@ If we did everything correctly, the web UI shouldn't show any change. 
```bash POD=$(kubectl get pod -l app=rng,pod-template-hash -o name) kubectl logs --tail 1 --follow $POD - ``` (We should see a steady stream of HTTP logs) + + - In another window, remove the label from the pod: ```bash kubectl label pod -l app=rng,pod-template-hash enabled- ``` (The stream of HTTP logs should stop immediately) + + ] There might be a slight change in the web UI (since we removed a bit diff --git a/slides/k8s/dryrun.md b/slides/k8s/dryrun.md index f319c8b6..6078d9dd 100644 --- a/slides/k8s/dryrun.md +++ b/slides/k8s/dryrun.md @@ -162,6 +162,8 @@ Instead, it has the fields expected in a DaemonSet. kubectl diff -f web.yaml ``` + + ] Note: we don't need to specify `--validate=false` here. diff --git a/slides/k8s/horizontal-pod-autoscaler.md b/slides/k8s/horizontal-pod-autoscaler.md index 069c479d..04ead202 100644 --- a/slides/k8s/horizontal-pod-autoscaler.md +++ b/slides/k8s/horizontal-pod-autoscaler.md @@ -105,19 +105,36 @@ - Monitor pod CPU usage: ```bash - watch kubectl top pods + watch kubectl top pods -l app=busyhttp ``` + + - Monitor service latency: ```bash - httping http://`ClusterIP`/ + httping http://`$CLUSTERIP`/ ``` + + - Monitor cluster events: ```bash kubectl get events -w ``` + + ] --- @@ -130,9 +147,15 @@ - Send a lot of requests to the service, with a concurrency level of 3: ```bash - ab -c 3 -n 100000 http://`ClusterIP`/ + ab -c 3 -n 100000 http://`$CLUSTERIP`/ ``` + + ] The latency (reported by `httping`) should increase above 3s. @@ -193,6 +216,20 @@ This can also be set with `--cpu-percent=`. kubectl edit deployment busyhttp ``` + + - In the `containers` list, add the following block: ```yaml resources: @@ -243,3 +280,29 @@ This can also be set with `--cpu-percent=`. 
- The metrics provided by metrics server are standard; everything else is custom - For more details, see [this great blog post](https://medium.com/uptime-99/kubernetes-hpa-autoscaling-with-custom-and-external-metrics-da7f41ff7846) or [this talk](https://www.youtube.com/watch?v=gSiGFH4ZnS8) + +--- + +## Cleanup + +- Since `busyhttp` uses CPU cycles, let's stop it before moving on + +.exercise[ + +- Delete the `busyhttp` Deployment: + ```bash + kubectl delete deployment busyhttp + ``` + + + +] diff --git a/slides/k8s/kubectlexpose.md b/slides/k8s/kubectlexpose.md index 8c1ad167..5bd4b6fc 100644 --- a/slides/k8s/kubectlexpose.md +++ b/slides/k8s/kubectlexpose.md @@ -124,7 +124,10 @@ kubectl create service externalname k8s --external-name kubernetes.io kubectl get pods -w ``` - + - Create a deployment for this very lightweight HTTP server: ```bash @@ -191,6 +194,8 @@ kubectl create service externalname k8s --external-name kubernetes.io - Send a few requests: diff --git a/slides/k8s/kubectlproxy.md b/slides/k8s/kubectlproxy.md index fa7f7505..c50618a1 100644 --- a/slides/k8s/kubectlproxy.md +++ b/slides/k8s/kubectlproxy.md @@ -101,7 +101,7 @@ If we wanted to talk to the API, we would need to: - Terminate the proxy: diff --git a/slides/k8s/kubectlrun.md b/slides/k8s/kubectlrun.md index d16e9b5e..01fe08e6 100644 --- a/slides/k8s/kubectlrun.md +++ b/slides/k8s/kubectlrun.md @@ -154,6 +154,11 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m - Leave that command running, so that we can keep an eye on these logs + + ] --- @@ -206,11 +211,21 @@ We could! But the *deployment* would notice it right away, and scale back to the - Interrupt `kubectl logs` (with Ctrl-C) + + - Restart it: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` + + ] `kubectl logs` will warn us that multiple pods were found, and that it's showing us only one of them. @@ -235,10 +250,30 @@ Let's leave `kubectl logs` running while we keep exploring. 
watch kubectl get pods ``` + + - Destroy the pod currently shown by `kubectl logs`: ``` kubectl delete pod pingpong-xxxxxxxxxx-yyyyy ``` + + + ] --- @@ -307,7 +342,7 @@ Let's leave `kubectl logs` running while we keep exploring. - Create the Cron Job: ```bash - kubectl run --schedule="*/3 * * * *" --restart=OnFailure --image=alpine sleep 10 + kubectl run every3mins --schedule="*/3 * * * *" --restart=OnFailure --image=alpine sleep 10 ``` - Check the resource that was created: @@ -418,7 +453,7 @@ Let's leave `kubectl logs` running while we keep exploring. ] @@ -447,6 +482,8 @@ class: extra-details kubectl logs -l run=pingpong --tail 1 -f ``` + + ] We see a message like the following one: diff --git a/slides/k8s/kubectlscale.md b/slides/k8s/kubectlscale.md index 488a6448..e78136aa 100644 --- a/slides/k8s/kubectlscale.md +++ b/slides/k8s/kubectlscale.md @@ -12,9 +12,9 @@ - Now, create more `worker` replicas: diff --git a/slides/k8s/kustomize.md b/slides/k8s/kustomize.md index 664f82f3..0554fadf 100644 --- a/slides/k8s/kustomize.md +++ b/slides/k8s/kustomize.md @@ -97,6 +97,8 @@ ship init https://github.com/jpetazzo/kubercoins ``` + + ] --- @@ -189,6 +191,11 @@ kubectl logs deploy/worker --tail=10 --follow --namespace=kustomcoins ``` + + ] Note: it might take a minute or two for the worker to start. diff --git a/slides/k8s/logs-cli.md b/slides/k8s/logs-cli.md index 5dda900b..9c810692 100644 --- a/slides/k8s/logs-cli.md +++ b/slides/k8s/logs-cli.md @@ -84,14 +84,14 @@ Exactly what we need! .exercise[ -- View the logs for all the rng containers: +- View the logs for all the pingpong containers: ```bash - stern rng + stern pingpong ``` ] @@ -117,7 +117,7 @@ Exactly what we need! ] @@ -138,14 +138,14 @@ Exactly what we need! 
.exercise[ -- View the logs for all the things started with `kubectl create deployment`: +- View the logs for all the things started with `kubectl run`: ```bash - stern -l app + stern -l run ``` ] diff --git a/slides/k8s/netpol.md b/slides/k8s/netpol.md index 21950c1b..5ec070d9 100644 --- a/slides/k8s/netpol.md +++ b/slides/k8s/netpol.md @@ -120,6 +120,12 @@ This is our game plan: kubectl create deployment testweb --image=nginx ``` + + - Find out the IP address of the pod with one of these two commands: ```bash kubectl get pods -o wide -l app=testweb @@ -154,6 +160,11 @@ The `curl` command should show us the "Welcome to nginx!" page. curl $IP ``` + + ] The `curl` command should now time out. diff --git a/slides/k8s/ourapponkube.md b/slides/k8s/ourapponkube.md index abe48620..992a38b3 100644 --- a/slides/k8s/ourapponkube.md +++ b/slides/k8s/ourapponkube.md @@ -108,7 +108,7 @@ kubectl wait deploy/worker --for condition=available ] diff --git a/slides/k8s/podsecuritypolicy.md b/slides/k8s/podsecuritypolicy.md index 97c721d1..0deb8284 100644 --- a/slides/k8s/podsecuritypolicy.md +++ b/slides/k8s/podsecuritypolicy.md @@ -220,6 +220,8 @@ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ``` + + ] --- @@ -240,6 +242,16 @@ - Save, quit + + ] --- @@ -271,6 +283,8 @@ kubectl run testpsp1 --image=nginx --restart=Never ``` + + - Try to create a Deployment: ```bash kubectl run testpsp2 --image=nginx @@ -498,3 +512,22 @@ class: extra-details - bind `psp:restricted` to the group `system:authenticated` - bind `psp:privileged` to the ServiceAccount `kube-system:default` + +--- + +## Fixing the cluster + +- Let's disable the PSP admission plugin + +.exercise[ + +- Edit the Kubernetes API server static pod manifest + +- Remove the PSP admission plugin + +- This can be done with this one-liner: + ```bash + sudo sed -i s/,PodSecurityPolicy// /etc/kubernetes/manifests/kube-apiserver.yaml + ``` + +] diff --git a/slides/k8s/portworx.md b/slides/k8s/portworx.md index 
ac0b7abd..46887b73 100644 --- a/slides/k8s/portworx.md +++ b/slides/k8s/portworx.md @@ -197,7 +197,7 @@ If you want to use an external key/value store, add one of the following: ] @@ -374,7 +374,7 @@ spec: autopilot prompt detection expects $ or # at the beginning of the line. ```wait postgres@postgres``` ```keys PS1="\u@\h:\w\n\$ "``` -```keys ^J``` +```key ^J``` --> - Check that default databases have been created correctly: @@ -428,7 +428,7 @@ autopilot prompt detection expects $ or # at the beginning of the line. psql demo -c "select count(*) from pgbench_accounts" ``` - + ] @@ -491,7 +491,7 @@ By "disrupt" we mean: "disconnect it from the network". - Logout to go back on `node1` - + - Watch the events unfolding with `kubectl get events -w` and `kubectl get pods -w` @@ -519,7 +519,7 @@ By "disrupt" we mean: "disconnect it from the network". - Check the number of rows in the `pgbench_accounts` table: @@ -527,7 +527,7 @@ By "disrupt" we mean: "disconnect it from the network". psql demo -c "select count(*) from pgbench_accounts" ``` - + ] diff --git a/slides/k8s/rollout.md b/slides/k8s/rollout.md index eb777c83..5e8c1a9f 100644 --- a/slides/k8s/rollout.md +++ b/slides/k8s/rollout.md @@ -94,7 +94,7 @@ - Update `worker` either with `kubectl edit`, or by running: @@ -150,7 +150,7 @@ That rollout should be pretty quick. What shows in the web UI? ] @@ -229,11 +229,7 @@ If you didn't deploy the Kubernetes dashboard earlier, just skip this slide. .exercise[ - + - Cancel the deployment and wait for the dust to settle: ```bash @@ -336,7 +332,7 @@ We might see something like 1, 4, 5. 
- Check the annotations for our replica sets: ```bash - kubectl describe replicasets -l app=worker | grep -A3 + kubectl describe replicasets -l app=worker | grep -A3 ^Annotations ``` ] diff --git a/slides/k8s/scalingdockercoins.md b/slides/k8s/scalingdockercoins.md index 75e9a7de..36ce2674 100644 --- a/slides/k8s/scalingdockercoins.md +++ b/slides/k8s/scalingdockercoins.md @@ -19,17 +19,14 @@ .exercise[ -- Open two new terminals to check what's going on with pods and deployments: +- Open a new terminal to keep an eye on our pods: ```bash kubectl get pods -w - kubectl get deployments -w ``` - Now, create more `worker` replicas: @@ -73,6 +70,11 @@ The graph in the web UI should go up again. kubectl scale deployment worker --replicas=10 ``` + + ] -- diff --git a/slides/k8s/shippingimages.md b/slides/k8s/shippingimages.md index 2529a303..6618501f 100644 --- a/slides/k8s/shippingimages.md +++ b/slides/k8s/shippingimages.md @@ -105,6 +105,11 @@ docker run ctr.run/github.com/jpetazzo/container.training/dockercoins/hasher ``` + + ] There might be a long pause before the first layer is pulled, diff --git a/slides/k8s/statefulsets.md b/slides/k8s/statefulsets.md index 6c7c15a9..102ef1c3 100644 --- a/slides/k8s/statefulsets.md +++ b/slides/k8s/statefulsets.md @@ -427,7 +427,7 @@ nodes and encryption of gossip traffic) were removed for simplicity. - Check the health of the cluster: diff --git a/slides/k8s/volumes.md b/slides/k8s/volumes.md index ef41b4cb..3952bcb9 100644 --- a/slides/k8s/volumes.md +++ b/slides/k8s/volumes.md @@ -110,6 +110,8 @@ It runs a single NGINX container. 
kubectl create -f ~/container.training/k8s/nginx-1-without-volume.yaml ``` + + - Get its IP address: ```bash IPADDR=$(kubectl get pod nginx-without-volume -o jsonpath={.status.podIP}) @@ -175,6 +177,8 @@ spec: kubectl create -f ~/container.training/k8s/nginx-2-with-volume.yaml ``` + + - Get its IP address: ```bash IPADDR=$(kubectl get pod nginx-with-volume -o jsonpath={.status.podIP}) @@ -269,6 +273,11 @@ spec: kubectl get pods -o wide --watch ``` + + ] --- @@ -282,11 +291,18 @@ spec: kubectl create -f ~/container.training/k8s/nginx-3-with-git.yaml ``` + + - As soon as we see its IP address, access it: ```bash curl $IP ``` + + - A few seconds later, the state of the pod will change; access it again: ```bash curl $IP @@ -399,10 +415,19 @@ spec: ## Trying the init container +.exercise[ + - Repeat the same operation as earlier (try to send HTTP requests as soon as the pod comes up) + + +] + - This time, instead of "403 Forbidden" we get a "connection refused" - NGINX doesn't start until the git container has done its job diff --git a/slides/shared/composescale.md b/slides/shared/composescale.md index ff8578f1..eba94dd5 100644 --- a/slides/shared/composescale.md +++ b/slides/shared/composescale.md @@ -40,7 +40,7 @@ class: extra-details ] @@ -75,7 +75,7 @@ Tip: use `^S` and `^Q` to pause/resume log output. ```bash top``` ```wait Tasks``` -```keys ^C``` +```key ^C``` --> - run `vmstat 1` to see I/O usage (si/so/bi/bo) @@ -85,7 +85,7 @@ Tip: use `^S` and `^Q` to pause/resume log output. 
```bash vmstat 1``` ```wait memory``` -```keys ^C``` +```key ^C``` --> ] diff --git a/slides/shared/sampleapp.md b/slides/shared/sampleapp.md index 0636d2de..52a947f8 100644 --- a/slides/shared/sampleapp.md +++ b/slides/shared/sampleapp.md @@ -343,7 +343,7 @@ class: extra-details - Stop the application by hitting `^C` ] diff --git a/slides/swarm/creatingswarm.md b/slides/swarm/creatingswarm.md index 2f0d2f2f..7a053fb3 100644 --- a/slides/swarm/creatingswarm.md +++ b/slides/swarm/creatingswarm.md @@ -267,7 +267,7 @@ class: extra-details - Switch back to `node1` (with `exit`, `Ctrl-D` ...) - + - View the cluster from `node1`, which is a manager: ```bash diff --git a/slides/swarm/ipsec.md b/slides/swarm/ipsec.md index 5f8d48b5..b6803c24 100644 --- a/slides/swarm/ipsec.md +++ b/slides/swarm/ipsec.md @@ -72,7 +72,7 @@ ``` - + ] diff --git a/slides/swarm/logging.md b/slides/swarm/logging.md index 975949ed..6ebc7058 100644 --- a/slides/swarm/logging.md +++ b/slides/swarm/logging.md @@ -158,7 +158,7 @@ class: elk-manual ``` - + ] @@ -266,7 +266,7 @@ The test message should show up in the logstash container logs. ``` - + ] From 87462939d91b69e169246e14621fabede36222ad Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Sat, 18 Jan 2020 11:12:33 -0600 Subject: [PATCH 11/16] Update dashboard to version 2.0 --- k8s/insecure-dashboard.yaml | 328 ++++++++++++++++++++++++++---------- 1 file changed, 236 insertions(+), 92 deletions(-) diff --git a/k8s/insecure-dashboard.yaml b/k8s/insecure-dashboard.yaml index 43ea51fa..ebf49362 100644 --- a/k8s/insecure-dashboard.yaml +++ b/k8s/insecure-dashboard.yaml @@ -12,19 +12,12 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-# ------------------- Dashboard Secret ------------------- # - apiVersion: v1 -kind: Secret +kind: Namespace metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard-certs - namespace: kube-system -type: Opaque + name: kubernetes-dashboard --- -# ------------------- Dashboard Service Account ------------------- # apiVersion: v1 kind: ServiceAccount @@ -32,62 +25,147 @@ metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard - namespace: kube-system + namespace: kubernetes-dashboard + +--- + +kind: Service +apiVersion: v1 +metadata: + labels: + k8s-app: kubernetes-dashboard + name: kubernetes-dashboard + namespace: kubernetes-dashboard +spec: + ports: + - port: 443 + targetPort: 8443 + selector: + k8s-app: kubernetes-dashboard + +--- + +apiVersion: v1 +kind: Secret +metadata: + labels: + k8s-app: kubernetes-dashboard + name: kubernetes-dashboard-certs + namespace: kubernetes-dashboard +type: Opaque + +--- + +apiVersion: v1 +kind: Secret +metadata: + labels: + k8s-app: kubernetes-dashboard + name: kubernetes-dashboard-csrf + namespace: kubernetes-dashboard +type: Opaque +data: + csrf: "" + +--- + +apiVersion: v1 +kind: Secret +metadata: + labels: + k8s-app: kubernetes-dashboard + name: kubernetes-dashboard-key-holder + namespace: kubernetes-dashboard +type: Opaque + +--- + +kind: ConfigMap +apiVersion: v1 +metadata: + labels: + k8s-app: kubernetes-dashboard + name: kubernetes-dashboard-settings + namespace: kubernetes-dashboard --- -# ------------------- Dashboard Role & Role Binding ------------------- # kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: - name: kubernetes-dashboard-minimal - namespace: kube-system + labels: + k8s-app: kubernetes-dashboard + name: kubernetes-dashboard + namespace: kubernetes-dashboard rules: - # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret. 
-- apiGroups: [""] - resources: ["secrets"] - verbs: ["create"] - # Allow Dashboard to create 'kubernetes-dashboard-settings' config map. -- apiGroups: [""] - resources: ["configmaps"] - verbs: ["create"] # Allow Dashboard to get, update and delete Dashboard exclusive secrets. -- apiGroups: [""] - resources: ["secrets"] - resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"] - verbs: ["get", "update", "delete"] - # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. -- apiGroups: [""] - resources: ["configmaps"] - resourceNames: ["kubernetes-dashboard-settings"] - verbs: ["get", "update"] - # Allow Dashboard to get metrics from heapster. -- apiGroups: [""] - resources: ["services"] - resourceNames: ["heapster"] - verbs: ["proxy"] -- apiGroups: [""] - resources: ["services/proxy"] - resourceNames: ["heapster", "http:heapster:", "https:heapster:"] - verbs: ["get"] + - apiGroups: [""] + resources: ["secrets"] + resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"] + verbs: ["get", "update", "delete"] + # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. + - apiGroups: [""] + resources: ["configmaps"] + resourceNames: ["kubernetes-dashboard-settings"] + verbs: ["get", "update"] + # Allow Dashboard to get metrics. 
+ - apiGroups: [""] + resources: ["services"] + resourceNames: ["heapster", "dashboard-metrics-scraper"] + verbs: ["proxy"] + - apiGroups: [""] + resources: ["services/proxy"] + resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"] + verbs: ["get"] --- + +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: + k8s-app: kubernetes-dashboard + name: kubernetes-dashboard +rules: + # Allow Metrics Scraper to get metrics from the Metrics server + - apiGroups: ["metrics.k8s.io"] + resources: ["pods", "nodes"] + verbs: ["get", "list", "watch"] + +--- + apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: - name: kubernetes-dashboard-minimal - namespace: kube-system + labels: + k8s-app: kubernetes-dashboard + name: kubernetes-dashboard + namespace: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: Role - name: kubernetes-dashboard-minimal -subjects: -- kind: ServiceAccount name: kubernetes-dashboard - namespace: kube-system +subjects: + - kind: ServiceAccount + name: kubernetes-dashboard + namespace: kubernetes-dashboard + +--- + +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: kubernetes-dashboard +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: kubernetes-dashboard +subjects: + - kind: ServiceAccount + name: kubernetes-dashboard + namespace: kubernetes-dashboard --- -# ------------------- Dashboard Deployment ------------------- # kind: Deployment apiVersion: apps/v1 @@ -95,7 +173,7 @@ metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard - namespace: kube-system + namespace: kubernetes-dashboard spec: replicas: 1 revisionHistoryLimit: 10 @@ -108,60 +186,124 @@ spec: k8s-app: kubernetes-dashboard spec: containers: - - name: kubernetes-dashboard - image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 - ports: - - containerPort: 8443 - protocol: TCP - 
args: - - --auto-generate-certificates - - --enable-skip-login - # Uncomment the following line to manually specify Kubernetes API server Host - # If not specified, Dashboard will attempt to auto discover the API server and connect - # to it. Uncomment only if the default does not work. - # - --apiserver-host=http://my-address:port - volumeMounts: - - name: kubernetes-dashboard-certs - mountPath: /certs - # Create on-disk volume to store exec logs - - mountPath: /tmp - name: tmp-volume - livenessProbe: - httpGet: - scheme: HTTPS - path: / - port: 8443 - initialDelaySeconds: 30 - timeoutSeconds: 30 + - name: kubernetes-dashboard + image: kubernetesui/dashboard:v2.0.0-rc2 + imagePullPolicy: Always + ports: + - containerPort: 8443 + protocol: TCP + args: + - --auto-generate-certificates + - --namespace=kubernetes-dashboard + # Uncomment the following line to manually specify Kubernetes API server Host + # If not specified, Dashboard will attempt to auto discover the API server and connect + # to it. Uncomment only if the default does not work. 
+ # - --apiserver-host=http://my-address:port + - --enable-skip-login + volumeMounts: + - name: kubernetes-dashboard-certs + mountPath: /certs + # Create on-disk volume to store exec logs + - mountPath: /tmp + name: tmp-volume + livenessProbe: + httpGet: + scheme: HTTPS + path: / + port: 8443 + initialDelaySeconds: 30 + timeoutSeconds: 30 + securityContext: + allowPrivilegeEscalation: false + readOnlyRootFilesystem: true + runAsUser: 1001 + runAsGroup: 2001 volumes: - - name: kubernetes-dashboard-certs - secret: - secretName: kubernetes-dashboard-certs - - name: tmp-volume - emptyDir: {} + - name: kubernetes-dashboard-certs + secret: + secretName: kubernetes-dashboard-certs + - name: tmp-volume + emptyDir: {} serviceAccountName: kubernetes-dashboard + nodeSelector: + "beta.kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule + - key: node-role.kubernetes.io/master + effect: NoSchedule --- -# ------------------- Dashboard Service ------------------- # kind: Service apiVersion: v1 metadata: labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system + k8s-app: dashboard-metrics-scraper + name: dashboard-metrics-scraper + namespace: kubernetes-dashboard spec: ports: - - port: 443 - targetPort: 8443 + - port: 8000 + targetPort: 8000 selector: - k8s-app: kubernetes-dashboard + k8s-app: dashboard-metrics-scraper + --- + +kind: Deployment +apiVersion: apps/v1 +metadata: + labels: + k8s-app: dashboard-metrics-scraper + name: dashboard-metrics-scraper + namespace: kubernetes-dashboard +spec: + replicas: 1 + revisionHistoryLimit: 10 + selector: + matchLabels: + k8s-app: dashboard-metrics-scraper + template: + metadata: + labels: + k8s-app: dashboard-metrics-scraper + annotations: + seccomp.security.alpha.kubernetes.io/pod: 'runtime/default' + spec: + containers: + - name: dashboard-metrics-scraper + image: 
kubernetesui/metrics-scraper:v1.0.2 + ports: + - containerPort: 8000 + protocol: TCP + livenessProbe: + httpGet: + scheme: HTTP + path: / + port: 8000 + initialDelaySeconds: 30 + timeoutSeconds: 30 + volumeMounts: + - mountPath: /tmp + name: tmp-volume + securityContext: + allowPrivilegeEscalation: false + readOnlyRootFilesystem: true + runAsUser: 1001 + runAsGroup: 2001 + serviceAccountName: kubernetes-dashboard + nodeSelector: + "beta.kubernetes.io/os": linux + # Comment the following tolerations if Dashboard must not be deployed on master + tolerations: + - key: node-role.kubernetes.io/master + effect: NoSchedule + volumes: + - name: tmp-volume + emptyDir: {} + +--- + apiVersion: apps/v1 kind: Deployment metadata: @@ -181,10 +323,12 @@ spec: - args: - sh - -c - - apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard.kube-system:443,verify=0 + - apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard.kubernetes-dashboard:443,verify=0 image: alpine name: dashboard + --- + apiVersion: v1 kind: Service metadata: @@ -199,13 +343,13 @@ spec: selector: app: dashboard type: NodePort + --- -apiVersion: rbac.authorization.k8s.io/v1beta1 + +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: kubernetes-dashboard - labels: - k8s-app: kubernetes-dashboard + name: insecure-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole @@ -213,4 +357,4 @@ roleRef: subjects: - kind: ServiceAccount name: kubernetes-dashboard - namespace: kube-system + namespace: kubernetes-dashboard From db276af18226a99ec39475c7b7299bdb02f939ce Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Sat, 18 Jan 2020 11:33:02 -0600 Subject: [PATCH 12/16] Update Consul Bump up Consul version to 1.6. Change persistent consul demo; instead of a separate namespace, use a different label. This way, the two manifests can be more similar; and this simplifies the demo flow. 
--- k8s/consul.yaml | 6 +--- k8s/persistent-consul.yaml | 47 ++++++++++++++------------ slides/k8s/local-persistent-volumes.md | 36 +++++--------------- 3 files changed, 34 insertions(+), 55 deletions(-) diff --git a/k8s/consul.yaml b/k8s/consul.yaml index 8b254adb..d8452a0c 100644 --- a/k8s/consul.yaml +++ b/k8s/consul.yaml @@ -2,8 +2,6 @@ apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: consul - labels: - app: consul rules: - apiGroups: [""] resources: @@ -29,8 +27,6 @@ apiVersion: v1 kind: ServiceAccount metadata: name: consul - labels: - app: consul --- apiVersion: v1 kind: Service @@ -72,7 +68,7 @@ spec: terminationGracePeriodSeconds: 10 containers: - name: consul - image: "consul:1.5" + image: "consul:1.6" args: - "agent" - "-bootstrap-expect=3" diff --git a/k8s/persistent-consul.yaml b/k8s/persistent-consul.yaml index 64c35065..a08556bb 100644 --- a/k8s/persistent-consul.yaml +++ b/k8s/persistent-consul.yaml @@ -1,51 +1,54 @@ apiVersion: rbac.authorization.k8s.io/v1 -kind: Role +kind: ClusterRole metadata: - name: consul + name: persistentconsul rules: - - apiGroups: [ "" ] - resources: [ pods ] - verbs: [ get, list ] + - apiGroups: [""] + resources: + - pods + verbs: + - get + - list --- apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding +kind: ClusterRoleBinding metadata: - name: consul + name: persistentconsul roleRef: apiGroup: rbac.authorization.k8s.io - kind: Role - name: consul + kind: ClusterRole + name: persistentconsul subjects: - kind: ServiceAccount - name: consul - namespace: orange + name: persistentconsul + namespace: default --- apiVersion: v1 kind: ServiceAccount metadata: - name: consul + name: persistentconsul --- apiVersion: v1 kind: Service metadata: - name: consul + name: persistentconsul spec: ports: - port: 8500 name: http selector: - app: consul + app: persistentconsul --- apiVersion: apps/v1 kind: StatefulSet metadata: - name: consul + name: persistentconsul spec: - serviceName: consul + 
serviceName: persistentconsul replicas: 3 selector: matchLabels: - app: consul + app: persistentconsul volumeClaimTemplates: - metadata: name: data @@ -58,9 +61,9 @@ spec: template: metadata: labels: - app: consul + app: persistentconsul spec: - serviceAccountName: consul + serviceAccountName: persistentconsul affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: @@ -69,19 +72,19 @@ spec: - key: app operator: In values: - - consul + - persistentconsul topologyKey: kubernetes.io/hostname terminationGracePeriodSeconds: 10 containers: - name: consul - image: "consul:1.5" + image: "consul:1.6" volumeMounts: - name: data mountPath: /consul/data args: - "agent" - "-bootstrap-expect=3" - - "-retry-join=provider=k8s namespace=orange label_selector=\"app=consul\"" + - "-retry-join=provider=k8s label_selector=\"app=persistentconsul\"" - "-client=0.0.0.0" - "-data-dir=/consul/data" - "-server" diff --git a/slides/k8s/local-persistent-volumes.md b/slides/k8s/local-persistent-volumes.md index b2f86387..075cb619 100644 --- a/slides/k8s/local-persistent-volumes.md +++ b/slides/k8s/local-persistent-volumes.md @@ -56,28 +56,6 @@ --- -## Work in a separate namespace - -- To avoid conflicts with existing resources, let's create and use a new namespace - -.exercise[ - -- Create a new namespace: - ```bash - kubectl create namespace orange - ``` - -- Switch to that namespace: - ```bash - kns orange - ``` - -] - -.warning[Make sure to call that namespace `orange`: it is hardcoded in the YAML files.] - ---- - ## Deploying Consul - We will use a slightly different YAML file @@ -88,7 +66,9 @@ - the corresponding `volumeMounts` in the Pod spec - - the namespace `orange` used for discovery of Pods + - the label `consul` has been changed to `persistentconsul` +
+ (to avoid conflicts with the other Stateful Set) .exercise[ @@ -117,7 +97,7 @@ kubectl get pv ``` -- The Pod `consul-0` is not scheduled yet: +- The Pod `persistentconsul-0` is not scheduled yet: ```bash kubectl get pods -o wide ``` @@ -132,9 +112,9 @@ - In a Stateful Set, the Pods are started one by one -- `consul-1` won't be created until `consul-0` is running +- `persistentconsul-1` won't be created until `persistentconsul-0` is running -- `consul-0` has a dependency on an unbound Persistent Volume Claim +- `persistentconsul-0` has a dependency on an unbound Persistent Volume Claim - The scheduler won't schedule the Pod until the PVC is bound @@ -172,7 +152,7 @@ - Once a PVC is bound, its pod can start normally -- Once the pod `consul-0` has started, `consul-1` can be created, etc. +- Once the pod `persistentconsul-0` has started, `persistentconsul-1` can be created, etc. - Eventually, our Consul cluster is up, and backed by "persistent" volumes @@ -180,7 +160,7 @@ - Check that our Consul cluster has 3 members indeed: ```bash - kubectl exec consul-0 consul members + kubectl exec persistentconsul-0 consul members ``` ] From 745a435a1aa5ba76d5945df95cc040e082533a9b Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Sat, 18 Jan 2020 11:51:57 -0600 Subject: [PATCH 13/16] Fix linebreak on cronjob --- slides/k8s/kubectlrun.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/slides/k8s/kubectlrun.md b/slides/k8s/kubectlrun.md index 01fe08e6..0cb5bf33 100644 --- a/slides/k8s/kubectlrun.md +++ b/slides/k8s/kubectlrun.md @@ -342,7 +342,8 @@ Let's leave `kubectl logs` running while we keep exploring. 
- Create the Cron Job: ```bash - kubectl run every3mins --schedule="*/3 * * * *" --restart=OnFailure --image=alpine sleep 10 + kubectl run every3mins --schedule="*/3 * * * *" --restart=OnFailure \ + --image=alpine sleep 10 ``` - Check the resource that was created: From ba323cb4e6e96818c57c8a5710fd3c9ee5f1c59c Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Sat, 18 Jan 2020 12:06:04 -0600 Subject: [PATCH 14/16] Update Portworx --- k8s/portworx.yaml | 495 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 423 insertions(+), 72 deletions(-) diff --git a/k8s/portworx.yaml b/k8s/portworx.yaml index c3e21d0a..f29a54c3 100644 --- a/k8s/portworx.yaml +++ b/k8s/portworx.yaml @@ -1,4 +1,4 @@ -# SOURCE: https://install.portworx.com/?kbver=1.15.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true&st=k8s&mc=false +# SOURCE: https://install.portworx.com/?mc=false&kbver=1.17.1&b=true&s=%2Fdev%2Floop4&j=auto&c=px-workshop&stork=true&csi=true&lh=true&st=k8s --- kind: Service apiVersion: v1 @@ -10,7 +10,7 @@ metadata: spec: selector: name: portworx - type: NodePort + type: ClusterIP ports: - name: px-api protocol: TCP @@ -50,6 +50,165 @@ spec: shortNames: - vps - vp + preserveUnknownFields: false + validation: + openAPIV3Schema: + type: object + required: + - spec + properties: + spec: + type: object + description: The desired spec of the volume placement strategy + properties: + replicaAffinity: + type: array + description: Allows you to specify a rule which creates an affinity for replicas within a volume + items: + type: object + properties: + enforcement: + type: string + enum: + - required + - preferred + description: Specifies if the given rule is required (hard) or preferred (soft) + topologyKey: + type: string + minLength: 1 + description: Key for the node label that the system uses to denote a topology domain. The key can be for any node label that is present on the Kubernetes node. 
+ matchExpressions: + description: Expression to use for the replica affinity rule + type: array + items: + type: object + properties: + key: + type: string + minLength: 1 + operator: + type: string + enum: + - In + - NotIn + - Exists + - DoesNotExist + - Lt + - Gt + description: The logical operator to use for comparing the key and values in the match expression + values: + type: array + items: + type: string + required: + - key + - operator + replicaAntiAffinity: + type: array + description: Allows you to specify a rule that creates an anti-affinity for replicas within a volume + items: + type: object + properties: + enforcement: + type: string + enum: + - required + - preferred + description: Specifies if the given rule is required (hard) or preferred (soft) + topologyKey: + type: string + minLength: 1 + description: Key for the node label that the system uses to denote a topology domain. The key can be for any node label that is present on the Kubernetes node. + required: + - topologyKey + volumeAffinity: + type: array + description: Allows you to colocate volumes by specifying rules that place replicas of a volume together with those of another volume for which the specified labels match + items: + type: object + properties: + enforcement: + type: string + enum: + - required + - preferred + description: Specifies if the given rule is required (hard) or preferred (soft) + topologyKey: + type: string + minLength: 1 + description: Key for the node label that the system uses to denote a topology domain. The key can be for any node label that is present on the Kubernetes node. 
+ matchExpressions: + description: Expression to use for the volume affinity rule + type: array + items: + type: object + properties: + key: + type: string + minLength: 1 + operator: + type: string + enum: + - In + - NotIn + - Exists + - DoesNotExist + - Lt + - Gt + description: The logical operator to use for comparing the key and values in the match expression + values: + type: array + items: + type: string + required: + - key + - operator + required: + - matchExpressions + volumeAntiAffinity: + type: array + description: Allows you to specify dissociation rules between 2 or more volumes that match the given labels + items: + type: object + properties: + enforcement: + type: string + enum: + - required + - preferred + description: Specifies if the given rule is required (hard) or preferred (soft) + topologyKey: + type: string + minLength: 1 + description: Key for the node label that the system uses to denote a topology domain. The key can be for any node label that is present on the Kubernetes node. 
+ matchExpressions: + description: Expression to use for the volume anti affinity rule + type: array + items: + type: object + properties: + key: + type: string + minLength: 1 + operator: + type: string + enum: + - In + - NotIn + - Exists + - DoesNotExist + - Lt + - Gt + description: The logical operator to use for comparing the key and values in the match expression + values: + type: array + items: + type: string + required: + - key + - operator + required: + - matchExpressions --- apiVersion: v1 kind: ServiceAccount @@ -84,6 +243,13 @@ rules: - apiGroups: ["portworx.io"] resources: ["volumeplacementstrategies"] verbs: ["get", "list"] +- apiGroups: ["stork.libopenstorage.org"] + resources: ["backuplocations"] + verbs: ["get", "list"] +- apiGroups: [""] + resources: ["events"] + verbs: ["create"] + --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 @@ -127,14 +293,19 @@ roleRef: name: px-role apiGroup: rbac.authorization.k8s.io --- -apiVersion: extensions/v1beta1 +apiVersion: apps/v1 kind: DaemonSet metadata: name: portworx namespace: kube-system + labels: + name: portworx annotations: - portworx.com/install-source: "https://install.portworx.com/?kbver=1.15.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true&st=k8s&mc=false" + portworx.com/install-source: "https://install.portworx.com/?mc=false&kbver=1.17.1&b=true&s=%2Fdev%2Floop4&j=auto&c=px-workshop&stork=true&csi=true&lh=true&st=k8s" spec: + selector: + matchLabels: + name: portworx minReadySeconds: 0 updateStrategy: type: RollingUpdate @@ -158,28 +329,20 @@ spec: operator: DoesNotExist hostNetwork: true hostPID: false - initContainers: - - name: checkloop - image: alpine - command: [ "sh", "-c" ] - args: - - | - if ! grep -q loop4 /proc/partitions; then - echo 'Could not find "loop4" in /proc/partitions. Please create it first.' 
- exit 1 - fi containers: - name: portworx - image: portworx/oci-monitor:2.1.3 + image: portworx/oci-monitor:2.3.2 imagePullPolicy: Always args: - ["-c", "px-workshop", "-s", "/dev/loop4", "-secret_type", "k8s", "-b", + ["-c", "px-workshop", "-s", "/dev/loop4", "-secret_type", "k8s", "-j", "auto", "-b", "-x", "kubernetes"] env: - name: "AUTO_NODE_RECOVERY_TIMEOUT_IN_SECS" value: "1500" - name: "PX_TEMPLATE_VERSION" value: "v4" + - name: CSI_ENDPOINT + value: unix:///var/lib/kubelet/plugins/pxd.portworx.com/csi.sock livenessProbe: periodSeconds: 30 @@ -210,6 +373,10 @@ spec: mountPath: /etc/crictl.yaml - name: etcpwx mountPath: /etc/pwx + - name: dev + mountPath: /dev + - name: csi-driver-path + mountPath: /var/lib/kubelet/plugins/pxd.portworx.com - name: optpwx mountPath: /opt/pwx - name: procmount @@ -224,6 +391,27 @@ spec: readOnly: true - name: dbusmount mountPath: /var/run/dbus + - name: csi-node-driver-registrar + image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0 + args: + - "--v=5" + - "--csi-address=$(ADDRESS)" + - "--kubelet-registration-path=/var/lib/kubelet/plugins/pxd.portworx.com/csi.sock" + imagePullPolicy: Always + env: + - name: ADDRESS + value: /csi/csi.sock + - name: KUBE_NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + securityContext: + privileged: true + volumeMounts: + - name: csi-driver-path + mountPath: /csi + - name: registration-dir + mountPath: /registration restartPolicy: Always serviceAccountName: px-account volumes: @@ -246,6 +434,17 @@ spec: - name: etcpwx hostPath: path: /etc/pwx + - name: dev + hostPath: + path: /dev + - name: registration-dir + hostPath: + path: /var/lib/kubelet/plugins_registry + type: DirectoryOrCreate + - name: csi-driver-path + hostPath: + path: /var/lib/kubelet/plugins/pxd.portworx.com + type: DirectoryOrCreate - name: optpwx hostPath: path: /opt/pwx @@ -265,6 +464,172 @@ spec: hostPath: path: /var/run/dbus --- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: px-csi-account + 
namespace: kube-system +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: px-csi-role +rules: +- apiGroups: ["extensions"] + resources: ["podsecuritypolicies"] + resourceNames: ["privileged"] + verbs: ["use"] +- apiGroups: ["apiextensions.k8s.io"] + resources: ["customresourcedefinitions"] + verbs: ["*"] +- apiGroups: [""] + resources: ["nodes"] + verbs: ["get", "list", "watch"] +- apiGroups: [""] + resources: ["persistentvolumes"] + verbs: ["get", "list", "watch", "create", "delete", "update", "patch"] +- apiGroups: [""] + resources: ["persistentvolumeclaims"] + verbs: ["get", "list", "watch", "update"] +- apiGroups: [""] + resources: ["persistentvolumeclaims/status"] + verbs: ["update", "patch"] +- apiGroups: ["storage.k8s.io"] + resources: ["storageclasses"] + verbs: ["get", "list", "watch"] +- apiGroups: ["storage.k8s.io"] + resources: ["volumeattachments"] + verbs: ["get", "list", "watch", "update", "patch"] +- apiGroups: [""] + resources: ["events"] + verbs: ["list", "watch", "create", "update", "patch"] +- apiGroups: [""] + resources: ["secrets"] + verbs: ["get", "list"] +- apiGroups: ["snapshot.storage.k8s.io"] + resources: ["volumesnapshots", "volumesnapshotcontents", "volumesnapshotclasses", "volumesnapshots/status"] + verbs: ["create", "get", "list", "watch", "update", "delete"] +- apiGroups: [""] + resources: ["nodes"] + verbs: ["get", "list", "watch"] +- apiGroups: ["storage.k8s.io"] + resources: ["csinodes"] + verbs: ["get", "list", "watch", "update"] +- apiGroups: [""] + resources: ["nodes"] + verbs: ["get", "list", "watch"] +- apiGroups: ["csi.storage.k8s.io"] + resources: ["csidrivers"] + verbs: ["create", "delete"] +- apiGroups: [""] + resources: ["endpoints"] + verbs: ["get", "watch", "list", "delete", "update", "create"] +- apiGroups: ["coordination.k8s.io"] + resources: ["leases"] + verbs: ["*"] +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: px-csi-role-binding 
+subjects: +- kind: ServiceAccount + name: px-csi-account + namespace: kube-system +roleRef: + kind: ClusterRole + name: px-csi-role + apiGroup: rbac.authorization.k8s.io +--- +kind: Service +apiVersion: v1 +metadata: + name: px-csi-service + namespace: kube-system +spec: + clusterIP: None +--- +kind: Deployment +apiVersion: apps/v1 +metadata: + name: px-csi-ext + namespace: kube-system +spec: + replicas: 3 + selector: + matchLabels: + app: px-csi-driver + template: + metadata: + labels: + app: px-csi-driver + spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: px/enabled + operator: NotIn + values: + - "false" + - key: node-role.kubernetes.io/master + operator: DoesNotExist + serviceAccount: px-csi-account + containers: + - name: csi-external-provisioner + imagePullPolicy: Always + image: quay.io/openstorage/csi-provisioner:v1.4.0-1 + args: + - "--v=5" + - "--provisioner=pxd.portworx.com" + - "--csi-address=$(ADDRESS)" + - "--enable-leader-election" + - "--leader-election-type=leases" + env: + - name: ADDRESS + value: /csi/csi.sock + securityContext: + privileged: true + volumeMounts: + - name: socket-dir + mountPath: /csi + - name: csi-snapshotter + image: quay.io/k8scsi/csi-snapshotter:v2.0.0 + imagePullPolicy: Always + args: + - "--v=3" + - "--csi-address=$(ADDRESS)" + - "--leader-election=true" + env: + - name: ADDRESS + value: /csi/csi.sock + securityContext: + privileged: true + volumeMounts: + - name: socket-dir + mountPath: /csi + - name: csi-resizer + imagePullPolicy: Always + image: quay.io/k8scsi/csi-resizer:v0.3.0 + args: + - "--v=5" + - "--csi-address=$(ADDRESS)" + - "--leader-election=true" + env: + - name: ADDRESS + value: /csi/csi.sock + securityContext: + privileged: true + volumeMounts: + - name: socket-dir + mountPath: /csi + volumes: + - name: socket-dir + hostPath: + path: /var/lib/kubelet/plugins/pxd.portworx.com + type: DirectoryOrCreate +--- kind: Service 
apiVersion: v1 metadata: @@ -275,7 +640,7 @@ metadata: spec: selector: name: portworx-api - type: NodePort + type: ClusterIP ports: - name: px-api protocol: TCP @@ -290,12 +655,17 @@ spec: port: 9021 targetPort: 9021 --- -apiVersion: extensions/v1beta1 +apiVersion: apps/v1 kind: DaemonSet metadata: name: portworx-api namespace: kube-system + labels: + name: portworx-api spec: + selector: + matchLabels: + name: portworx-api minReadySeconds: 0 updateStrategy: type: RollingUpdate @@ -331,8 +701,14 @@ spec: port: 9001 restartPolicy: Always serviceAccountName: px-account - - +--- +apiVersion: storage.k8s.io/v1beta1 +kind: CSIDriver +metadata: + name: pxd.portworx.com +spec: + attachRequired: false + podInfoOnMount: false --- apiVersion: v1 kind: ConfigMap @@ -368,48 +744,9 @@ apiVersion: rbac.authorization.k8s.io/v1 metadata: name: stork-role rules: - - apiGroups: [""] - resources: ["pods", "pods/exec"] - verbs: ["get", "list", "delete", "create", "watch"] - - apiGroups: [""] - resources: ["persistentvolumes"] - verbs: ["get", "list", "watch", "create", "delete"] - - apiGroups: [""] - resources: ["persistentvolumeclaims"] - verbs: ["get", "list", "watch", "update"] - - apiGroups: ["storage.k8s.io"] - resources: ["storageclasses"] - verbs: ["get", "list", "watch"] - - apiGroups: [""] - resources: ["events"] - verbs: ["list", "watch", "create", "update", "patch"] - - apiGroups: ["stork.libopenstorage.org"] - resources: ["*"] - verbs: ["get", "list", "watch", "update", "patch", "create", "delete"] - - apiGroups: ["apiextensions.k8s.io"] - resources: ["customresourcedefinitions"] - verbs: ["create", "get"] - - apiGroups: ["volumesnapshot.external-storage.k8s.io"] - resources: ["volumesnapshots", "volumesnapshotdatas"] - verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - - apiGroups: [""] - resources: ["configmaps"] - verbs: ["get", "create", "update"] - - apiGroups: [""] - resources: ["services"] - verbs: ["get"] - - apiGroups: [""] - resources: 
["nodes"] - verbs: ["get", "list", "watch"] - - apiGroups: ["*"] - resources: ["deployments", "deployments/extensions"] - verbs: ["list", "get", "watch", "patch", "update", "initialize"] - - apiGroups: ["*"] - resources: ["statefulsets", "statefulsets/extensions"] - verbs: ["list", "get", "watch", "patch", "update", "initialize"] - apiGroups: ["*"] resources: ["*"] - verbs: ["list", "get"] + verbs: ["*"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 @@ -437,7 +774,7 @@ spec: port: 8099 targetPort: 8099 --- -apiVersion: extensions/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: annotations: @@ -447,6 +784,9 @@ metadata: name: stork namespace: kube-system spec: + selector: + matchLabels: + name: stork strategy: rollingUpdate: maxSurge: 1 @@ -469,7 +809,7 @@ spec: - --leader-elect=true - --health-monitor-interval=120 imagePullPolicy: Always - image: openstorage/stork:2.2.4 + image: openstorage/stork:2.3.1 env: - name: "PX_SERVICE_NAME" value: "portworx-api" @@ -512,8 +852,8 @@ rules: verbs: ["get", "create", "update"] - apiGroups: [""] resources: ["configmaps"] - verbs: ["get"] - - apiGroups: [""] + verbs: ["get", "list", "watch"] + - apiGroups: ["", "events.k8s.io"] resources: ["events"] verbs: ["create", "patch", "update"] - apiGroups: [""] @@ -548,8 +888,11 @@ rules: resources: ["persistentvolumeclaims", "persistentvolumes"] verbs: ["get", "list", "watch"] - apiGroups: ["storage.k8s.io"] - resources: ["storageclasses"] + resources: ["storageclasses", "csinodes"] verbs: ["get", "list", "watch"] + - apiGroups: ["coordination.k8s.io"] + resources: ["leases"] + verbs: ["create", "update", "get", "list", "watch"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 @@ -564,7 +907,7 @@ roleRef: name: stork-scheduler-role apiGroup: rbac.authorization.k8s.io --- -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: labels: @@ -574,12 +917,16 @@ metadata: name: stork-scheduler namespace: kube-system spec: 
+ selector: + matchLabels: + name: stork-scheduler replicas: 3 template: metadata: labels: component: scheduler tier: control-plane + name: stork-scheduler name: stork-scheduler spec: containers: @@ -591,7 +938,7 @@ spec: - --policy-configmap=stork-config - --policy-configmap-namespace=kube-system - --lock-object-name=stork-scheduler - image: gcr.io/google_containers/kube-scheduler-amd64:v1.15.2 + image: gcr.io/google_containers/kube-scheduler-amd64:v1.17.1 livenessProbe: httpGet: path: /healthz @@ -693,7 +1040,7 @@ spec: selector: tier: px-web-console --- -apiVersion: apps/v1beta1 +apiVersion: apps/v1 kind: Deployment metadata: name: px-lighthouse @@ -701,6 +1048,9 @@ metadata: labels: tier: px-web-console spec: + selector: + matchLabels: + tier: px-web-console strategy: rollingUpdate: maxSurge: 1 @@ -717,7 +1067,7 @@ spec: spec: initContainers: - name: config-init - image: portworx/lh-config-sync:0.4 + image: portworx/lh-config-sync:2.0.5 imagePullPolicy: Always args: - "init" @@ -726,7 +1076,7 @@ spec: mountPath: /config/lh containers: - name: px-lighthouse - image: portworx/px-lighthouse:2.0.4 + image: portworx/px-lighthouse:2.0.6 imagePullPolicy: Always args: [ "-kubernetes", "true" ] ports: @@ -736,7 +1086,7 @@ spec: - name: config mountPath: /config/lh - name: config-sync - image: portworx/lh-config-sync:0.4 + image: portworx/lh-config-sync:2.0.5 imagePullPolicy: Always args: - "sync" @@ -744,7 +1094,7 @@ spec: - name: config mountPath: /config/lh - name: stork-connector - image: portworx/lh-stork-connector:0.2 + image: portworx/lh-stork-connector:2.0.5 imagePullPolicy: Always serviceAccountName: px-lh-account volumes: @@ -763,3 +1113,4 @@ provisioner: kubernetes.io/portworx-volume parameters: repl: "2" priority_io: "high" + From a32df01165540836d4a56b66081d283d5a002ffc Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Sun, 19 Jan 2020 11:32:04 -0600 Subject: [PATCH 15/16] Revamp operator example Use Elastic Cloud for Kubernetes instead of the UPMC 
Enterprises operator. --- k8s/eck-cerebro.yaml | 69 ++ k8s/eck-elasticsearch.yaml | 19 + k8s/eck-filebeat.yaml | 168 ++++ k8s/eck-kibana.yaml | 17 + k8s/eck-operator.yaml | 1802 ++++++++++++++++++++++++++++++++++++ slides/k8s/operators.md | 306 +++++- 6 files changed, 2342 insertions(+), 39 deletions(-) create mode 100644 k8s/eck-cerebro.yaml create mode 100644 k8s/eck-elasticsearch.yaml create mode 100644 k8s/eck-filebeat.yaml create mode 100644 k8s/eck-kibana.yaml create mode 100644 k8s/eck-operator.yaml diff --git a/k8s/eck-cerebro.yaml b/k8s/eck-cerebro.yaml new file mode 100644 index 00000000..73f23e0a --- /dev/null +++ b/k8s/eck-cerebro.yaml @@ -0,0 +1,69 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: cerebro + name: cerebro +spec: + selector: + matchLabels: + app: cerebro + template: + metadata: + labels: + app: cerebro + spec: + volumes: + - name: conf + configMap: + name: cerebro + containers: + - image: lmenezes/cerebro + name: cerebro + volumeMounts: + - name: conf + mountPath: /conf + args: + - -Dconfig.file=/conf/application.conf + env: + - name: ELASTICSEARCH_PASSWORD + valueFrom: + secretKeyRef: + name: demo-es-elastic-user + key: elastic + +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: cerebro + name: cerebro +spec: + ports: + - port: 9000 + protocol: TCP + targetPort: 9000 + selector: + app: cerebro + type: NodePort +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: cerebro +data: + application.conf: | + secret = "ki:s:[[@=Ag?QI`W2jMwkY:eqvrJ]JqoJyi2axj3ZvOv^/KavOT4ViJSv?6YY4[N" + + hosts = [ + { + host = "http://demo-es-http.eck-demo.svc.cluster.local:9200" + name = "demo" + auth = { + username = "elastic" + password = ${?ELASTICSEARCH_PASSWORD} + } + } + ] diff --git a/k8s/eck-elasticsearch.yaml b/k8s/eck-elasticsearch.yaml new file mode 100644 index 00000000..f1142b3e --- /dev/null +++ b/k8s/eck-elasticsearch.yaml @@ -0,0 +1,19 @@ +apiVersion: elasticsearch.k8s.elastic.co/v1 +kind: Elasticsearch 
+metadata: + name: demo + namespace: eck-demo +spec: + http: + tls: + selfSignedCertificate: + disabled: true + nodeSets: + - name: default + count: 1 + config: + node.data: true + node.ingest: true + node.master: true + node.store.allow_mmap: false + version: 7.5.1 diff --git a/k8s/eck-filebeat.yaml b/k8s/eck-filebeat.yaml new file mode 100644 index 00000000..a4aa2928 --- /dev/null +++ b/k8s/eck-filebeat.yaml @@ -0,0 +1,168 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: filebeat-config + namespace: eck-demo + labels: + k8s-app: filebeat +data: + filebeat.yml: |- + filebeat.inputs: + - type: container + paths: + - /var/log/containers/*.log + processors: + - add_kubernetes_metadata: + host: ${NODE_NAME} + matchers: + - logs_path: + logs_path: "/var/log/containers/" + + # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this: + #filebeat.autodiscover: + # providers: + # - type: kubernetes + # node: ${NODE_NAME} + # hints.enabled: true + # hints.default_config: + # type: container + # paths: + # - /var/log/containers/*${data.kubernetes.container.id}.log + + processors: + - add_cloud_metadata: + - add_host_metadata: + + cloud.id: ${ELASTIC_CLOUD_ID} + cloud.auth: ${ELASTIC_CLOUD_AUTH} + + output.elasticsearch: + hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}'] + username: ${ELASTICSEARCH_USERNAME} + password: ${ELASTICSEARCH_PASSWORD} +--- +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: filebeat + namespace: eck-demo + labels: + k8s-app: filebeat +spec: + selector: + matchLabels: + k8s-app: filebeat + template: + metadata: + labels: + k8s-app: filebeat + spec: + serviceAccountName: filebeat + terminationGracePeriodSeconds: 30 + hostNetwork: true + dnsPolicy: ClusterFirstWithHostNet + containers: + - name: filebeat + image: docker.elastic.co/beats/filebeat:7.5.1 + args: [ + "-c", "/etc/filebeat.yml", + "-e", + ] + env: + - name: ELASTICSEARCH_HOST + value: demo-es-http + - name: 
ELASTICSEARCH_PORT + value: "9200" + - name: ELASTICSEARCH_USERNAME + value: elastic + - name: ELASTICSEARCH_PASSWORD + valueFrom: + secretKeyRef: + name: demo-es-elastic-user + key: elastic + - name: ELASTIC_CLOUD_ID + value: + - name: ELASTIC_CLOUD_AUTH + value: + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + securityContext: + runAsUser: 0 + # If using Red Hat OpenShift uncomment this: + #privileged: true + resources: + limits: + memory: 200Mi + requests: + cpu: 100m + memory: 100Mi + volumeMounts: + - name: config + mountPath: /etc/filebeat.yml + readOnly: true + subPath: filebeat.yml + - name: data + mountPath: /usr/share/filebeat/data + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + - name: varlog + mountPath: /var/log + readOnly: true + volumes: + - name: config + configMap: + defaultMode: 0600 + name: filebeat-config + - name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers + - name: varlog + hostPath: + path: /var/log + # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart + - name: data + hostPath: + path: /var/lib/filebeat-data + type: DirectoryOrCreate +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: filebeat +subjects: +- kind: ServiceAccount + name: filebeat + namespace: eck-demo +roleRef: + kind: ClusterRole + name: filebeat + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: filebeat + labels: + k8s-app: filebeat +rules: +- apiGroups: [""] # "" indicates the core API group + resources: + - namespaces + - pods + verbs: + - get + - watch + - list +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: filebeat + namespace: eck-demo + labels: + k8s-app: filebeat +--- diff --git a/k8s/eck-kibana.yaml b/k8s/eck-kibana.yaml new file mode 100644 index 00000000..21cc09c8 --- 
/dev/null +++ b/k8s/eck-kibana.yaml @@ -0,0 +1,17 @@ +apiVersion: kibana.k8s.elastic.co/v1 +kind: Kibana +metadata: + name: demo +spec: + version: 7.5.1 + count: 1 + elasticsearchRef: + name: demo + namespace: eck-demo + http: + service: + spec: + type: NodePort + tls: + selfSignedCertificate: + disabled: true diff --git a/k8s/eck-operator.yaml b/k8s/eck-operator.yaml new file mode 100644 index 00000000..9736a156 --- /dev/null +++ b/k8s/eck-operator.yaml @@ -0,0 +1,1802 @@ +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + creationTimestamp: null + name: apmservers.apm.k8s.elastic.co +spec: + additionalPrinterColumns: + - JSONPath: .status.health + name: health + type: string + - JSONPath: .status.availableNodes + description: Available nodes + name: nodes + type: integer + - JSONPath: .spec.version + description: APM version + name: version + type: string + - JSONPath: .metadata.creationTimestamp + name: age + type: date + group: apm.k8s.elastic.co + names: + categories: + - elastic + kind: ApmServer + listKind: ApmServerList + plural: apmservers + shortNames: + - apm + singular: apmserver + scope: Namespaced + subresources: + status: {} + validation: + openAPIV3Schema: + description: ApmServer represents an APM Server resource in a Kubernetes cluster. + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: ApmServerSpec holds the specification of an APM Server. + properties: + config: + description: 'Config holds the APM Server configuration. See: https://www.elastic.co/guide/en/apm/server/current/configuring-howto-apm-server.html' + type: object + count: + description: Count of APM Server instances to deploy. + format: int32 + type: integer + elasticsearchRef: + description: ElasticsearchRef is a reference to the output Elasticsearch + cluster running in the same Kubernetes cluster. + properties: + name: + description: Name of the Kubernetes object. + type: string + namespace: + description: Namespace of the Kubernetes object. If empty, defaults + to the current namespace. + type: string + required: + - name + type: object + http: + description: HTTP holds the HTTP layer configuration for the APM Server + resource. + properties: + service: + description: Service defines the template for the associated Kubernetes + Service object. + properties: + metadata: + description: ObjectMeta is the metadata of the service. The + name and namespace provided here are managed by ECK and will + be ignored. + type: object + spec: + description: Spec is the specification of the service. + properties: + clusterIP: + description: 'clusterIP is the IP address of the service + and is usually assigned randomly by the master. If an + address is specified manually and is not in use by others, + it will be allocated to the service; otherwise, creation + of the service will fail. This field can not be changed + through updates. Valid values are "None", empty string + (""), or a valid IP address. "None" can be specified for + headless services when proxying is not required. Only + applies to types ClusterIP, NodePort, and LoadBalancer. + Ignored if type is ExternalName. 
More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + type: string + externalIPs: + description: externalIPs is a list of IP addresses for which + nodes in the cluster will also accept traffic for this + service. These IPs are not managed by Kubernetes. The + user is responsible for ensuring that traffic arrives + at a node with this IP. A common example is external + load-balancers that are not part of the Kubernetes system. + items: + type: string + type: array + externalName: + description: externalName is the external reference that + kubedns or equivalent will return as a CNAME record for + this service. No proxying will be involved. Must be a + valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) + and requires Type to be ExternalName. + type: string + externalTrafficPolicy: + description: externalTrafficPolicy denotes if this Service + desires to route external traffic to node-local or cluster-wide + endpoints. "Local" preserves the client source IP and + avoids a second hop for LoadBalancer and Nodeport type + services, but risks potentially imbalanced traffic spreading. + "Cluster" obscures the client source IP and may cause + a second hop to another node, but should have good overall + load-spreading. + type: string + healthCheckNodePort: + description: healthCheckNodePort specifies the healthcheck + nodePort for the service. If not specified, HealthCheckNodePort + is created by the service api backend with the allocated + nodePort. Will use user-specified nodePort value if specified + by the client. Only effects when Type is set to LoadBalancer + and ExternalTrafficPolicy is set to Local. + format: int32 + type: integer + ipFamily: + description: ipFamily specifies whether this Service has + a preference for a particular IP family (e.g. IPv4 vs. + IPv6). If a specific IP family is requested, the clusterIP + field will be allocated from that family, if it is available + in the cluster. 
If no IP family is requested, the cluster's + primary IP family will be used. Other IP fields (loadBalancerIP, + loadBalancerSourceRanges, externalIPs) and controllers + which allocate external load-balancers should use the + same IP family. Endpoints for this Service will be of + this family. This field is immutable after creation. + Assigning a ServiceIPFamily not available in the cluster + (e.g. IPv6 in IPv4 only cluster) is an error condition + and will fail during clusterIP assignment. + type: string + loadBalancerIP: + description: 'Only applies to Service Type: LoadBalancer + LoadBalancer will get created with the IP specified in + this field. This feature depends on whether the underlying + cloud-provider supports specifying the loadBalancerIP + when a load balancer is created. This field will be ignored + if the cloud-provider does not support the feature.' + type: string + loadBalancerSourceRanges: + description: 'If specified and supported by the platform, + this will restrict traffic through the cloud-provider + load-balancer will be restricted to the specified client + IPs. This field will be ignored if the cloud-provider + does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/' + items: + type: string + type: array + ports: + description: 'The list of ports that are exposed by this + service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + items: + description: ServicePort contains information on service's + port. + properties: + name: + description: The name of this port within the service. + This must be a DNS_LABEL. All ports within a ServiceSpec + must have unique names. When considering the endpoints + for a Service, this must match the 'name' field + in the EndpointPort. Optional if only one ServicePort + is defined on this service. 
+ type: string + nodePort: + description: 'The port on each node on which this + service is exposed when type=NodePort or LoadBalancer. + Usually assigned by the system. If specified, it + will be allocated to the service if unused or else + creation of the service will fail. Default is to + auto-allocate a port if the ServiceType of this + Service requires one. More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport' + format: int32 + type: integer + port: + description: The port that will be exposed by this + service. + format: int32 + type: integer + protocol: + description: The IP protocol for this port. Supports + "TCP", "UDP", and "SCTP". Default is TCP. + type: string + targetPort: + anyOf: + - type: string + - type: integer + description: 'Number or name of the port to access + on the pods targeted by the service. Number must + be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + If this is a string, it will be looked up as a named + port in the target Pod''s container ports. If this + is not specified, the value of the ''port'' field + is used (an identity map). This field is ignored + for services with clusterIP=None, and should be + omitted or set equal to the ''port'' field. More + info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service' + required: + - port + type: object + type: array + publishNotReadyAddresses: + description: publishNotReadyAddresses, when set to true, + indicates that DNS implementations must publish the notReadyAddresses + of subsets for the Endpoints associated with the Service. + The default value is false. The primary use case for setting + this field is to use a StatefulSet's Headless Service + to propagate SRV records for its Pods without respect + to their readiness for purpose of peer discovery. 
+ type: boolean + selector: + additionalProperties: + type: string + description: 'Route service traffic to pods with label keys + and values matching this selector. If empty or not present, + the service is assumed to have an external process managing + its endpoints, which Kubernetes will not modify. Only + applies to types ClusterIP, NodePort, and LoadBalancer. + Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/' + type: object + sessionAffinity: + description: 'Supports "ClientIP" and "None". Used to maintain + session affinity. Enable client IP based session affinity. + Must be ClientIP or None. Defaults to None. More info: + https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + type: string + sessionAffinityConfig: + description: sessionAffinityConfig contains the configurations + of session affinity. + properties: + clientIP: + description: clientIP contains the configurations of + Client IP based session affinity. + properties: + timeoutSeconds: + description: timeoutSeconds specifies the seconds + of ClientIP type session sticky time. The value + must be >0 && <=86400(for 1 day) if ServiceAffinity + == "ClientIP". Default value is 10800(for 3 hours). + format: int32 + type: integer + type: object + type: object + type: + description: 'type determines how the Service is exposed. + Defaults to ClusterIP. Valid options are ExternalName, + ClusterIP, NodePort, and LoadBalancer. "ExternalName" + maps to the specified externalName. "ClusterIP" allocates + a cluster-internal IP address for load-balancing to endpoints. + Endpoints are determined by the selector or if that is + not specified, by manual construction of an Endpoints + object. If clusterIP is "None", no virtual IP is allocated + and the endpoints are published as a set of endpoints + rather than a stable IP. 
"NodePort" builds on ClusterIP + and allocates a port on every node which routes to the + clusterIP. "LoadBalancer" builds on NodePort and creates + an external load-balancer (if supported in the current + cloud) which routes to the clusterIP. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types' + type: string + type: object + type: object + tls: + description: TLS defines options for configuring TLS for HTTP. + properties: + certificate: + description: "Certificate is a reference to a Kubernetes secret + that contains the certificate and private key for enabling + TLS. The referenced secret should contain the following: \n + - `ca.crt`: The certificate authority (optional). - `tls.crt`: + The certificate (or a chain). - `tls.key`: The private key + to the first certificate in the certificate chain." + properties: + secretName: + description: SecretName is the name of the secret. + type: string + type: object + selfSignedCertificate: + description: SelfSignedCertificate allows configuring the self-signed + certificate generated by the operator. + properties: + disabled: + description: Disabled indicates that the provisioning of + the self-signed certificate should be disabled. + type: boolean + subjectAltNames: + description: SubjectAlternativeNames is a list of SANs to + include in the generated HTTP TLS certificate. + items: + description: SubjectAlternativeName represents a SAN entry + in a x509 certificate. + properties: + dns: + description: DNS is the DNS name of the subject. + type: string + ip: + description: IP is the IP address of the subject. + type: string + type: object + type: array + type: object + type: object + type: object + image: + description: Image is the APM Server Docker image to deploy. + type: string + podTemplate: + description: PodTemplate provides customisation options (labels, annotations, + affinity rules, resource requests, and so on) for the APM Server pods.
+ type: object + secureSettings: + description: 'SecureSettings is a list of references to Kubernetes secrets + containing sensitive configuration options for APM Server. See: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-apm-server.html#k8s-apm-secure-settings' + items: + description: SecretSource defines a data source based on a Kubernetes + Secret. + properties: + entries: + description: Entries define how to project each key-value pair + in the secret to filesystem paths. If not defined, all keys + will be projected to similarly named paths in the filesystem. + If defined, only the specified keys will be projected to the + corresponding paths. + items: + description: KeyToPath defines how to map a key in a Secret + object to a filesystem path. + properties: + key: + description: Key is the key contained in the secret. + type: string + path: + description: Path is the relative file path to map the key + to. Path must not be an absolute file path and must not + contain any ".." components. + type: string + required: + - key + type: object + type: array + secretName: + description: SecretName is the name of the secret. + type: string + required: + - secretName + type: object + type: array + version: + description: Version of the APM Server. + type: string + type: object + status: + description: ApmServerStatus defines the observed state of ApmServer + properties: + associationStatus: + description: Association is the status of any auto-linking to Elasticsearch + clusters. + type: string + availableNodes: + format: int32 + type: integer + health: + description: ApmServerHealth expresses the status of the Apm Server + instances. + type: string + secretTokenSecret: + description: SecretTokenSecretName is the name of the Secret that contains + the secret token + type: string + service: + description: ExternalService is the name of the service the agents should + connect to. 
+ type: string + type: object + version: v1 + versions: + - name: v1 + served: true + storage: true + - name: v1beta1 + served: true + storage: false + - name: v1alpha1 + served: false + storage: false +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] +--- +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + creationTimestamp: null + name: elasticsearches.elasticsearch.k8s.elastic.co +spec: + additionalPrinterColumns: + - JSONPath: .status.health + name: health + type: string + - JSONPath: .status.availableNodes + description: Available nodes + name: nodes + type: integer + - JSONPath: .spec.version + description: Elasticsearch version + name: version + type: string + - JSONPath: .status.phase + name: phase + type: string + - JSONPath: .metadata.creationTimestamp + name: age + type: date + group: elasticsearch.k8s.elastic.co + names: + categories: + - elastic + kind: Elasticsearch + listKind: ElasticsearchList + plural: elasticsearches + shortNames: + - es + singular: elasticsearch + scope: Namespaced + subresources: + status: {} + validation: + openAPIV3Schema: + description: Elasticsearch represents an Elasticsearch resource in a Kubernetes + cluster. + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: ElasticsearchSpec holds the specification of an Elasticsearch + cluster. + properties: + http: + description: HTTP holds HTTP layer settings for Elasticsearch. + properties: + service: + description: Service defines the template for the associated Kubernetes + Service object. + properties: + metadata: + description: ObjectMeta is the metadata of the service. The + name and namespace provided here are managed by ECK and will + be ignored. + type: object + spec: + description: Spec is the specification of the service. + properties: + clusterIP: + description: 'clusterIP is the IP address of the service + and is usually assigned randomly by the master. If an + address is specified manually and is not in use by others, + it will be allocated to the service; otherwise, creation + of the service will fail. This field can not be changed + through updates. Valid values are "None", empty string + (""), or a valid IP address. "None" can be specified for + headless services when proxying is not required. Only + applies to types ClusterIP, NodePort, and LoadBalancer. + Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + type: string + externalIPs: + description: externalIPs is a list of IP addresses for which + nodes in the cluster will also accept traffic for this + service. These IPs are not managed by Kubernetes. The + user is responsible for ensuring that traffic arrives + at a node with this IP. A common example is external + load-balancers that are not part of the Kubernetes system. + items: + type: string + type: array + externalName: + description: externalName is the external reference that + kubedns or equivalent will return as a CNAME record for + this service. No proxying will be involved. 
Must be a + valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) + and requires Type to be ExternalName. + type: string + externalTrafficPolicy: + description: externalTrafficPolicy denotes if this Service + desires to route external traffic to node-local or cluster-wide + endpoints. "Local" preserves the client source IP and + avoids a second hop for LoadBalancer and NodePort type + services, but risks potentially imbalanced traffic spreading. + "Cluster" obscures the client source IP and may cause + a second hop to another node, but should have good overall + load-spreading. + type: string + healthCheckNodePort: + description: healthCheckNodePort specifies the healthcheck + nodePort for the service. If not specified, HealthCheckNodePort + is created by the service api backend with the allocated + nodePort. Will use user-specified nodePort value if specified + by the client. Only takes effect when Type is set to LoadBalancer + and ExternalTrafficPolicy is set to Local. + format: int32 + type: integer + ipFamily: + description: ipFamily specifies whether this Service has + a preference for a particular IP family (e.g. IPv4 vs. + IPv6). If a specific IP family is requested, the clusterIP + field will be allocated from that family, if it is available + in the cluster. If no IP family is requested, the cluster's + primary IP family will be used. Other IP fields (loadBalancerIP, + loadBalancerSourceRanges, externalIPs) and controllers + which allocate external load-balancers should use the + same IP family. Endpoints for this Service will be of + this family. This field is immutable after creation. + Assigning a ServiceIPFamily not available in the cluster + (e.g. IPv6 in IPv4 only cluster) is an error condition + and will fail during clusterIP assignment. + type: string + loadBalancerIP: + description: 'Only applies to Service Type: LoadBalancer + LoadBalancer will get created with the IP specified in + this field.
This feature depends on whether the underlying + cloud-provider supports specifying the loadBalancerIP + when a load balancer is created. This field will be ignored + if the cloud-provider does not support the feature.' + type: string + loadBalancerSourceRanges: + description: 'If specified and supported by the platform, + traffic through the cloud-provider + load-balancer will be restricted to the specified client + IPs. This field will be ignored if the cloud-provider + does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/' + items: + type: string + type: array + ports: + description: 'The list of ports that are exposed by this + service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + items: + description: ServicePort contains information on service's + port. + properties: + name: + description: The name of this port within the service. + This must be a DNS_LABEL. All ports within a ServiceSpec + must have unique names. When considering the endpoints + for a Service, this must match the 'name' field + in the EndpointPort. Optional if only one ServicePort + is defined on this service. + type: string + nodePort: + description: 'The port on each node on which this + service is exposed when type=NodePort or LoadBalancer. + Usually assigned by the system. If specified, it + will be allocated to the service if unused or else + creation of the service will fail. Default is to + auto-allocate a port if the ServiceType of this + Service requires one. More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport' + format: int32 + type: integer + port: + description: The port that will be exposed by this + service. + format: int32 + type: integer + protocol: + description: The IP protocol for this port. Supports + "TCP", "UDP", and "SCTP". Default is TCP.
+ type: string + targetPort: + anyOf: + - type: string + - type: integer + description: 'Number or name of the port to access + on the pods targeted by the service. Number must + be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + If this is a string, it will be looked up as a named + port in the target Pod''s container ports. If this + is not specified, the value of the ''port'' field + is used (an identity map). This field is ignored + for services with clusterIP=None, and should be + omitted or set equal to the ''port'' field. More + info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service' + required: + - port + type: object + type: array + publishNotReadyAddresses: + description: publishNotReadyAddresses, when set to true, + indicates that DNS implementations must publish the notReadyAddresses + of subsets for the Endpoints associated with the Service. + The default value is false. The primary use case for setting + this field is to use a StatefulSet's Headless Service + to propagate SRV records for its Pods without respect + to their readiness for purpose of peer discovery. + type: boolean + selector: + additionalProperties: + type: string + description: 'Route service traffic to pods with label keys + and values matching this selector. If empty or not present, + the service is assumed to have an external process managing + its endpoints, which Kubernetes will not modify. Only + applies to types ClusterIP, NodePort, and LoadBalancer. + Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/' + type: object + sessionAffinity: + description: 'Supports "ClientIP" and "None". Used to maintain + session affinity. Enable client IP based session affinity. + Must be ClientIP or None. Defaults to None. 
More info: + https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + type: string + sessionAffinityConfig: + description: sessionAffinityConfig contains the configurations + of session affinity. + properties: + clientIP: + description: clientIP contains the configurations of + Client IP based session affinity. + properties: + timeoutSeconds: + description: timeoutSeconds specifies the seconds + of ClientIP type session sticky time. The value + must be >0 && <=86400(for 1 day) if ServiceAffinity + == "ClientIP". Default value is 10800(for 3 hours). + format: int32 + type: integer + type: object + type: object + type: + description: 'type determines how the Service is exposed. + Defaults to ClusterIP. Valid options are ExternalName, + ClusterIP, NodePort, and LoadBalancer. "ExternalName" + maps to the specified externalName. "ClusterIP" allocates + a cluster-internal IP address for load-balancing to endpoints. + Endpoints are determined by the selector or if that is + not specified, by manual construction of an Endpoints + object. If clusterIP is "None", no virtual IP is allocated + and the endpoints are published as a set of endpoints + rather than a stable IP. "NodePort" builds on ClusterIP + and allocates a port on every node which routes to the + clusterIP. "LoadBalancer" builds on NodePort and creates + an external load-balancer (if supported in the current + cloud) which routes to the clusterIP. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types' + type: string + type: object + type: object + tls: + description: TLS defines options for configuring TLS for HTTP. + properties: + certificate: + description: "Certificate is a reference to a Kubernetes secret + that contains the certificate and private key for enabling + TLS. The referenced secret should contain the following: \n + - `ca.crt`: The certificate authority (optional). 
- `tls.crt`: + The certificate (or a chain). - `tls.key`: The private key + to the first certificate in the certificate chain." + properties: + secretName: + description: SecretName is the name of the secret. + type: string + type: object + selfSignedCertificate: + description: SelfSignedCertificate allows configuring the self-signed + certificate generated by the operator. + properties: + disabled: + description: Disabled indicates that the provisioning of + the self-signed certificate should be disabled. + type: boolean + subjectAltNames: + description: SubjectAlternativeNames is a list of SANs to + include in the generated HTTP TLS certificate. + items: + description: SubjectAlternativeName represents a SAN entry + in a x509 certificate. + properties: + dns: + description: DNS is the DNS name of the subject. + type: string + ip: + description: IP is the IP address of the subject. + type: string + type: object + type: array + type: object + type: object + type: object + image: + description: Image is the Elasticsearch Docker image to deploy. + type: string + nodeSets: + description: 'NodeSets allow specifying groups of Elasticsearch nodes + sharing the same configuration and Pod templates. See: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestration.html' + items: + description: NodeSet is the specification for a group of Elasticsearch + nodes sharing the same configuration and a Pod template. + properties: + config: + description: Config holds the Elasticsearch configuration. + type: object + count: + description: Count of Elasticsearch nodes to deploy. + format: int32 + minimum: 1 + type: integer + name: + description: Name of this set of nodes. Becomes a part of the + Elasticsearch node.name setting. + maxLength: 23 + pattern: '[a-zA-Z0-9-]+' + type: string + podTemplate: + description: PodTemplate provides customisation options (labels, + annotations, affinity rules, resource requests, and so on) for + the Pods belonging to this NodeSet.
+ type: object + volumeClaimTemplates: + description: 'VolumeClaimTemplates is a list of persistent volume + claims to be used by each Pod in this NodeSet. Every claim in + this list must have a matching volumeMount in one of the containers + defined in the PodTemplate. Items defined here take precedence + over any default claims added by the operator with the same + name. See: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html' + items: + description: PersistentVolumeClaim is a user's request for and + claim to a persistent volume + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of + this representation of an object. Servers should convert + recognized schemas to the latest internal value, and may + reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST + resource this object represents. Servers may infer this + from the endpoint the client submits requests to. Cannot + be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata' + type: object + spec: + description: 'Spec defines the desired characteristics of + a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims' + properties: + accessModes: + description: 'AccessModes contains the desired access + modes the volume should have. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1' + items: + type: string + type: array + dataSource: + description: This field requires the VolumeSnapshotDataSource + alpha feature gate to be enabled and currently VolumeSnapshot + is the only supported data source. If the provisioner + can support VolumeSnapshot data source, it will create + a new volume and data will be restored to the volume + at the same time. If the provisioner does not support + VolumeSnapshot data source, volume will not be created + and the failure will be reported as an event. In the + future, we plan to support more data source types + and the behavior of the provisioner may change. + properties: + apiGroup: + description: APIGroup is the group for the resource + being referenced. If APIGroup is not specified, + the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being + referenced + type: string + name: + description: Name is the name of resource being + referenced + type: string + required: + - kind + - name + type: object + resources: + description: 'Resources represents the minimum resources + the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources' + properties: + limits: + additionalProperties: + type: string + description: 'Limits describes the maximum amount + of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/' + type: object + requests: + additionalProperties: + type: string + description: 'Requests describes the minimum amount + of compute resources required. If Requests is + omitted for a container, it defaults to Limits + if that is explicitly specified, otherwise to + an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/' + type: object + type: object + selector: + description: A label query over volumes to consider + for binding. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: A label selector requirement is a + selector that contains values, a key, and an + operator that relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: operator represents a key's relationship + to a set of values. Valid operators are + In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string + values. If the operator is In or NotIn, + the values array must be non-empty. If the + operator is Exists or DoesNotExist, the + values array must be empty. This array is + replaced during a strategic merge patch. + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + description: matchLabels is a map of {key,value} + pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, + whose key field is "key", the operator is "In", + and the values array contains only "value". The + requirements are ANDed. + type: object + type: object + storageClassName: + description: 'Name of the StorageClass required by the + claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1' + type: string + volumeMode: + description: volumeMode defines what type of volume + is required by the claim. Value of Filesystem is implied + when not included in claim spec. This is a beta feature. + type: string + volumeName: + description: VolumeName is the binding reference to + the PersistentVolume backing this claim. 
+ type: string + type: object + status: + description: 'Status represents the current information/status + of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims' + properties: + accessModes: + description: 'AccessModes contains the actual access + modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1' + items: + type: string + type: array + capacity: + additionalProperties: + type: string + description: Represents the actual resources of the + underlying volume. + type: object + conditions: + description: Current Condition of persistent volume + claim. If underlying persistent volume is being resized + then the Condition will be set to 'ResizeStarted'. + items: + description: PersistentVolumeClaimCondition contains + details about state of pvc + properties: + lastProbeTime: + description: Last time we probed the condition. + format: date-time + type: string + lastTransitionTime: + description: Last time the condition transitioned + from one status to another. + format: date-time + type: string + message: + description: Human-readable message indicating + details about last transition. + type: string + reason: + description: Unique, this should be a short, machine + understandable string that gives the reason + for condition's last transition. If it reports + "ResizeStarted" that means the underlying persistent + volume is being resized. + type: string + status: + type: string + type: + description: PersistentVolumeClaimConditionType + is a valid value of PersistentVolumeClaimCondition.Type + type: string + required: + - status + - type + type: object + type: array + phase: + description: Phase represents the current phase of PersistentVolumeClaim.
+ type: string + type: object + type: object + type: array + required: + - count + - name + type: object + minItems: 1 + type: array + podDisruptionBudget: + description: PodDisruptionBudget provides access to the default pod + disruption budget for the Elasticsearch cluster. The default budget + selects all cluster pods and sets `maxUnavailable` to 1. To disable, + set `PodDisruptionBudget` to the empty value (`{}` in YAML). + properties: + metadata: + description: ObjectMeta is the metadata of the PDB. The name and + namespace provided here are managed by ECK and will be ignored. + type: object + spec: + description: Spec is the specification of the PDB. + properties: + maxUnavailable: + anyOf: + - type: string + - type: integer + description: An eviction is allowed if at most "maxUnavailable" + pods selected by "selector" are unavailable after the eviction, + i.e. even in absence of the evicted pod. For example, one + can prevent all voluntary evictions by specifying 0. This + is a mutually exclusive setting with "minAvailable". + minAvailable: + anyOf: + - type: string + - type: integer + description: An eviction is allowed if at least "minAvailable" + pods selected by "selector" will still be available after + the eviction, i.e. even in the absence of the evicted pod. So + for example you can prevent all voluntary evictions by specifying + "100%". + selector: + description: Label query over pods whose evictions are managed + by the disruption budget. + properties: + matchExpressions: + description: matchExpressions is a list of label selector + requirements. The requirements are ANDed. + items: + description: A label selector requirement is a selector + that contains values, a key, and an operator that relates + the key and values. + properties: + key: + description: key is the label key that the selector + applies to. + type: string + operator: + description: operator represents a key's relationship + to a set of values. 
Valid operators are In, NotIn, + Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. + If the operator is In or NotIn, the values array + must be non-empty. If the operator is Exists or + DoesNotExist, the values array must be empty. This + array is replaced during a strategic merge patch. + items: + type: string + type: array + required: + - key + - operator + type: object + type: array + matchLabels: + additionalProperties: + type: string + description: matchLabels is a map of {key,value} pairs. + A single {key,value} in the matchLabels map is equivalent + to an element of matchExpressions, whose key field is + "key", the operator is "In", and the values array contains + only "value". The requirements are ANDed. + type: object + type: object + type: object + type: object + secureSettings: + description: 'SecureSettings is a list of references to Kubernetes secrets + containing sensitive configuration options for Elasticsearch. See: + https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-es-secure-settings.html' + items: + description: SecretSource defines a data source based on a Kubernetes + Secret. + properties: + entries: + description: Entries define how to project each key-value pair + in the secret to filesystem paths. If not defined, all keys + will be projected to similarly named paths in the filesystem. + If defined, only the specified keys will be projected to the + corresponding paths. + items: + description: KeyToPath defines how to map a key in a Secret + object to a filesystem path. + properties: + key: + description: Key is the key contained in the secret. + type: string + path: + description: Path is the relative file path to map the key + to. Path must not be an absolute file path and must not + contain any ".." components. + type: string + required: + - key + type: object + type: array + secretName: + description: SecretName is the name of the secret. 
+ type: string + required: + - secretName + type: object + type: array + updateStrategy: + description: UpdateStrategy specifies how updates to the cluster should + be performed. + properties: + changeBudget: + description: ChangeBudget defines the constraints to consider when + applying changes to the Elasticsearch cluster. + properties: + maxSurge: + description: MaxSurge is the maximum number of new pods that + can be created exceeding the original number of pods defined + in the specification. MaxSurge is only taken into consideration + when scaling up. Setting a negative value will disable the + restriction. Defaults to unbounded if not specified. + format: int32 + type: integer + maxUnavailable: + description: MaxUnavailable is the maximum number of pods that + can be unavailable (not ready) during the update due to circumstances + under the control of the operator. Setting a negative value + will disable this restriction. Defaults to 1 if not specified. + format: int32 + type: integer + type: object + type: object + version: + description: Version of Elasticsearch. + type: string + required: + - nodeSets + type: object + status: + description: ElasticsearchStatus defines the observed state of Elasticsearch + properties: + availableNodes: + format: int32 + type: integer + health: + description: ElasticsearchHealth is the health of the cluster as returned + by the health API. + type: string + phase: + description: ElasticsearchOrchestrationPhase is the phase Elasticsearch + is in from the controller point of view. 
+ type: string + type: object + version: v1 + versions: + - name: v1 + served: true + storage: true + - name: v1beta1 + served: true + storage: false + - name: v1alpha1 + served: false + storage: false +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] +--- +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + creationTimestamp: null + name: kibanas.kibana.k8s.elastic.co +spec: + additionalPrinterColumns: + - JSONPath: .status.health + name: health + type: string + - JSONPath: .status.availableNodes + description: Available nodes + name: nodes + type: integer + - JSONPath: .spec.version + description: Kibana version + name: version + type: string + - JSONPath: .metadata.creationTimestamp + name: age + type: date + group: kibana.k8s.elastic.co + names: + categories: + - elastic + kind: Kibana + listKind: KibanaList + plural: kibanas + shortNames: + - kb + singular: kibana + scope: Namespaced + subresources: + status: {} + validation: + openAPIV3Schema: + description: Kibana represents a Kibana resource in a Kubernetes cluster. + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: KibanaSpec holds the specification of a Kibana instance. + properties: + config: + description: 'Config holds the Kibana configuration. 
See: https://www.elastic.co/guide/en/kibana/current/settings.html' + type: object + count: + description: Count of Kibana instances to deploy. + format: int32 + type: integer + elasticsearchRef: + description: ElasticsearchRef is a reference to an Elasticsearch cluster + running in the same Kubernetes cluster. + properties: + name: + description: Name of the Kubernetes object. + type: string + namespace: + description: Namespace of the Kubernetes object. If empty, defaults + to the current namespace. + type: string + required: + - name + type: object + http: + description: HTTP holds the HTTP layer configuration for Kibana. + properties: + service: + description: Service defines the template for the associated Kubernetes + Service object. + properties: + metadata: + description: ObjectMeta is the metadata of the service. The + name and namespace provided here are managed by ECK and will + be ignored. + type: object + spec: + description: Spec is the specification of the service. + properties: + clusterIP: + description: 'clusterIP is the IP address of the service + and is usually assigned randomly by the master. If an + address is specified manually and is not in use by others, + it will be allocated to the service; otherwise, creation + of the service will fail. This field can not be changed + through updates. Valid values are "None", empty string + (""), or a valid IP address. "None" can be specified for + headless services when proxying is not required. Only + applies to types ClusterIP, NodePort, and LoadBalancer. + Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + type: string + externalIPs: + description: externalIPs is a list of IP addresses for which + nodes in the cluster will also accept traffic for this + service. These IPs are not managed by Kubernetes. The + user is responsible for ensuring that traffic arrives + at a node with this IP. 
A common example is external + load-balancers that are not part of the Kubernetes system. + items: + type: string + type: array + externalName: + description: externalName is the external reference that + kubedns or equivalent will return as a CNAME record for + this service. No proxying will be involved. Must be a + valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) + and requires Type to be ExternalName. + type: string + externalTrafficPolicy: + description: externalTrafficPolicy denotes if this Service + desires to route external traffic to node-local or cluster-wide + endpoints. "Local" preserves the client source IP and + avoids a second hop for LoadBalancer and Nodeport type + services, but risks potentially imbalanced traffic spreading. + "Cluster" obscures the client source IP and may cause + a second hop to another node, but should have good overall + load-spreading. + type: string + healthCheckNodePort: + description: healthCheckNodePort specifies the healthcheck + nodePort for the service. If not specified, HealthCheckNodePort + is created by the service api backend with the allocated + nodePort. Will use user-specified nodePort value if specified + by the client. Only effects when Type is set to LoadBalancer + and ExternalTrafficPolicy is set to Local. + format: int32 + type: integer + ipFamily: + description: ipFamily specifies whether this Service has + a preference for a particular IP family (e.g. IPv4 vs. + IPv6). If a specific IP family is requested, the clusterIP + field will be allocated from that family, if it is available + in the cluster. If no IP family is requested, the cluster's + primary IP family will be used. Other IP fields (loadBalancerIP, + loadBalancerSourceRanges, externalIPs) and controllers + which allocate external load-balancers should use the + same IP family. Endpoints for this Service will be of + this family. This field is immutable after creation. + Assigning a ServiceIPFamily not available in the cluster + (e.g. 
IPv6 in IPv4 only cluster) is an error condition + and will fail during clusterIP assignment. + type: string + loadBalancerIP: + description: 'Only applies to Service Type: LoadBalancer + LoadBalancer will get created with the IP specified in + this field. This feature depends on whether the underlying + cloud-provider supports specifying the loadBalancerIP + when a load balancer is created. This field will be ignored + if the cloud-provider does not support the feature.' + type: string + loadBalancerSourceRanges: + description: 'If specified and supported by the platform, + this will restrict traffic through the cloud-provider + load-balancer will be restricted to the specified client + IPs. This field will be ignored if the cloud-provider + does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/' + items: + type: string + type: array + ports: + description: 'The list of ports that are exposed by this + service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + items: + description: ServicePort contains information on service's + port. + properties: + name: + description: The name of this port within the service. + This must be a DNS_LABEL. All ports within a ServiceSpec + must have unique names. When considering the endpoints + for a Service, this must match the 'name' field + in the EndpointPort. Optional if only one ServicePort + is defined on this service. + type: string + nodePort: + description: 'The port on each node on which this + service is exposed when type=NodePort or LoadBalancer. + Usually assigned by the system. If specified, it + will be allocated to the service if unused or else + creation of the service will fail. Default is to + auto-allocate a port if the ServiceType of this + Service requires one. 
More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport' + format: int32 + type: integer + port: + description: The port that will be exposed by this + service. + format: int32 + type: integer + protocol: + description: The IP protocol for this port. Supports + "TCP", "UDP", and "SCTP". Default is TCP. + type: string + targetPort: + anyOf: + - type: string + - type: integer + description: 'Number or name of the port to access + on the pods targeted by the service. Number must + be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + If this is a string, it will be looked up as a named + port in the target Pod''s container ports. If this + is not specified, the value of the ''port'' field + is used (an identity map). This field is ignored + for services with clusterIP=None, and should be + omitted or set equal to the ''port'' field. More + info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service' + required: + - port + type: object + type: array + publishNotReadyAddresses: + description: publishNotReadyAddresses, when set to true, + indicates that DNS implementations must publish the notReadyAddresses + of subsets for the Endpoints associated with the Service. + The default value is false. The primary use case for setting + this field is to use a StatefulSet's Headless Service + to propagate SRV records for its Pods without respect + to their readiness for purpose of peer discovery. + type: boolean + selector: + additionalProperties: + type: string + description: 'Route service traffic to pods with label keys + and values matching this selector. If empty or not present, + the service is assumed to have an external process managing + its endpoints, which Kubernetes will not modify. Only + applies to types ClusterIP, NodePort, and LoadBalancer. + Ignored if type is ExternalName. 
More info: https://kubernetes.io/docs/concepts/services-networking/service/' + type: object + sessionAffinity: + description: 'Supports "ClientIP" and "None". Used to maintain + session affinity. Enable client IP based session affinity. + Must be ClientIP or None. Defaults to None. More info: + https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies' + type: string + sessionAffinityConfig: + description: sessionAffinityConfig contains the configurations + of session affinity. + properties: + clientIP: + description: clientIP contains the configurations of + Client IP based session affinity. + properties: + timeoutSeconds: + description: timeoutSeconds specifies the seconds + of ClientIP type session sticky time. The value + must be >0 && <=86400(for 1 day) if ServiceAffinity + == "ClientIP". Default value is 10800(for 3 hours). + format: int32 + type: integer + type: object + type: object + type: + description: 'type determines how the Service is exposed. + Defaults to ClusterIP. Valid options are ExternalName, + ClusterIP, NodePort, and LoadBalancer. "ExternalName" + maps to the specified externalName. "ClusterIP" allocates + a cluster-internal IP address for load-balancing to endpoints. + Endpoints are determined by the selector or if that is + not specified, by manual construction of an Endpoints + object. If clusterIP is "None", no virtual IP is allocated + and the endpoints are published as a set of endpoints + rather than a stable IP. "NodePort" builds on ClusterIP + and allocates a port on every node which routes to the + clusterIP. "LoadBalancer" builds on NodePort and creates + an external load-balancer (if supported in the current + cloud) which routes to the clusterIP. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types' + type: string + type: object + type: object + tls: + description: TLS defines options for configuring TLS for HTTP. 
+ properties: + certificate: + description: "Certificate is a reference to a Kubernetes secret + that contains the certificate and private key for enabling + TLS. The referenced secret should contain the following: \n + - `ca.crt`: The certificate authority (optional). - `tls.crt`: + The certificate (or a chain). - `tls.key`: The private key + to the first certificate in the certificate chain." + properties: + secretName: + description: SecretName is the name of the secret. + type: string + type: object + selfSignedCertificate: + description: SelfSignedCertificate allows configuring the self-signed + certificate generated by the operator. + properties: + disabled: + description: Disabled indicates that the provisioning of + the self-signed certifcate should be disabled. + type: boolean + subjectAltNames: + description: SubjectAlternativeNames is a list of SANs to + include in the generated HTTP TLS certificate. + items: + description: SubjectAlternativeName represents a SAN entry + in a x509 certificate. + properties: + dns: + description: DNS is the DNS name of the subject. + type: string + ip: + description: IP is the IP address of the subject. + type: string + type: object + type: array + type: object + type: object + type: object + image: + description: Image is the Kibana Docker image to deploy. + type: string + podTemplate: + description: PodTemplate provides customisation options (labels, annotations, + affinity rules, resource requests, and so on) for the Kibana pods + type: object + secureSettings: + description: 'SecureSettings is a list of references to Kubernetes secrets + containing sensitive configuration options for Kibana. See: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana.html#k8s-kibana-secure-settings' + items: + description: SecretSource defines a data source based on a Kubernetes + Secret. + properties: + entries: + description: Entries define how to project each key-value pair + in the secret to filesystem paths. 
If not defined, all keys + will be projected to similarly named paths in the filesystem. + If defined, only the specified keys will be projected to the + corresponding paths. + items: + description: KeyToPath defines how to map a key in a Secret + object to a filesystem path. + properties: + key: + description: Key is the key contained in the secret. + type: string + path: + description: Path is the relative file path to map the key + to. Path must not be an absolute file path and must not + contain any ".." components. + type: string + required: + - key + type: object + type: array + secretName: + description: SecretName is the name of the secret. + type: string + required: + - secretName + type: object + type: array + version: + description: Version of Kibana. + type: string + type: object + status: + description: KibanaStatus defines the observed state of Kibana + properties: + associationStatus: + description: AssociationStatus is the status of an association resource. + type: string + availableNodes: + format: int32 + type: integer + health: + description: KibanaHealth expresses the status of the Kibana instances. 
+ type: string + type: object + version: v1 + versions: + - name: v1 + served: true + storage: true + - name: v1beta1 + served: true + storage: false + - name: v1alpha1 + served: false + storage: false +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: elastic-operator +rules: +- apiGroups: + - "" + resources: + - pods + - endpoints + - events + - persistentvolumeclaims + - secrets + - services + - configmaps + verbs: + - get + - list + - watch + - create + - update + - patch + - delete +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - get + - list + - watch + - create + - update + - patch + - delete +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - get + - list + - watch + - create + - update + - patch + - delete +- apiGroups: + - elasticsearch.k8s.elastic.co + resources: + - elasticsearches + - elasticsearches/status + - elasticsearches/finalizers + - enterpriselicenses + - enterpriselicenses/status + verbs: + - get + - list + - watch + - create + - update + - patch + - delete +- apiGroups: + - kibana.k8s.elastic.co + resources: + - kibanas + - kibanas/status + - kibanas/finalizers + verbs: + - get + - list + - watch + - create + - update + - patch + - delete +- apiGroups: + - apm.k8s.elastic.co + resources: + - apmservers + - apmservers/status + - apmservers/finalizers + verbs: + - get + - list + - watch + - create + - update + - patch + - delete +- apiGroups: + - associations.k8s.elastic.co + resources: + - apmserverelasticsearchassociations + - apmserverelasticsearchassociations/status + verbs: + - get + - list + - watch + - create + - update + - patch + - delete +- apiGroups: + - admissionregistration.k8s.io + resources: + - mutatingwebhookconfigurations + - validatingwebhookconfigurations + verbs: + - get + - list + - watch + - create + - update + - patch + - delete + +--- 
+apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: elastic-operator +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: elastic-operator +subjects: +- kind: ServiceAccount + name: elastic-operator + namespace: elastic-system + +--- +apiVersion: v1 +kind: Namespace +metadata: + name: elastic-system + +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: elastic-operator + namespace: elastic-system + labels: + control-plane: elastic-operator +spec: + selector: + matchLabels: + control-plane: elastic-operator + serviceName: elastic-operator + template: + metadata: + labels: + control-plane: elastic-operator + spec: + serviceAccountName: elastic-operator + containers: + - image: docker.elastic.co/eck/eck-operator:1.0.0 + name: manager + args: ["manager", "--operator-roles", "all", "--log-verbosity=0"] + env: + - name: OPERATOR_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: WEBHOOK_SECRET + value: elastic-webhook-server-cert + - name: WEBHOOK_PODS_LABEL + value: elastic-operator + - name: OPERATOR_IMAGE + value: docker.elastic.co/eck/eck-operator:1.0.0 + resources: + limits: + cpu: 1 + memory: 150Mi + requests: + cpu: 100m + memory: 50Mi + ports: + - containerPort: 9443 + name: webhook-server + protocol: TCP + volumeMounts: + - mountPath: /tmp/k8s-webhook-server/serving-certs + name: cert + readOnly: true + terminationGracePeriodSeconds: 10 + volumes: + - name: cert + secret: + defaultMode: 420 + secretName: elastic-webhook-server-cert + +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: elastic-operator + namespace: elastic-system + +--- +apiVersion: admissionregistration.k8s.io/v1beta1 +kind: ValidatingWebhookConfiguration +metadata: + name: elastic-webhook.k8s.elastic.co +webhooks: + - clientConfig: + caBundle: Cg== + service: + name: elastic-webhook-server + namespace: elastic-system + path: /validate-elasticsearch-k8s-elastic-co-v1-elasticsearch + 
failurePolicy: Ignore + name: elastic-es-validation-v1.k8s.elastic.co + rules: + - apiGroups: + - elasticsearch.k8s.elastic.co + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + resources: + - elasticsearches + - clientConfig: + caBundle: Cg== + service: + name: elastic-webhook-server + namespace: elastic-system + path: /validate-elasticsearch-k8s-elastic-co-v1beta1-elasticsearch + failurePolicy: Ignore + name: elastic-es-validation-v1beta1.k8s.elastic.co + rules: + - apiGroups: + - elasticsearch.k8s.elastic.co + apiVersions: + - v1beta1 + operations: + - CREATE + - UPDATE + resources: + - elasticsearches +--- +apiVersion: v1 +kind: Service +metadata: + name: elastic-webhook-server + namespace: elastic-system +spec: + ports: + - port: 443 + targetPort: 9443 + selector: + control-plane: elastic-operator +--- +apiVersion: v1 +kind: Secret +metadata: + name: elastic-webhook-server-cert + namespace: elastic-system diff --git a/slides/k8s/operators.md b/slides/k8s/operators.md index b43f275f..5280bbf7 100644 --- a/slides/k8s/operators.md +++ b/slides/k8s/operators.md @@ -121,7 +121,7 @@ Examples: ## One operator in action -- We will install the UPMC Enterprises ElasticSearch operator +- We will install [Elastic Cloud on Kubernetes](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html), an ElasticSearch operator - This operator requires PersistentVolumes @@ -206,51 +206,92 @@ Now, the StorageClass should have `(default)` next to its name. 
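The `(default)` marker comes from a standard annotation on the StorageClass object; a class marked as the cluster default looks roughly like this sketch (the name and provisioner are placeholders, not taken from the training environment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                      # placeholder name; check `kubectl get storageclass`
  annotations:
    # This annotation is what makes kubectl display "(default)",
    # and makes PVCs without an explicit class use this StorageClass.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner   # placeholder provisioner
```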
## Install the ElasticSearch operator -- The operator needs: +- The operator provides: - - a Deployment for its controller + - a few CustomResourceDefinitions + - a Namespace for its other resources + - a ValidatingWebhookConfiguration for type checking + - a StatefulSet for its controller and webhook code - a ServiceAccount, ClusterRole, ClusterRoleBinding for permissions - - a Namespace -- We have grouped all the definitions for these resources in a YAML file +- All these resources are grouped in a convenient YAML file .exercise[ - Install the operator: ```bash - kubectl apply -f ~/container.training/k8s/elasticsearch-operator.yaml + kubectl apply -f ~/container.training/k8s/eck-operator.yaml ``` ] --- -## Wait for the operator to be ready +## Check our new custom resources -- Some operators require to create their CRDs separately - -- This operator will create its CRD itself - - (i.e. the CRD is not listed in the YAML that we applied earlier) +- Let's see which CRDs were created .exercise[ -- Wait until the `elasticsearchclusters` CRD shows up: +- List all CRDs: ```bash kubectl get crds ``` ] +This operator supports ElasticSearch, but also Kibana and APM. Cool! + +--- + +## Create the `eck-demo` namespace + +- For clarity, we will create everything in a new namespace, `eck-demo` + +- This namespace is hard-coded in the YAML files that we are going to use + +- We need to create that namespace + +.exercise[ + +- Create the `eck-demo` namespace: + ```bash + kubectl create namespace eck-demo + ``` + +- Switch to that namespace: + ```bash + kns eck-demo + ``` + +] + +--- + +class: extra-details + +## Can we use a different namespace? + +Yes, but then we need to update all the YAML manifests that we +are going to apply in the next slides. + +The `eck-demo` namespace is hard-coded in these YAML manifests. + +Why? + +Because when defining a ClusterRoleBinding that references a +ServiceAccount, we have to indicate in which namespace the +ServiceAccount is located. 
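To make that last point concrete, here is a hypothetical sketch of such a binding; the `subjects` entry must spell out the ServiceAccount's namespace, which is why the manifests pin `eck-demo` (all names below are illustrative, not taken from the actual files):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eck-demo-binding              # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: some-cluster-role             # illustrative name
subjects:
- kind: ServiceAccount
  name: some-service-account          # illustrative name
  namespace: eck-demo                 # the ServiceAccount's namespace must be spelled out here
```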
+
 ---
 
 ## Create an ElasticSearch resource
 
-- We can now create a resource with `kind: ElasticsearchCluster`
+- We can now create a resource with `kind: Elasticsearch`
 
 - The YAML for that resource will specify all the desired parameters:
 
-  - how many nodes do we want of each type (client, master, data)
+  - how many nodes we want
   - image to use
   - add-ons (kibana, cerebro, ...)
   - whether to use TLS or not
@@ -260,7 +301,7 @@ Now, the StorageClass should have `(default)` next to its name.
 
 - Create our ElasticSearch cluster:
   ```bash
-  kubectl apply -f ~/container.training/k8s/elasticsearch-cluster.yaml
+  kubectl apply -f ~/container.training/k8s/eck-elasticsearch.yaml
   ```
 
 ]
@@ -269,49 +310,88 @@ Now, the StorageClass should have `(default)` next to its name.
 
 ## Operator in action
 
-- Over the next minutes, the operator will create:
+- Over the next minutes, the operator will create our ES cluster
 
-  - StatefulSets (one for master nodes, one for data nodes)
-
-  - Deployments (for client nodes; and for add-ons like cerebro and kibana)
-
-  - Services (for all these pods)
+- It will report our cluster status through the CRD
 
 .exercise[
 
-- Wait for all the StatefulSets to be fully up and running:
+- Check the logs of the operator:
   ```bash
-  kubectl get statefulsets -w
+  stern --namespace=elastic-system operator
   ```
+
+- Watch the status of the cluster through the CRD:
+  ```bash
+  kubectl get es -w
+  ```
+
 ]
 
 ---
 
 ## Connecting to our cluster
 
-- Since connecting directly to the ElasticSearch API is a bit raw,
-
we'll connect to the cerebro frontend instead +- It's not easy to use the ElasticSearch API from the shell + +- But let's check at least if ElasticSearch is up! .exercise[ -- Edit the cerebro service to change its type from ClusterIP to NodePort: +- Get the ClusterIP of our ES instance: ```bash - kubectl patch svc cerebro-es -p "spec: { type: NodePort }" + kubectl get services ``` -- Retrieve the NodePort that was allocated: +- Issue a request with `curl`: ```bash - kubectl get svc cerebro-es + curl http://`CLUSTERIP`:9200 ``` -- Connect to that port with a browser - ] +We get an authentication error. Our cluster is protected! + --- -## (Bonus) Setup filebeat +## Obtaining the credentials + +- The operator creates a user named `elastic` + +- It generates a random password and stores it in a Secret + +.exercise[ + +- Extract the password: + ```bash + kubectl get secret demo-es-elastic-user \ + -o go-template="{{ .data.elastic | base64decode }} " + ``` + +- Use it to connect to the API: + ```bash + curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200 + ``` + +] + +We should see a JSON payload with the `"You Know, for Search"` tagline. + +--- + +## Sending data to the cluster - Let's send some data to our brand new ElasticSearch cluster! @@ -321,22 +401,170 @@ Now, the StorageClass should have `(default)` next to its name. - Deploy filebeat: ```bash - kubectl apply -f ~/container.training/k8s/filebeat.yaml + kubectl apply -f ~/container.training/k8s/eck-filebeat.yaml + ``` + +- Wait until some pods are up: + ```bash + watch kubectl get pods -l k8s-app=filebeat + ``` + + + +- Check that a filebeat index was created: + ```bash + curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200/_cat/indices ``` ] -We should see at least one index being created in cerebro. +--- + +## Deploying an instance of Kibana + +- Kibana can visualize the logs injected by filebeat + +- The ECK operator can also manage Kibana + +- Let's give it a try! 
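As an aside: the `base64decode` step used above to extract the password is plain base64 decoding, so the same operation can be sketched with standard shell tools (the encoded value below is made up for illustration, not a real password):

```shell
# Kubernetes Secrets store their values base64-encoded; kubectl's
# go-template "base64decode" function does the same job as the
# base64 utility.
encoded="czNjcjN0"                  # hypothetical .data.elastic value
printf '%s' "$encoded" | base64 -d  # -> s3cr3t
```

In practice, the encoded value would come from the Secret itself (e.g. `kubectl get secret demo-es-elastic-user -o go-template='{{ .data.elastic }}'`).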
+
+.exercise[
+
+- Deploy a Kibana instance:
+  ```bash
+  kubectl apply -f ~/container.training/k8s/eck-kibana.yaml
+  ```
+
+- Wait for it to be ready:
+  ```bash
+  kubectl get kibana -w
+  ```
+
+]
+
+---
+
+## Connecting to Kibana
+
+- Kibana is automatically set up to connect to ElasticSearch
+
+  (this is arranged by the YAML that we're using)
+
+- However, it will ask for authentication
+
+- It's using the same user/password as ElasticSearch
+
+.exercise[
+
+- Get the NodePort allocated to Kibana:
+  ```bash
+  kubectl get services
+  ```
+
+- Connect to it with a web browser
+
+- Use the same user/password as before
+
+]
+
+---
+
+## Setting up Kibana
+
+After the Kibana UI loads, we need to click around a bit
+
+.exercise[
+
+- Pick "explore on my own"
+
+- Click on "Use Elasticsearch data / Connect to your Elasticsearch index"
+
+- Enter `filebeat-*` for the index pattern and click "Next step"
+
+- Select `@timestamp` as the time filter field name
+
+- Click on "Discover" (the small icon looking like a compass on the left bar)
+
+- Play around!
+
+]
+
+---
+
+## Scaling up the cluster
+
+- At this point, we have only one node
+
+- We are going to scale up
+
+- But first, we'll deploy Cerebro, a UI for ElasticSearch
+
+- This will let us see the state of the cluster, how indexes are sharded, etc.
+
+---
+
+## Deploying Cerebro
+
+- Cerebro is stateless, so it's fairly easy to deploy
+
+  (one Deployment + one Service)
+
+- However, it needs the address and credentials for ElasticSearch
+
+- We prepared yet another manifest for that!
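For the curious, such a manifest boils down to something like the following sketch (illustrative only; the actual `eck-cerebro.yaml` may differ, in particular in how it wires in the ElasticSearch address and credentials):

```yaml
# Illustrative sketch, not the actual file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cerebro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cerebro
  template:
    metadata:
      labels:
        app: cerebro
    spec:
      containers:
      - name: cerebro
        image: lmenezes/cerebro     # upstream Cerebro image
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: cerebro
spec:
  type: NodePort                    # so that we can reach it from outside the cluster
  selector:
    app: cerebro
  ports:
  - port: 9000
```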
+
+.exercise[
+
+- Deploy Cerebro:
+  ```bash
+  kubectl apply -f ~/container.training/k8s/eck-cerebro.yaml
+  ```
+
+- Look up the NodePort number and connect to it:
+  ```bash
+  kubectl get services
+  ```
+
+]
+
+---
+
+## Scaling up the cluster
+
+- We can see on Cerebro that the cluster is "yellow"
+
+  (because our index is not replicated)
+
+- Let's change that!
+
+.exercise[
+
+- Edit the ElasticSearch cluster manifest:
+  ```bash
+  kubectl edit es demo
+  ```
+
+- Find the field `count: 1` and change it to 3
+
+- Save and quit
+
+]

---

From 04d3a7b3605caa0c2a552968044a6d5ec73e0be0 Mon Sep 17 00:00:00 2001
From: Jerome Petazzoni
Date: Sun, 19 Jan 2020 11:34:18 -0600
Subject: [PATCH 16/16] Fix up slide about operators limitations

---
 slides/k8s/operators.md | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/slides/k8s/operators.md b/slides/k8s/operators.md
index 5280bbf7..a8e540b2 100644
--- a/slides/k8s/operators.md
+++ b/slides/k8s/operators.md
@@ -604,13 +604,11 @@ After the Kibana UI loads, we need to click around a bit
 
 - Look at the ElasticSearch resource definition
 
-  (`~/container.training/k8s/elasticsearch-cluster.yaml`)
+  (`~/container.training/k8s/eck-elasticsearch.yaml`)
 
-- What should happen if we flip the `use-tls` flag? Twice?
+- What should happen if we flip the TLS flag? Twice?
 
-- What should happen if we remove / re-add the kibana or cerebro sections?
-
-- What should happen if we change the number of nodes?
+- What should happen if we add another group of nodes?
 
 - What if we want different images or parameters for the different nodes?
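For reference, the resource definition that these questions refer to uses the fields declared in the CRD shown earlier; a minimal sketch might look like this (the version number and counts are illustrative):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: demo
spec:
  version: 7.5.2            # illustrative version number
  nodeSets:
  - name: default           # adding "another group of nodes" means another entry here
    count: 3
  http:
    tls:
      selfSignedCertificate:
        disabled: true      # the "TLS flag" mentioned in the questions above
```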