📃 Import content for LKE workshop

This commit is contained in:
Jerome Petazzoni
2021-04-15 08:57:37 +02:00
parent a8ecffbaf0
commit 9c3ab19918
11 changed files with 1074 additions and 50 deletions

View File

@@ -1,41 +1,13 @@
# Accessing internal services
- When we are logged in on a cluster node, we can access internal services
(by virtue of the Kubernetes network model: all nodes can reach all pods and services)
- When we are accessing a remote cluster, things are different
(generally, our local machine won't have access to the cluster's internal subnet)
- How can we temporarily access a service without exposing it to everyone?
--
- `kubectl proxy`: gives us access to the API, which includes a proxy for HTTP resources
- `kubectl port-forward`: allows forwarding of TCP ports to arbitrary pods, services, ...
---
## Suspension of disbelief
The exercises in this section assume that we have set up `kubectl` on our
local machine in order to access a remote cluster.
We will therefore show how to access services and pods of the remote cluster,
from our local machine.
You can also run these exercises directly on the cluster (if you haven't
installed and set up `kubectl` locally).
Running commands locally will be less useful
(since you could access services and pods directly),
but keep in mind that these commands will work anywhere as long as you have
installed and set up `kubectl` to communicate with your cluster.
---
## `kubectl proxy` in theory
- Running `kubectl proxy` gives us access to the entire Kubernetes API
@@ -56,7 +28,7 @@ installed and set up `kubectl` to communicate with your cluster.
## `kubectl proxy` in practice
- Let's access the `web` service through `kubectl proxy`
.exercise[
@@ -65,9 +37,9 @@ installed and set up `kubectl` to communicate with your cluster.
kubectl proxy &
```
- Access the `web` service:
```bash
curl localhost:8001/api/v1/namespaces/default/services/web/proxy/
```
- Terminate the proxy:
@@ -99,22 +71,20 @@ installed and set up `kubectl` to communicate with your cluster.
## `kubectl port-forward` in practice
- Let's access our remote NGINX server
.exercise[
- Forward connections from local port 1234 to remote port 80:
```bash
kubectl port-forward svc/web 1234:80 &
```
- Connect to the NGINX server:
```bash
curl localhost:1234
```


@@ -1,7 +1,8 @@
title: |
Cloud Native
Continuous Deployment
with GitLab, Helm, and
Linode Kubernetes Engine
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
@@ -32,10 +33,13 @@ content:
- lke/kubernetes-review.md
- k8s/deploymentslideshow.md
- k8s/accessinternal.md
- lke/what-is-missing.md
- k8s/helm-intro.md
- lke/external-dns.md
- lke/traefik.md
- lke/metrics-server.md
#- k8s/prometheus.md
- lke/prometheus.md
- k8s/cert-manager.md
- k8s/gitlab.md


@@ -1,3 +1,163 @@
# Deploying our LKE cluster
- *If we wanted to deploy Kubernetes manually*, what would we need to do?
(not that I recommend doing that...)
- Control plane (etcd, API server, scheduler, controllers)
- Nodes (VMs with a container engine + the Kubelet agent; CNI setup)
- High availability (etcd clustering, API load balancer)
- Security (CA and TLS certificates everywhere)
- Cloud integration (to provision LoadBalancer services, storage...)
*And that's just to get a basic cluster!*
---
## The best way to deploy Kubernetes
*The best way to deploy Kubernetes is to get someone else to
do it for us.*
(Me, ever since I've been working with Kubernetes)
---
## Managed Kubernetes
- Cloud provider runs the control plane
(including etcd, API load balancer, TLS setup, cloud integration)
- We run nodes
(the cloud provider generally gives us an easy way to provision them)
- Get started in *minutes*
- We're going to use [Linode Kubernetes Engine](https://www.linode.com/products/kubernetes/)
---
## Creating a cluster
- With the web console:
https://cloud.linode.com/kubernetes/clusters
- Pick the region of your choice
- Pick the latest available Kubernetes version
- Pick 3 nodes with 8 GB of RAM
- Click! ✨
- Wait a few minutes... ⌚️
- Download the kubeconfig file 💾
---
## With the CLI
- View available regions with `linode-cli regions list`
- View available server types with `linode-cli linodes types`
- View available Kubernetes versions with `linode-cli lke versions-list`
- Create cluster:
```bash
linode-cli lke cluster-create --label=hello-lke --region=us-east \
--k8s_version=1.20 --node_pools.type=g6-standard-4 --node_pools.count=3
```
- Note the cluster ID (e.g.: 12345)
- Download the kubeconfig file:
```bash
linode-cli lke kubeconfig-view `12345` --text --no-headers | base64 -d
```
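The `--text --no-headers | base64 -d` part decodes the base64-encoded kubeconfig returned by the API. A tiny self-contained illustration of that round-trip (the YAML string is just a placeholder):

```bash
# Encode a placeholder string, then decode it back,
# exactly like we decode the kubeconfig-view output
encoded=$(printf 'apiVersion: v1' | base64)
printf '%s' "$encoded" | base64 -d
# → apiVersion: v1
```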
---
## Communicating with the cluster
- All the Kubernetes tools (`kubectl`, but also `helm` etc) use the same config file
- That file is (by default) `$HOME/.kube/config`
- It can hold multiple cluster definitions (or *contexts*)
- Or, we can have multiple config files and switch between them:
- by adding the `--kubeconfig` flag each time we invoke a tool (🙄)
- or by setting the `KUBECONFIG` environment variable (☺️)
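`KUBECONFIG` works like `PATH`: a colon-separated list of files, merged when the tools read their configuration. A quick sketch (file names are hypothetical):

```bash
# Two hypothetical kubeconfig files, merged at read time
KUBECONFIG="$HOME/.kube/config:$HOME/.kube/config.lke"
export KUBECONFIG
# Show each file in the search path on its own line
echo "$KUBECONFIG" | tr ':' '\n'
```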
---
## Using the kubeconfig file
Option 1:
- move the kubeconfig file to e.g. `~/.kube/config.lke`
- set the environment variable: `export KUBECONFIG=~/.kube/config.lke`
Option 2:
- directly move the kubeconfig file to `~/.kube/config`
- **do not** do that if you already have a file there!
Option 3:
- merge the new kubeconfig file with our existing file
---
## Merging kubeconfig
- Assuming that we want to merge `~/.kube/config` and `~/.kube/config.lke` ...
- Move our existing kubeconfig file:
```bash
cp ~/.kube/config ~/.kube/config.old
```
- Merge both files:
```bash
KUBECONFIG=~/.kube/config.old:~/.kube/config.lke kubectl config \
view --raw > ~/.kube/config
```
- Check that everything is there:
```bash
kubectl config get-contexts
```
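A variant worth knowing: `kubectl config view --flatten` inlines certificate data, producing a self-contained file that can be copied to another machine. A sketch of the same merge using that flag:

```bash
KUBECONFIG=~/.kube/config.old:~/.kube/config.lke \
  kubectl config view --flatten > ~/.kube/config
```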
---
## Are we there yet?
- Let's check if our control plane is available:
```bash
kubectl get services
```
→ This should show the `kubernetes` `ClusterIP` service
- Look for our nodes:
```bash
kubectl get nodes
```
→ This should show 3 nodes (or whatever amount we picked earlier)
- If the nodes aren't visible yet, give them a minute to join the cluster

slides/lke/external-dns.md Normal file

@@ -0,0 +1,108 @@
# [ExternalDNS](https://github.com/kubernetes-sigs/external-dns)
- ExternalDNS will automatically create DNS records from Kubernetes resources
- Services (with the annotation `external-dns.alpha.kubernetes.io/hostname`)
- Ingresses (automatically)
- It requires a domain name (obviously)
- ... And that domain name should be configurable through an API
- As of April 2021, it supports [a few dozen providers](https://github.com/kubernetes-sigs/external-dns#status-of-providers)
- We're going to use Linode DNS
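As a preview, here is what an annotated Service could look like in YAML (a sketch; the hostname and labels are placeholders to adapt):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # ExternalDNS watches this annotation and creates the DNS record
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
```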
---
## Prep work
- We need a domain name
(if you need a cheap one, look e.g. at [GANDI](https://shop.gandi.net/?search=funwithlinode); there are many options below $10)
- That domain name should be configured to point to Linode DNS servers
(ns1.linode.com to ns5.linode.com)
- We need to generate a Linode API token with DNS API access
- Pro-tip: reduce the default TTL of the domain to 5 minutes!
---
## Deploying ExternalDNS
- The ExternalDNS documentation has a [tutorial](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/linode.md) for Linode
- ... It's basically a lot of YAML!
- That's where using a Helm chart will be very helpful
- There are a few ExternalDNS charts available out there
- We will use the one from Bitnami
(these folks maintain *a lot* of great Helm charts!)
---
## How we'll install things with Helm
- We will install each chart in its own namespace
(this is not mandatory, but it helps to see what belongs to what)
- We will use `helm upgrade --install` instead of `helm install`
(that way, if we want to change something, we can just re-run the command)
- We will use the `--create-namespace` and `--namespace ...` options
- To keep things boring and predictable, if we are installing chart `xyz`:
- we will install it in namespace `xyz`
- we will name the release `xyz` as well
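Putting these conventions together, every install in this workshop will follow the same template (`xyz`, the repo, and the values are placeholders):

```bash
# Generic pattern: release "xyz", chart "xyz", namespace "xyz"
helm upgrade --install xyz some-repo/xyz \
  --create-namespace --namespace xyz \
  --set someKey=someValue
```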
---
## Installing ExternalDNS
- First, let's add the Bitnami repo:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
```
- Then, install ExternalDNS:
```bash
LINODE_API_TOKEN=`1234abcd...6789`
helm upgrade --install external-dns bitnami/external-dns \
--namespace external-dns --create-namespace \
--set provider=linode \
--set linode.apiToken=$LINODE_API_TOKEN
```
(Make sure to update your API token above!)
---
## Testing ExternalDNS
- Let's annotate our NGINX service to expose it with a DNS record:
```bash
kubectl annotate service web \
external-dns.alpha.kubernetes.io/hostname=nginx.`cloudnative.party`
```
(make sure to use *your* domain name above, otherwise that won't work!)
- Check ExternalDNS logs:
```bash
kubectl logs --namespace external-dns -l app.kubernetes.io/name=external-dns
```
- It might take a few minutes for ExternalDNS to start, patience!
- Then try to access `nginx.cloudnative.party` (or whatever domain you picked)
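To check whether the record has been published without waiting on a browser, a DNS lookup sketch (substitute your own domain):

```bash
# Query the record directly (empty output = not propagated yet)
dig +short nginx.cloudnative.party
# Or, if dig isn't installed:
nslookup nginx.cloudnative.party
```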


@@ -1,4 +1,173 @@
# Get ready!
- We're going to set up a whole Continuous Deployment pipeline
- ... for Kubernetes apps
- ... on a Kubernetes cluster
- Ingredients: cert-manager, GitLab, Helm, Linode DNS, LKE, Traefik
---
## Philosophy
- "Do one thing, do it well"
--
- ... But a CD pipeline is a complex system with interconnected parts!
- GitLab is no exception to that rule
- Let's have a look at its components!
---
## GitLab components
- GitLab's dependencies are listed in the official GitLab Helm chart
- External dependencies:
cert-manager, grafana, minio, nginx-ingress, postgresql, prometheus,
redis, registry, shared-secrets
(these dependencies correspond to external charts not created by GitLab)
- Internal dependencies:
geo-logcursor, gitaly, gitlab-exporter, gitlab-grafana, gitlab-pages,
gitlab-shell, kas, mailroom, migrations, operator, praefect, sidekiq,
task-runner, webservice
(these dependencies correspond to subcharts embedded in the GitLab chart)
---
## Philosophy
- Use the GitLab chart to deploy everything that is specific to GitLab
- Deploy cluster-wide components separately
(cert-manager, ExternalDNS, Ingress Controller...)
---
## What we're going to do
- Spin up an LKE cluster
- Run a simple test app
- Install a few extras
(the cluster-wide components mentioned earlier)
- Set up GitLab
- Push an app with a CD pipeline to GitLab
---
## What you need to know
- If you just want to follow along and watch...
- container basics (what's an image, what's a container...)
- Kubernetes basics (what are Deployments, Namespaces, Pods, Services)
- If you want to run this on your own Kubernetes cluster...
- intermediate Kubernetes concepts (annotations, Ingresses)
- Helm basic concepts (how to install/upgrade releases; how to set "values")
- basic Kubernetes troubleshooting commands (view logs, events)
- There will be a lot of explanations and reminders along the way
---
## What you need to have
If you want to run this on your own...
- A Linode account
- A domain name that you will point to Linode DNS
(I got cloudnative.party for $5)
- Local tools to control your Kubernetes cluster:
- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
- [helm](https://helm.sh/docs/intro/install/)
---
## Do I really need a Linode account?
- *Can I use a local cluster, e.g. with Minikube?*
It will be very difficult to get valid TLS certs with a local cluster.
Also, GitLab needs quite a bit of resources.
- *Can I use another Kubernetes provider?*
You certainly can: Kubernetes is a standard platform!
But you'll have to adjust a few things.
(I'll try my best to tell you what as we go along.)
---
## Why do I need a domain name?
- Because accessing gitlab.cloudnative.party is easier than 102.34.55.67
- Because we'll need TLS certificates
(and it's very easy to obtain certs with Let's Encrypt when we have a domain)
- We'll illustrate automatic DNS configuration with ExternalDNS, too!
(Kubernetes will automatically create DNS entries in our domain)
---
## Nice-to-haves
Here are a few tools that I like...
- [linode-cli](https://github.com/linode/linode-cli#installation)
to manage Linode resources from the command line
- [stern](https://github.com/stern/stern)
to comfortably view logs of Kubernetes pods
- [k9s](https://k9scli.io/topics/install/)
to manage Kubernetes resources with that retro BBS look and feel 😎
- [kube-ps1](https://github.com/jonmosco/kube-ps1)
to keep track of which Kubernetes cluster and namespace we're working on
- [kubectx](https://github.com/ahmetb/kubectx)
to easily switch between clusters, contexts, and namespaces
---
## Warning ⚠️💸
- We're going to spin up cloud resources
- Remember to shut them down when you're done!
- In the immortal words of Cloud Economist [Corey Quinn](https://twitter.com/QuinnyPig):
*[You're charged for what you forget to turn off.](https://www.theregister.com/2020/09/03/cloud_control_costs/)*


@@ -1,7 +1,151 @@
# Quick Kubernetes review
- Let's deploy a simple HTTP server
- And expose it to the outside world!
- Feel free to skip this section if you're familiar with Kubernetes
---
## Creating a container
- On Kubernetes, one doesn't simply run a container
- We need to create a "Pod"
- A Pod will be a group of containers running together
(often, it will be a group of *one* container)
- We can create a standalone Pod, but generally, we'll use a *controller*
(for instance: Deployment, Replica Set, Daemon Set, Job, Stateful Set...)
- The *controller* will take care of scaling and recreating the Pod if needed
(note that within a Pod, containers can also be restarted automatically if needed)
---
## A *controller*, you said?
- We're going to use one of the most common controllers: a *Deployment*
- Deployments...
- can be scaled (will create the requested number of Pods)
- will recreate Pods if e.g. they get evicted or their Node is down
- handle rolling updates
- Deployments actually delegate a lot of these tasks to *Replica Sets*
- We will generally have the following hierarchy:
Deployment → Replica Set → Pod
---
## Creating a Deployment
- Without further ado:
```bash
kubectl create deployment web --image=nginx
```
- Check what happened:
```bash
kubectl get all
```
- Wait until the NGINX Pod is "Running"!
- Note: `kubectl create deployment` is great when getting started...
- ... But later, we will probably write YAML instead!
---
## Exposing the Deployment
- We need to create a Service
- We can use `kubectl expose` for that
(but, again, we will probably use YAML later!)
- For *internal* use, we can use the default Service type, ClusterIP:
```bash
kubectl expose deployment web --port=80
```
- For *external* use, we can use a Service of type LoadBalancer:
```bash
kubectl expose deployment web --port=80 --type=LoadBalancer
```
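For reference, `kubectl expose` generates a Service roughly like this one (a sketch; `kubectl create deployment web` labeled our Pods with `app=web`, which the selector relies on):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web    # matches the label set by "kubectl create deployment web"
  ports:
  - port: 80
```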
---
## Changing the Service type
- We can `kubectl delete service web` and recreate it
- Or, `kubectl edit service web` and dive into the YAML
- Or, `kubectl patch service web --patch '{"spec": {"type": "LoadBalancer"}}'`
- ... These are just a few "classic" methods; there are many ways to do this!
---
## Deployment → Pod
- Can we check exactly what's going on when the Pod is created?
- Option 1: `watch kubectl get all`
- displays all object types
- refreshes every 2 seconds
- puts a high load on the API server when there are many objects
- Option 2: `kubectl get pods --watch --output-watch-events`
- can only display one type of object
- will show all modifications happening (à la `tail -f`)
- doesn't put a high load on the API server (except for initial display)
---
## Recreating the Deployment
- Let's delete our Deployment:
```bash
kubectl delete deployment web
```
- Watch Pod updates:
```bash
kubectl get pods --watch --output-watch-events
```
- Recreate the Deployment and see what Pods do:
```bash
kubectl create deployment web --image=nginx
```
---
## Service stability
- Our Service *still works* even though we deleted and re-created the Deployment
- It wouldn't have worked while the Deployment was deleted, though
- A Service is a *stable endpoint*
???
:T: Warming up with a quick Kubernetes review


@@ -0,0 +1,147 @@
# Installing metrics-server
- We've installed a few things on our cluster so far
- How much CPU and RAM are we using?
- We need metrics!
- If metrics-server is installed, we can get Nodes metrics like this:
```bash
kubectl top nodes
```
- At the moment, this should show us `error: Metrics API not available`
- How do we fix this?
---
## Many ways to get metrics
- We could use a SaaS like Datadog, New Relic...
- We could use a self-hosted solution like Prometheus
- Or we could use metrics-server
- What's special about metrics-server?
---
## Pros/cons
Cons:
- no data retention (no history data, just instant numbers)
- only CPU and RAM of nodes and pods (no disk or network usage or I/O...)
Pros:
- very lightweight
- doesn't require storage
- used by Kubernetes autoscaling
---
## Why metrics-server
- We may install something fancier later
(think: Prometheus with Grafana)
- But metrics-server will work in *minutes*
- It will barely use resources on our cluster
- It's required for autoscaling anyway
---
## How metrics-server works
- It runs a single Pod
- That Pod will fetch metrics from all our Nodes
- It will expose them through the Kubernetes API aggregation layer
(we won't say much more about that aggregation layer; that's fairly advanced stuff!)
---
## Installing metrics-server
- In a lot of places, this is done with a little bit of custom YAML
(derived from the [official installation instructions](https://github.com/kubernetes-sigs/metrics-server#installation))
- We're going to use Helm one more time:
```bash
helm upgrade --install metrics-server bitnami/metrics-server \
--create-namespace --namespace metrics-server \
--set apiService.create=true \
--set extraArgs.kubelet-insecure-tls=true \
--set extraArgs.kubelet-preferred-address-types=InternalIP
```
- What are these options for?
---
## Installation options
- `apiService.create=true`
register `metrics-server` with the Kubernetes aggregation layer
(create an entry that will show up in `kubectl get apiservices`)
- `extraArgs.kubelet-insecure-tls=true`
when connecting to nodes to collect their metrics, don't check kubelet TLS certs
(because most kubelet certs include the node name, but not its IP address)
- `extraArgs.kubelet-preferred-address-types=InternalIP`
when connecting to nodes, use their internal IP address instead of node name
(because the latter requires an internal DNS, which is rarely configured)
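The same options can live in a values file instead of repeated `--set` flags; e.g. a hypothetical `metrics-server-values.yaml`:

```yaml
apiService:
  create: true
extraArgs:
  kubelet-insecure-tls: true
  kubelet-preferred-address-types: InternalIP
```

... passed to Helm with `-f metrics-server-values.yaml` instead of the `--set` flags.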
---
## Testing metrics-server
- After a minute or two, metrics-server should be up
- We should now be able to check Nodes resource usage:
```bash
kubectl top nodes
```
- And Pods resource usage, too:
```bash
kubectl top pods --all-namespaces
```
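A handy variant: `kubectl top` can sort its output, which helps spot the hungriest Pods quickly:

```bash
# Sort by memory usage (also accepts "cpu")
kubectl top pods --all-namespaces --sort-by=memory
```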
---
## Keep some padding
- The RAM usage that we see should correspond more or less to the Resident Set Size
- Our pods also need some extra space for buffers, caches...
- Do not aim for 100% memory usage!
- Some more realistic targets:
50% (for workloads with disk I/O and leveraging caching)
90% (on very big nodes with mostly CPU-bound workloads)
75% (anywhere in between!)


@@ -1,7 +1,111 @@
# Prometheus and Grafana
- What if we want to retain metrics, view graphs, and spot trends?
- A very popular combo is Prometheus+Grafana:
- Prometheus as the "metrics engine"
- Grafana to display comprehensive dashboards
- Prometheus also has an alert-manager component to trigger alerts
(we won't talk about that one)
---
## Installing Prometheus and Grafana
- A complete metrics stack needs at least:
- the Prometheus server (collects metrics and stores them efficiently)
- a collection of *exporters* (exposing metrics to Prometheus)
- Grafana
- a collection of Grafana dashboards (building them from scratch is tedious)
- The Helm chart `kube-prometheus-stack` combines all these elements
- ... So we're going to use it to deploy our metrics stack!
---
## Installing `kube-prometheus-stack`
- Let's install that stack *directly* from its repo
(without doing `helm repo add` first)
- Otherwise, keep the same naming strategy:
```bash
helm upgrade --install kube-prometheus-stack kube-prometheus-stack \
--namespace kube-prometheus-stack --create-namespace \
--repo https://prometheus-community.github.io/helm-charts
```
- Check what was installed:
```bash
kubectl get all --namespace kube-prometheus-stack
```
---
## Exposing Grafana
- Let's create an Ingress for Grafana
```bash
kubectl create ingress --namespace kube-prometheus-stack grafana \
--rule=grafana.`cloudnative.party`/*=kube-prometheus-stack-grafana:80
```
(as usual, make sure to use *your* domain name above)
- Connect to Grafana
(remember that the DNS record might take a few minutes to come up)
---
## Grafana credentials
- What could the login and password be?
- Let's look at the Secrets available in the namespace:
```bash
kubectl get secrets --namespace kube-prometheus-stack
```
- There is a `kube-prometheus-stack-grafana` that looks promising!
- Decode the Secret:
```bash
kubectl get secret kube-prometheus-stack-grafana -o json \
| jq '.data | map_values(@base64d)'
```
- If you don't have the `jq` tool mentioned above, don't worry...
--
- The login/password is hardcoded to `admin`/`prom-operator` 😬
---
## Grafana dashboards
- Once logged in, click on the "Dashboards" icon on the left
(it's the one that looks like four squares)
- Then click on the "Manage" entry
- Then click on "Kubernetes / Compute Resources / Cluster"
- This gives us a breakdown of resource usage by Namespace
- Feel free to explore the other dashboards!
???


@@ -1,6 +1,137 @@
# Installing Traefik
- Traefik is going to be our Ingress Controller
- Let's install it with a Helm chart, in its own namespace
- First, let's add the Traefik chart repository:
```bash
helm repo add traefik https://helm.traefik.io/traefik
```
- Then, install the chart:
```bash
helm upgrade --install traefik traefik/traefik \
--create-namespace --namespace traefik \
--set "ports.websecure.tls.enabled=true"
```
(the option we added enables HTTPS; it will be useful later!)
---
## Testing Traefik
- Let's create an Ingress resource!
- If we're using Kubernetes 1.20 or later, we can simply do this:
```bash
kubectl create ingress web \
--rule=`ingress-is-fun.cloudnative.party`/*=web:80
```
(make sure to update and use your own domain)
- Check that the Ingress was correctly created:
```bash
kubectl get ingress
kubectl describe ingress
```
- If we're using Kubernetes 1.19 or earlier, we'll need some YAML
---
## Creating an Ingress with YAML
- This is how we do it with YAML:
```bash
kubectl apply -f- <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: web
spec:
rules:
- host: `ingress-is-fun.cloudnative.party`
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 80
EOF
```
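For comparison, on Kubernetes 1.19+ the same Ingress in the `networking.k8s.io/v1` schema looks like this (note the `pathType` field and the restructured `backend`):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: ingress-is-fun.cloudnative.party
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```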
---
## Ingress versions...
- Note how we used the `v1beta1` Ingress version on the previous YAML
(to be compatible with older Kubernetes versions)
- This YAML will give you deprecation warnings on recent versions of Kubernetes
(since the Ingress spec is now at version `v1`)
- Don't worry too much about the deprecation warnings
(on Kubernetes, deprecation happens over a long time window, typically 1 year)
- You will have time to revisit and worry later! 😅
---
## Does it work?
- Try to connect to the Ingress host name
(in my example, http://ingress-is-fun.cloudnative.party/)
- *Normally,* it doesn't work (yet) 🤔
- Let's look at `kubectl get ingress` again
- ExternalDNS is trying to create records mapping HOSTS to ADDRESS
- But the ADDRESS field is currently empty!
- We need to tell Traefik to fill that ADDRESS field
---
## Reconfiguring Traefik
- There is a "magic" flag to tell Traefik to update the address status field
- Let's update our Traefik install:
```bash
helm upgrade --install traefik traefik/traefik \
--create-namespace --namespace traefik \
--set "ports.websecure.tls.enabled=true" \
--set "providers.kubernetesIngress.publishedService.enabled=true"
```
---
## Checking what we did
- Check the output of `kubectl get ingress`
(there should be an address now)
- Check the logs of ExternalDNS
(there should be a mention of the new DNS record)
- Try again to connect to the HTTP address
(now it should work)
- Note that some of these operations might take a minute or two
(be patient!)
???


@@ -0,0 +1,87 @@
# DNS, Ingress, Metrics
- We got a basic app up and running
- We accessed it over a raw IP address
- Can we do better?
(i.e. access it with a domain name!)
- How much CPU and RAM is it using?
---
## DNS
- We'd like to associate a fancy name to that LoadBalancer Service
(e.g. `nginx.cloudnative.party` → `A.B.C.D`)
- option 1: manually add a DNS record
- option 2: find a way to create DNS records automatically
- We will install ExternalDNS to automate DNS records creation
- ExternalDNS supports Linode DNS and dozens of other providers
---
## Ingress
- What if we have multiple web services to expose?
- We could create one LoadBalancer Service for each of them
- This would create a lot of cloud load balancers
(and they typically incur a cost, even if it's a small one)
- Instead, we can use an *Ingress Controller*
- Ingress Controller = HTTP load balancer / reverse proxy
- Put all our HTTP services behind a single LoadBalancer Service
- Can also do fancy "content-based" routing (using headers, request path...)
- We will install Traefik as our Ingress Controller
---
## Metrics
- How much resources are we using right now?
- When will we need to scale up our cluster?
- We need metrics!
- We're going to install the *metrics server*
- It's a very basic metrics system
(no retention, no graphs, no alerting...)
- But it's lightweight, and it is used internally by Kubernetes for autoscaling
---
## What's next
- We're going to install all these components
- Very often, things can be installed with a simple YAML file
- Very often, that YAML file needs to be customized a little bit
(add command-line parameters, provide API tokens...)
- Instead, we're going to use Helm charts
- Helm charts give us a way to customize what we deploy
- Helm can also keep track of what we install
(for easier uninstall and updates)


@@ -1,8 +1,8 @@
# Our sample application
- I'm going to run our demo app locally, with Docker
- The repository also contains scripts and tools that we will use through the workshop
(you don't have to do that; do it if you like!)
.exercise[
@@ -15,7 +15,7 @@ fi
```
-->
- Clone the repository:
```bash
git clone https://@@GITREPO@@
```
@@ -34,7 +34,7 @@ Let's start this before we look around, as downloading will take a little time..
- Go to the `dockercoins` directory, in the cloned repo:
```bash
cd container.training/dockercoins
```
- Use Compose to build and run all containers: