Compare commits

...

54 Commits
academy ... m6

Author SHA1 Message Date
Jérôme Petazzoni
01c374d0a4 Merge pull request #664 from lpiot/main
The missing slides…😅
2025-06-13 08:48:44 +02:00
Ludovic Piot
eee44979c5 📝 Add Kyverno install chapter 2025-06-12 22:13:19 +02:00
Ludovic Piot
4d3bc06e30 📝 Add Kyverno install chapter 2025-06-12 21:50:42 +02:00
Ludovic Piot
229ab045b3 🔥 2025-06-12 21:04:06 +02:00
Ludovic Piot
fe1a61eaeb 🎨 2025-06-12 21:03:49 +02:00
Ludovic Piot
9613589dea 📝 Add small section about SSH keypairs rotation for Flux 2025-06-12 20:23:59 +02:00
Ludovic Piot
ca8865a10b 📝 Change the mermaid scenario diagram 2025-06-12 20:07:11 +02:00
Ludovic Piot
f279bbea11 ✏️ 2025-06-12 20:06:27 +02:00
Ludovic Piot
bc6100301e 📝 Add monitoring stack install 2025-06-12 20:05:14 +02:00
Jérôme Petazzoni
a32751636a Merge pull request #663 from lpiot/main
The deck with a small fix
2025-06-11 20:33:27 +02:00
Ludovic Piot
4a0e23d131 🐛 Sorry Jerome 2025-06-11 19:59:52 +02:00
Ludovic Piot
6e987d1fca Merge branch 'm6' into main 2025-06-11 19:52:03 +02:00
Ludovic Piot
18b888009e 📝 Add an MVP Network policies section 2025-06-11 19:44:17 +02:00
Ludovic Piot
36dd8bb695 📝 Add the new chapters to the M6 stack 2025-06-11 19:33:35 +02:00
Ludovic Piot
395c5a38ab 🎨 Add reference to the chapter title 2025-06-11 19:24:57 +02:00
Ludovic Piot
2b0d3b87ac 📝 Add OpenEBS install chapter 2025-06-11 19:24:13 +02:00
Ludovic Piot
a165e60407 📝 Add k0s install chapter 2025-06-11 19:22:40 +02:00
Ludovic Piot
3c13fd51dd 🎨 Add Mario animation when Flux reconcile 2025-06-11 19:22:04 +02:00
Ludovic Piot
324ad2fdd0 🎨 Update mermaid scenario diagram 2025-06-11 19:21:13 +02:00
Ludovic Piot
269ae79e30 📝 Add k0s install chapter 2025-06-11 17:08:52 +02:00
Ludovic Piot
39a15b3d7d ✏️ Clean up consistency about how we evoke the OPS team 2025-06-11 17:08:52 +02:00
Ludovic Piot
9e7ed8cb49 📝 Add MOVY tenant creation chapter 2025-06-11 17:08:52 +02:00
Ludovic Piot
06e7a47659 📝 Upgrade the mermaid scenario 2025-06-11 17:08:52 +02:00
Ludovic Piot
802e525f57 📝 Add Ingress chapter 2025-06-11 17:08:52 +02:00
Ludovic Piot
0f68f89840 📝 Add Ingress chapter 2025-06-11 17:08:52 +02:00
Ludovic Piot
b275342bd2 ✏️ Fixing TEST emphasis 2025-06-11 17:08:52 +02:00
Ludovic Piot
e11e97ccff 📝 Add k0s install chapter 2025-06-11 15:10:43 +02:00
Ludovic Piot
023a9d0346 ✏️ Clean up consistency about how we evoke the OPS team 2025-06-10 19:20:25 +02:00
Ludovic Piot
3f5eaae6b9 📝 Add MOVY tenant creation chapter 2025-06-10 19:19:19 +02:00
Ludovic Piot
1634d5b5bc 📝 Upgrade the mermaid scenario 2025-06-10 17:15:38 +02:00
Ludovic Piot
40418be55a 📝 Add Ingress chapter 2025-06-10 16:19:06 +02:00
Ludovic Piot
04198b7f91 📝 Add Ingress chapter 2025-06-10 16:05:17 +02:00
Jérôme Petazzoni
150c8fc768 Merge pull request #660 from lpiot/main
Mostly the scenario upgrade with Mermaid schemas
2025-06-10 14:24:18 +02:00
Ludovic Piot
e2af1bb057 ✏️ Fixing TEST emphasis 2025-06-10 12:51:09 +02:00
Ludovic Piot
d4c260aa4a 💄 📝 🎨 Upgrade the mermaid scenario schema 2025-06-09 21:20:57 +02:00
Ludovic Piot
89cd677b09 📝 upgrade R01 chapter 2025-06-09 21:20:57 +02:00
Ludovic Piot
3008680c12 🛂 🐛 fix permissions for persistentVolumes management 2025-06-09 21:20:57 +02:00
Ludovic Piot
f7b8184617 🎨 2025-06-09 21:20:57 +02:00
Jérôme Petazzoni
a565c0979c Merge pull request #659 from lpiot/main
Add R01 chapter and fixes to previous chapters
2025-06-09 20:05:55 +02:00
Jérôme Petazzoni
7a11f03b5e Merge branch 'm6' into main 2025-06-09 20:05:26 +02:00
Ludovic Piot
b0760b99a5 ✏️ 📝 Fix shpod access methods 2025-06-09 17:11:57 +02:00
Ludovic Piot
bcb9c3003f 📝 Add R01 chapter about test-ROCKY tenant config 2025-06-09 17:10:35 +02:00
Ludovic Piot
99ce9b3a8a 🎨 📝 Add missing steps in demo 2025-06-09 16:09:45 +02:00
Ludovic Piot
0ba602b533 🎨 clean up code display 2025-06-09 16:08:58 +02:00
Jérôme Petazzoni
d43c41e11e Proof-read first half of M6-START 2025-06-09 14:46:13 +02:00
Ludovic Piot
331309dc63 🎨 cleanup display of some console results 2025-06-09 14:11:05 +02:00
Ludovic Piot
44146915e0 📝 🍱 add T03 chapter 2025-06-04 23:55:33 +02:00
Ludovic Piot
84996e739b 🍱 📝 rewording and updating pics 2025-06-04 23:54:51 +02:00
Ludovic Piot
2aea1f70b2 📝 Add Flux install 2025-05-29 18:00:18 +02:00
Ludovic Piot
985e2ae42c 📝 add M6 intro slidedeck 2025-05-29 12:25:57 +02:00
Ludovic Piot
ea58428a0c 🐛 Slides now generate! ♻️ Move a slide 2025-05-14 22:05:59 +02:00
Ludovic Piot
59e60786c0 🎨 make personnae and cluster names consistent 2025-05-14 21:49:09 +02:00
Ludovic Piot
af63cf1405 🚨 2025-05-14 21:25:59 +02:00
Ludovic Piot
f9041807f6 🎉 first M6 draft slidedeck 2025-05-14 20:52:32 +02:00
45 changed files with 3636 additions and 0 deletions


@@ -0,0 +1,9 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  use-proxy-protocol: "true"


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.5.1
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
  name: ingress-nginx


@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- M6-ingress-nginx-components.yaml
- sync.yaml
patches:
- path: M6-ingress-nginx-cm-patch.yaml
  target:
    kind: ConfigMap
- path: M6-ingress-nginx-svc-patch.yaml
  target:
    kind: Service


@@ -0,0 +1,8 @@
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
    service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.5.1
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
  name: kyverno


@@ -0,0 +1,72 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: flux-multi-tenancy
spec:
  validationFailureAction: enforce
  rules:
    - name: serviceAccountName
      exclude:
        resources:
          namespaces:
            - flux-system
      match:
        resources:
          kinds:
            - Kustomization
            - HelmRelease
      validate:
        message: ".spec.serviceAccountName is required"
        pattern:
          spec:
            serviceAccountName: "?*"
    - name: kustomizationSourceRefNamespace
      exclude:
        resources:
          namespaces:
            - flux-system
            - ingress-nginx
            - kyverno
            - monitoring
            - openebs
      match:
        resources:
          kinds:
            - Kustomization
      preconditions:
        any:
          - key: "{{request.object.spec.sourceRef.namespace}}"
            operator: NotEquals
            value: ""
      validate:
        message: "spec.sourceRef.namespace must be the same as metadata.namespace"
        deny:
          conditions:
            - key: "{{request.object.spec.sourceRef.namespace}}"
              operator: NotEquals
              value: "{{request.object.metadata.namespace}}"
    - name: helmReleaseSourceRefNamespace
      exclude:
        resources:
          namespaces:
            - flux-system
            - ingress-nginx
            - kyverno
            - monitoring
            - openebs
      match:
        resources:
          kinds:
            - HelmRelease
      preconditions:
        any:
          - key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
            operator: NotEquals
            value: ""
      validate:
        message: "spec.chart.spec.sourceRef.namespace must be the same as metadata.namespace"
        deny:
          conditions:
            - key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
              operator: NotEquals
              value: "{{request.object.metadata.namespace}}"


@@ -0,0 +1,29 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.5.1
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
  name: monitoring
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.test.metal.mybestdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kube-prometheus-stack-grafana
            port:
              number: 80


@@ -0,0 +1,35 @@
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-webui
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from: []
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.5.1
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
  name: openebs


@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openebs
resources:
- M6-openebs-components.yaml
- sync.yaml
configMapGenerator:
- name: openebs-values
  files:
  - values.yaml=M6-openebs-values.yaml
configurations:
- M6-openebs-kustomizeconfig.yaml


@@ -0,0 +1,6 @@
nameReference:
- kind: ConfigMap
  version: v1
  fieldSpecs:
  - path: spec/valuesFrom/name
    kind: HelmRelease


@@ -0,0 +1,15 @@
# helm install openebs --namespace openebs openebs/openebs
#   --set engines.replicated.mayastor.enabled=false
#   --set lvm-localpv.lvmNode.kubeletDir=/var/lib/k0s/kubelet/
#   --create-namespace
engines:
  replicated:
    mayastor:
      enabled: false
# Needed for k0s installs, since the kubelet directory diverges slightly from a vanilla install >:-(
lvm-localpv:
  lvmNode:
    kubeletDir: /var/lib/k0s/kubelet/
localprovisioner:
  hostpathClass:
    isDefaultClass: true


@@ -0,0 +1,38 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  namespace: rocky-test
  name: rocky-full-access
rules:
- apiGroups: ["", extensions, apps]
  resources: [deployments, replicasets, pods, services, ingresses, statefulsets]
  verbs: [get, list, watch, create, update, patch, delete] # You can also use [*]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rocky-pv-access
rules:
- apiGroups: [""]
  resources: [persistentvolumes]
  verbs: [get, list, watch, create, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    toolkit.fluxcd.io/tenant: rocky
  name: rocky-reconciler2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rocky-pv-access
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: gotk:rocky-test:reconciler
- kind: ServiceAccount
  name: rocky
  namespace: rocky-test

k8s/M6-rocky-ingress.yaml

@@ -0,0 +1,19 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rocky
  namespace: rocky-test
spec:
  ingressClassName: nginx
  rules:
  - host: rocky.test.mybestdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80


@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/rocky
patches:
- path: M6-rocky-test-patch.yaml
  target:
    kind: Kustomization


@@ -0,0 +1,7 @@
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: rocky
  namespace: rocky-test
spec:
  path: ./k8s/plain

(16 new image files added; binary content not shown)

@@ -0,0 +1,349 @@
# K01- Installing a Kubernetes cluster from scratch
So far, we have been operating a managed cluster from **Scaleway** `Kapsule`.
It's great! Most batteries are included:
- storage classes, with an already configured default one
- a default CNI with `Cilium`
<br/>(`Calico` is supported too)
- an _IaaS_ load-balancer that is manageable by `ingress-controllers`
- a management _WebUI_ with the Kubernetes dashboard
- an observability stack with `metrics-server` and the Kubernetes dashboard
But what about _on-premises_ needs?
---
class: extra-details
## On-premises Kubernetes distributions
The [CNCF landscape](https://landscape.cncf.io/?fullscreen=yes&zoom=200&group=certified-partners-and-providers) currently lists **61** (!) Kubernetes distributions.
Not to mention the managed Kubernetes services from cloud providers…
Please, refer to the [`Setting up Kubernetes` chapter in the High Five M2 module](./2.yml.html#toc-setting-up-kubernetes) for more information about Kubernetes distributions.
---
## Introducing k0s
Nowadays, some "light" distros are considered good enough to run production clusters.
That's the case for `k0s`.
It's an open-source, lightweight Kubernetes distribution.
It is mainly backed by **Mirantis**, a long-time software vendor in the Kubernetes ecosystem.
(The ones who bought `Docker Enterprise` a long time ago, remember?)
`k0s` aims to be both
- a lightweight distribution for _edge computing_ and development purposes
- an enterprise-grade HA distribution fully supported by its vendor
<br/>`MKE4` and `k0rdent` build on `k0s`
---
### `k0s` package
Its single binary includes:
- a CRI (`containerd`)
- vanilla Kubernetes control plane components (including `etcd`)
- a vanilla network stack
- `kube-router`
- `kube-proxy`
- `coredns`
- `konnectivity`
- `kubectl` CLI
- install / uninstall features
- backup / restore features
---
class: pic
![k0s package](images/M6-k0s-packaging.png)
---
class: extra-details
### Konnectivity
You've seen that Kubernetes cluster architecture is very versatile.
I'm referring to the [`Kubernetes architecture` chapter in the High Five M5 module](./5.yml.html#toc-kubernetes-architecture)
Network communications between control plane components and worker nodes can be tricky to configure.
`Konnectivity` is a response to this pain. It acts as an RPC proxy for any communication initiated from the control plane to the workers.
These communications are listed in the [`Kubernetes internal APIs` chapter in the High Five M5 module](https://2025-01-enix.container.training/5.yml.html#toc-kubernetes-internal-apis)
The agent deployed on each worker node maintains an RPC tunnel with the server deployed on the control plane side.
---
class: pic
![konnectivity architecture](images/M6-konnectivity-architecture.png)
---
## Installing `k0s`
It installs with a one-liner command
- either with a lightweight single-node footprint
- or with a multi-node HA footprint
.lab[
- Get the binary
```bash
docker@m621: ~$ wget https://github.com/k0sproject/k0sctl/releases/download/v0.25.1/k0sctl-linux-amd64
```
]
---
### Prepare the config file
.lab[
- Create the config file
```bash
docker@m621: ~$ k0sctl init \
--controller-count 3 \
--user docker \
--k0s m621 m622 m623 > k0sctl.yaml
```
- change the following field: `spec.hosts[*].role: controller+worker`
- add the following field: `spec.hosts[*].noTaints: true`
```bash
docker@m621: ~$ k0sctl apply --config k0sctl.yaml
```
]
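After these edits, each host entry in 📄 `k0sctl.yaml` should look roughly like the sketch below (the cluster name, addresses and SSH settings are placeholders matching this lab's machines):
```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: m621
      user: docker
    role: controller+worker   # each node runs both the control plane and workloads
    noTaints: true            # allow regular pods to be scheduled on these nodes
  # ...same for m622 and m623
```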
---
### And the famous one-liner
.lab[
```bash
k8s@shpod: ~$ k0sctl apply --config k0sctl.yaml
```
]
---
### Check that k0s installed correctly
.lab[
```bash
docker@m621 ~$ sudo k0s status
Version: v1.33.1+k0s.1
Process ID: 60183
Role: controller
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:
docker@m621 ~$ sudo k0s etcd member-list
{"members":{"m621":"https://10.10.3.190:2380","m622":"https://10.10.2.92:2380","m623":"https://10.10.2.110:2380"}}
```
]
---
### `kubectl` is included
.lab[
```bash
docker@m621 ~$ sudo k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
m621 Ready control-plane 66m v1.33.1+k0s
m622 Ready control-plane 66m v1.33.1+k0s
m623 Ready control-plane 66m v1.33.1+k0s
docker@m621 ~$ sudo k0s kubectl run shpod --image jpetazzo/shpod
```
]
---
class: extra-details
### Single node install (for info!)
For testing purposes, you may want to use a single-node (yet `etcd`-geared) install…
.lab[
- Install it
```bash
docker@m621 ~$ curl -sSLf https://get.k0s.sh | sudo sh
docker@m621 ~$ sudo k0s install controller --single
docker@m621 ~$ sudo k0s start
```
- Reset it
```bash
docker@m621 ~$ sudo k0s stop
docker@m621 ~$ sudo k0s reset
```
]
---
## Deploying shpod
.lab[
```bash
docker@m621 ~$ sudo k0s kubectl apply -f https://shpod.in/shpod.yaml
```
]
---
## Flux install
We'll install `Flux`.
And replay the whole scenario a second time.
Let's face it: we don't have that much time. 😅
Since all our installation and configuration is `GitOps`-based, we might just leverage copy-paste and code configuration…
Maybe.
Let's copy the 📂 `./clusters/CLOUDY` folder and rename it 📂 `./clusters/METAL`.
---
### Modifying Flux config 📄 files
- In 📄 file `./clusters/METAL/flux-system/gotk-sync.yaml`
</br>change the `Kustomization` value `spec.path: ./clusters/METAL`
- ⚠️ We'll have to adapt the `Flux` _CLI_ command line
- And that's pretty much it!
- We'll see if anything goes wrong on that new cluster
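For reference, the `Kustomization` in 📄 `./clusters/METAL/flux-system/gotk-sync.yaml` should end up looking roughly like this (a sketch; the interval and the other fields stay as generated by `flux bootstrap`):
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./clusters/METAL   # was ./clusters/CLOUDY in the copied folder
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```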
---
### Connecting to our dedicated `Github` repo to host Flux config
.lab[
- let's replace `GITHUB_TOKEN` and `GITHUB_REPO` values
- don't forget to change the path to `clusters/METAL`
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="container-training-fleet" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--team=OPS \
--team=ROCKY --team=MOVY \
--path=clusters/METAL
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Flux deployed our complete stack
Everything seems to be here but…
- one database is in `Pending` state
- our `ingresses` don't work well
```bash
k8s@shpod ~$ curl --header 'Host: rocky.test.mybestdomain.com' http://${myIngressControllerSvcIP}
curl: (52) Empty reply from server
```
---
### Fixing the Ingress
The current `ingress-nginx` configuration relies on specific annotations used by Scaleway to bind an _IaaS_ load-balancer to the `ingress-controller`.
We don't have that kind of thing here. 😕
- We could bind our `ingress-controller` to a `NodePort`.
`ingress-nginx` install manifests propose it here:
</br>https://github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/baremetal
- In the 📄file `./clusters/METAL/ingress-nginx/sync.yaml`,
</br>change the `Kustomization` value `spec.path: ./deploy/static/provider/baremetal`
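As a sketch (the name, interval and source depend on how the `ingress-nginx` source and `Kustomization` were created earlier), the `Kustomization` in 📄 `./clusters/METAL/ingress-nginx/sync.yaml` would then look like:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  interval: 10m0s
  path: ./deploy/static/provider/baremetal   # NodePort flavor of the install manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: ingress-nginx
```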
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Troubleshooting the database
One of our `db-0` pods is in the `Pending` state.
```bash
k8s@shpod ~$ k get pods db-0 -n *-test -oyaml
()
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-06-11T11:15:42Z"
message: '0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims.
preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
```
---
### Troubleshooting the PersistentVolumeClaims
```bash
k8s@shpod ~$ k get pvc postgresql-data-db-0 -n *-test -o yaml
()
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 9s (x182 over 45m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
```
No `storage class` is available on this cluster.
We didn't have this problem on our managed cluster, since a default storage class was configured and automatically associated with our `PersistentVolumeClaim`.
Why is there no problem with the other database?


@@ -0,0 +1,129 @@
# K03- Installing OpenEBS as our CSI
`OpenEBS` is a _CSI_ solution capable of hyperconvergence, synchronous replication and other extra features.
It installs with `Helm` charts.
- `Flux` is able to watch `Helm` repositories and install `HelmReleases`
- To inject its configuration into the `Helm` chart, `Flux` relies on a `ConfigMap` containing the `values.yaml` file
.lab[
```bash
k8s@shpod ~$ mkdir -p ./clusters/METAL/openebs/ && \
cp -pr ~/container.training/k8s/M6-openebs-*.yaml \
./clusters/METAL/openebs/ && \
cd ./clusters/METAL/openebs/ && \
mv M6-openebs-kustomization.yaml kustomization.yaml && \
cd -
```
]
---
## Creating a `Helm` source in Flux for the OpenEBS Helm chart
.lab[
```bash
k8s@shpod ~$ flux create source helm openebs \
--url=https://openebs.github.io/openebs \
--interval=3m \
--export > ./clusters/METAL/openebs/sync.yaml
```
]
---
## Creating the `HelmRelease` in Flux
.lab[
```bash
k8s@shpod ~$ flux create helmrelease openebs \
--namespace=openebs \
--source=HelmRepository/openebs.flux-system \
--chart=openebs \
--values-from=ConfigMap/openebs-values \
--export >> ./clusters/METAL/openebs/sync.yaml
```
]
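The exported 📄 `sync.yaml` should now contain a `HelmRelease` roughly like this one (a sketch; the exact `apiVersion` and interval depend on your `Flux` version):
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: openebs
  namespace: openebs
spec:
  interval: 5m
  chart:
    spec:
      chart: openebs
      sourceRef:
        kind: HelmRepository
        name: openebs
        namespace: flux-system
  valuesFrom:
  - kind: ConfigMap
    name: openebs-values   # generated by the configMapGenerator in kustomization.yaml
```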
---
## 📂 Let's review the files
- `M6-openebs-components.yaml`
</br>To include the `Flux` resources in the same _namespace_ where `Flux` installs the `OpenEBS` resources, we need to create the _namespace_ **before** the installation occurs
- `sync.yaml`
</br>The resources `Flux` uses to watch and get the `Helm chart`
- `M6-openebs-values.yaml`
</br> the `values.yaml` file that will be injected into the `Helm chart`
- `kustomization.yaml`
</br>This one is a bit special: it includes a [ConfigMap generator](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/)
- `M6-openebs-kustomizeconfig.yaml`
</br></br>This one is tricky: in order for `Flux` to trigger an upgrade of the `HelmRelease` when the `ConfigMap` is altered, you need to explain to the `Kustomize ConfigMap generator` how the resources relate to each other. 🤯
And here we go!
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
## And the result
Now, we have a _cluster_ featuring `openEBS`.
But still… The PersistentVolumeClaim remains in `Pending` state!😭
```bash
k8s@shpod ~$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-hostpath openebs.io/local Delete WaitForFirstConsumer false 82m
```
We still don't have a default `StorageClass`!😤
---
### Manually enforcing the default `StorageClass`
Even though Flux is constantly reconciling our resources, we are still able to test changes by hand.
.lab[
```bash
k8s@shpod ~$ flux suspend helmrelease openebs -n openebs
► suspending helmrelease openebs in openebs namespace
✔ helmrelease suspended
k8s@shpod ~$ kubectl patch storageclass openebs-hostpath \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
k8s@shpod ~$ k get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-hostpath (default) openebs.io/local Delete WaitForFirstConsumer false 82m
```
]
---
### Now the database is OK
```bash
k8s@shpod ~$ k get pvc,pods -n movy-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/postgresql-data-db-0 Bound pvc-ede1634f-2478-42cd-8ee3-7547cd7cdde2 1Gi RWO openebs-hostpath <unset> 20m
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 5h43m
()
```


@@ -0,0 +1,320 @@
# M01- Configuring **_🎬MOVY_** deployment with Flux
**_🎸ROCKY_** _tenant_ is now fully usable in **_⚗TEST_** env, let's do the same for another _dev_ team: **_🎬MOVY_**
😈 We could do it by using `Flux` _CLI_,
but let's see if we can succeed by just adding manifests in our `Flux` configuration repository.
---
class: pic
![Flux configuration waterfall](images/M6-flux-config-dependencies.png)
---
## Impact study
In our `Flux` configuration repository:
- Creation of the following 📂 folders: `./tenants/[base|test]/MOVY`
- Modification of the following 📄 file: `./clusters/CLOUDY/tenants.yaml`?
- Well, we don't need to: the watched path already includes the whole `./tenants/test/*` folder
In the app repository:
- Creation of a `movy` branch to deploy another version of the app dedicated to movie soundtracks
---
### Creation of the 📂 folders
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp -pr tenants/base/rocky tenants/base/movy
cp -pr tenants/test/rocky tenants/test/movy
```
]
---
### Modification of tenants/[base|test]/movy/* 📄 files
- For 📄`M6-rocky-*.yaml`, change the file names…
- and update the 📄`kustomization.yaml` file as a result
- In any file, replace any `rocky` entry by `movy`
- In 📄 `sync.yaml`, be aware of which repository and which branch you want `Flux` to watch for the **_🎬MOVY_** app deployment.
- for this demo, let's assume we create a `movy` branch
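One possible way to perform the renaming and substitution (a sketch; repeat it for `tenants/test/movy`):
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  cd tenants/base/movy && \
  for f in M6-rocky-*.yaml; do git mv "$f" "${f/rocky/movy}"; done && \
  sed -i 's/rocky/movy/g' *.yaml && \
  cd -
```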
---
class: extra-details
### What about reusing rocky-cluster-roles?
💡 In 📄`M6-movy-cluster-role.yaml` and 📄`rbac.yaml`, we could have reused the already existing `ClusterRoles`: `rocky-full-access`, and `rocky-pv-access`
A `ClusterRole` is cluster wide. It is not dedicated to a namespace.
- Its permissions are restrained to a specific namespace by being bound to a `ServiceAccount` by a `RoleBinding`.
- Whereas a `ClusterRoleBinding` extends the permissions to the whole cluster scope.
But a _tenant_ is a **_tenant_**, and permissions might evolve separately for **_🎸ROCKY_** and **_🎬MOVY_**.
So [we got to keep'em separated](https://www.youtube.com/watch?v=GHUql3OC_uU).
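For reference, here is how a cluster-wide `ClusterRole` gets scoped down to a single namespace (a hedged sketch with illustrative names, not one of the files shipped with this lab):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: movy-full-access
  namespace: movy-test        # the binding lives in the tenant namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: movy-full-access      # permissions defined once, at cluster scope
subjects:
- kind: ServiceAccount
  name: movy
  namespace: movy-test        # ...but granted only inside movy-test
```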
---
### Let-su-go!
The **_⚙OPS_** team pushes this new tenant configuration to `Github` for the `Flux` controllers to watch and catch it!
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add . && \
git commit -m':wrench: :construction_worker: add MOVY tenant configuration' && \
git push
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
class: extra-details
### Another Flux error?
.lab[
- It seems that our `movy` branch is not present in the app repository
```bash
k8s@shpod:~$ flux get kustomization -A
NAMESPACE NAME REVISION SUSPENDED MESSAGE
()
flux-system tenant-prod False False kustomization path not found: stat /tmp/kustomization-113582828/tenants/prod: no such file or directory
()
movy-test movy False False Source artifact not found, retrying in 30s
```
]
---
### Creating the `movy` branch
- Let's create this new `movy` branch from `rocky` branch
.lab[
- You can force immediate reconciliation by typing this command:
```bash
k8s@shpod:~$ flux reconcile source git movy-app -n movy-test
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### New branch detected
You now have a second app responding at http://movy.test.mybestdomain.com
But as of now, it's just the same as the **_🎸ROCKY_** one.
We want a specific (pink-colored) version with a dataset full of movie soundtracks.
---
## New version of the **_🎬MOVY_** app
In our branch `movy`
Let's make 2 modifications to our `deployment.yaml` file.
- in `spec.template.spec.containers.image` change the container image tag to `1.0.3`
- and… let's introduce some evil entropy by changing this line… 😈😈😈
```yaml
value: jdbc:postgresql://db/music
```
by this one
```yaml
value: jdbc:postgresql://db.rocky-test/music
```
And push the modifications…
---
class: pic
![MOVY app has an incorrect dataset](images/M6-incorrect-dataset-in-MOVY-app.png)
---
class: pic
![ROCKY app has an incorrect dataset](images/M6-incorrect-dataset-in-ROCKY-app.png)
---
### MOVY app is connected to ROCKY database
How evil have we been! 😈
We connected the **_🎬MOVY_** app to the **_🎸ROCKY_** database.
Even if our tenants are isolated in how they manage their Kubernetes resources…
the pod network is still a full mesh and any connection is allowed.
> The **_⚙OPS_** team should fix this!
---
class: extra-details
## Adding NetworkPolicies to **_🎸ROCKY_** and **_🎬MOVY_** namespaces
`Network policies` may be seen as the firewall feature of the pod network.
They govern ingress and egress network connections for a described subset of pods.
Please, refer to the [`Network policies` chapter in the High Five M4 module](./4.yml.html#toc-network-policies)
- In our case, we just add the file `~/container.training/k8s/M6-network-policies.yaml`
</br>in our `./tenants/base/movy` folder
- without forgetting to update our `kustomization.yaml` file
- and without forgetting to commit 😁
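The updated 📄 `./tenants/base/movy/kustomization.yaml` would then look roughly like this (a sketch; the exact resource list depends on the files present in the folder):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- rbac.yaml
- sync.yaml
- M6-movy-cluster-role.yaml
- M6-network-policies.yaml
```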
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT
</pre>


@@ -0,0 +1,417 @@
# R01- Configuring **_🎸ROCKY_** deployment with Flux
The **_⚙OPS_** team manages 2 distinct envs: **_⚗TEST_** and **_🏭PROD_**
Thanks to _Kustomize_
1. it creates a **_base_** common config
2. this common config is overlaid with a **_⚗TEST_** _tenant_-specific configuration
3. the same applies with a **_🏭PROD_**-specific configuration
> 💡 This seems complex, but no worries: Flux's CLI handles most of it.
---
## Creating the **_🎸ROCKY_**-dedicated _tenant_ in **_⚗TEST_** env
- Using the `flux` _CLI_, we create the file configuring the **_🎸ROCKY_** team's dedicated _tenant_
- … this file lives in the `base` configuration common to both envs
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p ./tenants/base/rocky && \
flux create tenant rocky \
--with-namespace=rocky-test \
--cluster-role=rocky-full-access \
--export > ./tenants/base/rocky/rbac.yaml
```
]
---
class: extra-details
### 📂 ./tenants/base/rocky/rbac.yaml
Let's see our file…
3 resources are created: `Namespace`, `ServiceAccount`, and `ClusterRoleBinding`
`Flux` **impersonates** this `ServiceAccount` when it applies any resource found in the _tenant_-dedicated source(s)
- By default, the `ServiceAccount` is bound to the `cluster-admin` `ClusterRole`
- The team maintaining the sourced `Github` repository is almighty at cluster scope
Not that isolated a _tenant_! 😕
That's why the **_⚙OPS_** team enforces specific `ClusterRoles` with restricted permissions
Let's create these permissions!
---
## _namespace_ isolation for **_🎸ROCKY_**
.lab[
- Here are the restricted permissions to use in the `rocky-test` `Namespace`
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp ~/container.training/k8s/M6-rocky-cluster-role.yaml ./tenants/base/rocky/
```
]
> 💡 Note that some resources are managed at cluster scope (like `PersistentVolumes`).
> We therefore need specific permissions for those…
---
## Creating `Github` source in Flux for **_🎸ROCKY_** app repository
A specific _branch_ of the `Github` repository is monitored by the `Flux` source
.lab[
- ⚠️ you may change the **repository URL** to the one of your own clone
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create source git rocky-app \
--namespace=rocky-test \
--url=https://github.com/Musk8teers/container.training-spring-music/ \
--branch=rocky --export > ./tenants/base/rocky/sync.yaml
```
]
---
## Creating `kustomization` in Flux for **_🎸ROCKY_** app repository
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization rocky \
--namespace=rocky-test \
--service-account=rocky \
--source=GitRepository/rocky-app \
--path="./k8s/" --export >> ./tenants/base/rocky/sync.yaml
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cd ./tenants/base/rocky/ && \
kustomize create --autodetect && \
cd -
```
]
---
class: extra-details
### 📂 Flux config files
Let's review our `Flux` configuration files
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cat ./tenants/base/rocky/sync.yaml && \
cat ./tenants/base/rocky/kustomization.yaml
```
]
---
## Adding a kustomize patch for **_⚗TEST_** cluster deployment
💡 Remember the DRY strategy!
- The `Flux` tenant-dedicated configuration is looking for this file: `./tenants/test/rocky/kustomization.yaml`
- It has been configured here: `clusters/CLOUDY/tenants.yaml`
- All the files we just created are located in `./tenants/base/rocky`
- So we have to create a specific kustomization in the right location
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p ./tenants/test/rocky && \
cp ~/container.training/k8s/M6-rocky-test-patch.yaml ./tenants/test/rocky/ && \
cp ~/container.training/k8s/M6-rocky-test-kustomization.yaml ./tenants/test/rocky/kustomization.yaml
```
---
### Synchronizing Flux config with its Github repo
Locally, our `Flux` config repo is ready
The **_⚙OPS_** team has to push it to `Github` for `Flux` controllers to watch and catch it!
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add . && \
git commit -m':wrench: :construction_worker: add ROCKY tenant configuration' && \
git push
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
class: pic
![rocky config files](images/M6-R01-config-files.png)
---
class: extra-details
### Flux resources for ROCKY tenant 1/2
.lab[
```bash
k8s@shpod:~$ flux get all -A
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system gitrepository/flux-system main@sha1:8ffd72cf False
True stored artifact for revision 'main@sha1:8ffd72cf'
rocky-test gitrepository/rocky-app rocky@sha1:ffe9f3fe False
True stored artifact for revision 'rocky@sha1:ffe9f3fe'
()
```
]
---
class: extra-details
### Flux resources for ROCKY _tenant_ 2/2
.lab[
```bash
k8s@shpod:~$ flux get all -A
()
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system kustomization/flux-system main@sha1:8ffd72cf False
True Applied revision: main@sha1:8ffd72cf
flux-system kustomization/tenant-prod False
False kustomization path not found: stat /tmp/kustomization-1164119282/tenants/prod: no such file or directory
flux-system kustomization/tenant-test main@sha1:8ffd72cf False
True Applied revision: main@sha1:8ffd72cf
rocky-test kustomization/rocky False
False StatefulSet/db dry-run failed (Forbidden): statefulsets.apps "db" is forbidden: User "system:serviceaccount:rocky-test:rocky" cannot patch resource "statefulsets" in API group "apps" at the cluster scope
```
]
And here is our 2nd Flux error! 😅
---
class: extra-details
### Flux Kustomization, mutability, …
🔍 Notice that none of the expected resources is created:
the whole kustomization is rejected, even though the `StatefulSet` is the only resource that fails!
🔍 Flux Kustomization uses the dry-run feature to template the resources and then apply patches onto them
Fine, but some resources are not completely mutable, such as `StatefulSets`
We have to avoid that mutation by applying the change without having to patch the resource.
🔍 Simply add `spec.targetNamespace: rocky-test` to the `Kustomization` named `rocky`
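Abridged, the fixed `Flux` `Kustomization` in 📄 `./tenants/base/rocky/sync.yaml` looks like this (a sketch; the `apiVersion` depends on your `Flux` version, and the interval, prune and other exported fields are left untouched):
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: rocky
  namespace: rocky-test
spec:
  targetNamespace: rocky-test   # the added line
  serviceAccountName: rocky
  path: ./k8s/
  sourceRef:
    kind: GitRepository
    name: rocky-app
```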
---
class: extra-details
## And then it's deployed 1/2
You should see the following resources in the `rocky-test` namespace
.lab[
```bash
k8s@shpod-578d64468-tp7r2 ~/$ k get pods,svc,deployments -n rocky-test
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 47s
pod/web-6c677bf97f-c7pkv 0/1 Running 1 (22s ago) 47s
pod/web-6c677bf97f-p7b4r 0/1 Running 1 (19s ago) 47s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/db ClusterIP 10.32.6.128 <none> 5432/TCP 48s
service/web ClusterIP 10.32.2.202 <none> 80/TCP 48s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 0/2 2 0 47s
```
]
---
class: extra-details
## And then it's deployed 2/2
You should see the following resources in the `rocky-test` namespace
.lab[
```bash
k8s@shpod-578d64468-tp7r2 ~/$ k get statefulsets,pvc,pv -n rocky-test
NAME READY AGE
statefulset.apps/db 1/1 47s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/postgresql-data-db-0 Bound pvc-c1963a2b-4fc9-4c74-9c5a-b0870b23e59a 1Gi RWO sbs-default <unset> 47s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/postgresql-data 1Gi RWO,RWX Retain Available <unset> 47s
persistentvolume/pvc-150fcef5-ebba-458e-951f-68a7e214c635 1G RWO Delete Bound shpod/shpod sbs-default <unset> 4h46m
persistentvolume/pvc-c1963a2b-4fc9-4c74-9c5a-b0870b23e59a 1Gi RWO Delete Bound rocky-test/postgresql-data-db-0 sbs-default <unset> 47s
```
]
---
class: extra-details
### PersistentVolumes are using a default `StorageClass`
💡 This managed cluster comes with custom `StorageClasses` leveraging cloud _IaaS_ capabilities (i.e. block devices)
![Flux configuration waterfall](images/M6-persistentvolumes.png)
- a default `StorageClass` is applied if none is specified (like here)
- for **_🏭PROD_** purposes, the ops team might enforce a more performant `StorageClass`
- on a bare-metal cluster, the **_⚙OPS_** team has to configure and provide `StorageClasses` on its own
---
class: pic
![Flux configuration waterfall](images/M6-flux-config-dependencies.png)
---
## Upgrading ROCKY app
The Git source named `rocky-app` is pointing at
- a Github repository named [Musk8teers/container.training-spring-music](https://github.com/Musk8teers/container.training-spring-music/)
- on its branch named `rocky`
This branch deploys v1.0.0 of the _Web_ app:
`spec.template.spec.containers.image: ghcr.io/musk8teers/container.training-spring-music:1.0.0`
What happens if the **_🎸ROCKY_** team upgrades its branch to deploy `v1.0.1` of the _Web_ app?
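Sketched below, such an upgrade boils down to a one-line change in the `rocky` branch's `deployment.yaml` (the container name is an assumption here):
```yaml
spec:
  template:
    spec:
      containers:
      - name: web
        image: ghcr.io/musk8teers/container.training-spring-music:1.0.1   # was 1.0.0
```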
---
## _tenant_ **_🏭PROD_**
💡 The **_🏭PROD_** _tenant_ is still waiting for its `Flux` configuration, but don't worry about it right now.
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT
</pre>


@@ -0,0 +1,354 @@
# Kubernetes in production — <br/>an end-to-end example
- Previous training modules focused on individual topics
(e.g. RBAC, network policies, CRDs, Helm...)
- We will now show how to put everything together to deploy apps in production
(dealing with typical challenges like: multiple apps, multiple teams, multiple clusters...)
- Our first challenge will be to pick and choose which components to use
(among the vast [Cloud Native Landscape](https://landscape.cncf.io/))
- We'll start with a basic Kubernetes cluster (on cloud or on premises)
- We'll then enhance it by adding features one at a time
---
## The cast
There are 3 teams in our company:
- **_⚙OPS_** is the platform engineering team
- they're responsible for building and configuring Kubernetes clusters
- the **_🎸ROCKY_** team develops and manages the **_🎸ROCKY_** app
- that app manages a collection of _rock & pop_ albums
- it's deployed with plain YAML manifests
- the **_🎬MOVY_** team develops and manages the **_🎬MOVY_** app
- that app manages a collection of _movie soundtrack_ albums
- it's deployed with Helm charts
---
## Code and team organization
- **_🎸ROCKY_** and **_🎬MOVY_** reside in separate git repositories
- Each team can write code, build packages, and deploy their applications:
- independently
<br/>(= without having to worry about what's happening in the other repo)
- autonomously
<br/>(= without having to synchronize or obtain privileges from another team)
---
## Cluster organization
The **_⚙OPS_** team manages 2 Kubernetes clusters:
- **_☁CLOUDY_**: managed cluster from a public cloud provider
- **_🤘METAL_**: custom-built cluster installed on bare Linux servers
Let's see the differences between these clusters.
---
## **_☁CLOUDY_** cluster
- Managed cluster from a public cloud provider ("Kubernetes-as-a-Service")
- HA control plane deployed and managed by the cloud provider
- Two worker nodes (potentially with cluster autoscaling)
- Usually comes pre-installed with some basic features
(e.g. metrics-server, CNI, CSI, sometimes an ingress controller)
- Requires extra components to be production-ready
(e.g. Flux or other gitops pipeline, observability...)
- Example: [Scaleway Kapsule][kapsule] (but many other KaaS options are available)
[kapsule]: https://www.scaleway.com/en/kubernetes-kapsule/
---
## **_🤘METAL_** cluster
- Custom-built cluster installed on bare Linux servers
- HA control plane deployed and managed by the **_⚙OPS_** team
- 3 nodes
- in our example, the nodes will run both the control plane and our apps
- it is more typical to use dedicated control plane nodes
<br/>(example: 3 control plane nodes + at least 3 worker nodes)
- Comes with even less pre-installed components than **_☁CLOUDY_**
(requiring more work from our **_⚙OPS_** team)
- Example: we'll use [k0s] (but many other distros are available)
[k0s]: https://k0sproject.io/
---
## **_⚗TEST_** and **_🏭PROD_**
- The **_⚙OPS_** team creates 2 environments for each dev team
(**_⚗TEST_** and **_🏭PROD_**)
- These environments exist on both clusters
(meaning 2 apps × 2 clusters × 2 envs = 8 envs total)
- The setup for each env and cluster should follow DRY principles
(to ensure configurations are consistent and minimize maintenance)
- Each cluster and each env has its own lifecycle
(= it should be possible to deploy, or add extra components/features...
<br/>on one env without impacting the other)
---
### Multi-tenancy
Both **_🎸ROCKY_** and **_🎬MOVY_** teams should use **dedicated _"tenants"_** on each cluster/env
- the **_🎸ROCKY_** team should be able to deploy, upgrade and configure its app within its dedicated **namespace** without anybody else involved
- and the same for **_🎬MOVY_**
- neither team's deployments may interfere with the other's, maintaining a clean and conflict-free environment
---
## Application overview
- Both dev teams are working on an app to manage music albums
- This app is mostly based on a `Spring` framework demo called spring-music
- This lab uses a dedicated fork [container.training-spring-music](https://github.com/Musk8teers/container.training-spring-music):
- with 2 branches dedicated to the **_🎸ROCKY_** and **_🎬MOVY_** teams
- The app architecture consists of 2 tiers:
- a `Java/Spring` Web app
- a `PostgreSQL` database
---
### 📂 specific file: application.yaml
This is where we configure the application to connect to the `PostgreSQL` database.
.lab[
🔍 Location: [/src/main/resources/application.yml](https://github.com/Musk8teers/container.training-spring-music/blob/main/src/main/resources/application.yml)
]
`PROFILE=postgres` env var is set in [docker-compose.yaml](https://github.com/Musk8teers/container.training-spring-music/blob/main/docker-compose.yml) file, for example…
---
### 📂 specific file: AlbumRepositoryPopulator.java
This is where the album collection is initially loaded from the file [`albums.json`](https://github.com/Musk8teers/container.training-spring-music/blob/main/src/main/resources/albums.json)
.lab[
🔍 Location: [`/src/main/java/org/cloudfoundry/samples/music/repositories/AlbumRepositoryPopulator.java`](https://github.com/Musk8teers/container.training-spring-music/blob/main/src/main/java/org/cloudfoundry/samples/music/repositories/AlbumRepositoryPopulator.java)
]
---
## 🚚 How to deploy?
The **_⚙OPS_** team offers 2 deployment strategies that dev teams can use autonomously:
- **_🎸ROCKY_** uses a `Flux` _GitOps_ workflow based on regular Kubernetes `YAML` resources
- **_🎬MOVY_** uses a `Flux` _GitOps_ workflow based on `Helm` charts
---
## 🍱 What features?
<!-- TODO: complete this slide when all the modules are there -->
The **_⚙OPS_** team aims to provide clusters offering the following features to its users:
- a network stack with efficient workload isolation
- ingress and load-balancing capabilities
- an enterprise-grade monitoring solution for real-time insights
- automated policy rule enforcement to control Kubernetes resources requested by dev teams
<!-- - HA PostgreSQL -->
<!-- - HTTPs certificates to expose the applications -->
---
## 🌰 In a nutshell
- 3 teams: **_⚙OPS_**, **_🎸ROCKY_**, **_🎬MOVY_**
- 2 clusters: **_☁CLOUDY_**, **_🤘METAL_**
- 2 envs per cluster and per dev team: **_⚗TEST_**, **_🏭PROD_**
- 2 Java/Spring + PostgreSQL web apps: one for pop and rock albums, another for movie soundtrack albums
- 2 deployment strategies: regular `YAML` resources + `Kustomize`, `Helm` charts
> 💻 `Flux` is used both
> - to operate the clusters
> - and to manage the _GitOps_ deployment workflows
---
### What our scenario might look like…
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:4
branch MOVY order:5
branch YouRHere order:6
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT tag:'T07'
checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for PROD tenant' tag:'P01'
branch PROD-env order:2
commit id:'ROCKY tenant on PROD'
checkout OPS
commit id:'ROCKY patch for PROD' tag:'R04'
checkout PROD-env
merge OPS id:'PROD ready to deploy ROCKY' type: HIGHLIGHT
checkout PROD-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout MOVY
commit id:'MOVY HELM chart' tag:'M03'
checkout TEST-env
merge MOVY tag:'MOVY v1.0'
</pre>


@@ -0,0 +1,410 @@
# T02- creating **_⚗TEST_** env on our **_☁CLOUDY_** cluster
Let's take a look at our **_☁CLOUDY_** cluster!
**_☁CLOUDY_** is a Kubernetes cluster created with [Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) managed service
This managed cluster comes preinstalled with specific features:
- Kubernetes dashboard
- specific _Storage Classes_ based on Scaleway _IaaS_ block storage offerings
- a `Cilium` _CNI_ stack already set up
---
## Accessing the managed Kubernetes cluster
To access our cluster, we'll connect via [`shpod`](https://github.com/jpetazzo/shpod)
.lab[
- If you already have a kubectl on your desktop computer
```bash
kubectl -n shpod run shpod --image=jpetazzo/shpod
kubectl -n shpod exec -it shpod -- bash
```
- or directly via ssh
```bash
ssh -p myPort k8s@mySHPODSvcIpAddress
```
]
---
## Flux installation
Once `Flux` is installed,
the **_⚙OPS_** team exclusively operates its clusters by updating a code base in a `Github` repository
_GitOps_ and `Flux` enable the **_⚙OPS_** team to rely on the reconciliation pattern that is a _first-class citizen_ in the Kubernetes world:
- describe the **desired target state**
- and let the **automated convergence** happen
---
### Checking prerequisites
The `Flux` _CLI_ is available in our `shpod` pod
Before installation, we need to check that:
- `Flux` _CLI_ is correctly installed
- it can connect to the `API server`
- our versions of `Flux` and Kubernetes are compatible
.lab[
```bash
k8s@shpod:~$ flux --version
flux version 2.5.1
k8s@shpod:~$ flux check --pre
► checking prerequisites
✔ Kubernetes 1.32.3 >=1.30.0-0
✔ prerequisites checks passed
```
]
---
### Git repository for Flux configuration
The **_⚙OPS_** team uses `Flux` _CLI_
- to create a `git` repository named `fleet-config-using-flux-XXXXX` (⚠ replace `XXXXX` with a personal suffix)
- in our `Github` organization named `container-training-fleet`
Prerequisites are:
- `Flux` _CLI_ needs a `Github` personal access token (_PAT_)
- to create and/or access the `Github` repository
- to give permissions to existing teams in our `Github` organization
- The PAT needs _CRUD_ permissions on our `Github` organization
- repositories
- admin:public_key
- users
- As the **_⚙OPS_** team, let's create a `Github` personal access token…
---
class: pic
![Generating a Github personal access token](images/M6-github-add-token.jpg)
---
### Creating dedicated `Github` repo to host Flux config
.lab[
- let's replace the `GITHUB_TOKEN` value by our _Personal Access Token_
- and the `GITHUB_REPO` value by our specific repository name
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="container-training-fleet" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--team=OPS \
--team=ROCKY --team=MOVY \
--path=clusters/CLOUDY
```
]
---
class: extra-details
Here is the result
```bash
✔ repository "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX" created
► reconciling repository permissions
✔ granted "maintain" permissions to "OPS"
✔ granted "maintain" permissions to "ROCKY"
✔ granted "maintain" permissions to "MOVY"
► reconciling repository permissions
✔ reconciled repository permissions
► cloning branch "main" from Git repository "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ committed component manifests to "main" ("7c97bdeb5b932040fd8d8a65fe1dc84c66664cbf")
► pushing component manifests to "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
✔ component manifests are up to date
► installing components in "flux-system" namespace
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
► generating source secret
✔ public key: ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFqaT8B8SezU92qoE+bhnv9xONv9oIGuy7yVAznAZfyoWWEVkgP2dYDye5lMbgl6MorG/yjfkyo75ETieAE49/m9D2xvL4esnSx9zsOLdnfS9W99XSfFpC2n6soL+Exodw==
✔ configured deploy key "flux-system-main-flux-system-./clusters/CLOUDY" for "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX"
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
► generating sync manifests
✔ generated sync manifests
✔ committed sync manifests to "main" ("11035e19cabd9fd2c7c94f6e93707f22d69a5ff2")
► pushing sync manifests to "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for GitRepository "flux-system/flux-system" to be reconciled
✔ GitRepository reconciled successfully
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
```
---
### Flux configures Github repository access for teams
- `Flux` sets up permissions that allow teams within our organization to **access** the `Github` repository as maintainers
- Teams need to exist before `Flux` proceeds with this configuration
![Teams in Github](images/M6-github-teams.png)
---
### ⚠️ Disclaimer
- In this lab, adding these teams as maintainers was merely a demonstration of how `Flux` _CLI_ sets up permissions in Github
- But there is no need for dev teams to have access to this `Github` repository
- One advantage of _GitOps_ lies in its ability to easily set up 💪🏼 **Separation of concerns** by using multiple `Flux` sources
---
### 📂 Flux config files
`Flux` has been successfully installed onto our **_☁CLOUDY_** Kubernetes cluster!
Its configuration is managed through a _Gitops_ workflow sourced directly from our `Github` repository
Let's review the `Flux` configuration files we've created and pushed into the `Github` repository…
… as well as the corresponding components running in our Kubernetes cluster
![Flux config files](images/M6-flux-config-files.png)
---
class: pic
<!-- FIXME: wrong schema -->
![Flux architecture](images/M6-flux-controllers.png)
---
class: extra-details
### Flux resources 1/2
.lab[
```bash
k8s@shpod:~$ kubectl get all --namespace flux-system
NAME READY STATUS RESTARTS AGE
pod/helm-controller-b6767d66-h6qhk 1/1 Running 0 5m
pod/kustomize-controller-57c7ff5596-94rnd 1/1 Running 0 5m
pod/notification-controller-58ffd586f7-zxfvk 1/1 Running 0 5m
pod/source-controller-6ff87cb475-g6gn6 1/1 Running 0 5m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/notification-controller ClusterIP 10.104.139.156 <none> 80/TCP 5m1s
service/source-controller ClusterIP 10.106.120.137 <none> 80/TCP 5m
service/webhook-receiver ClusterIP 10.96.28.236 <none> 80/TCP 5m
()
```
]
---
class: extra-details
### Flux resources 2/2
.lab[
```bash
k8s@shpod:~$ kubectl get all --namespace flux-system
()
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/helm-controller 1/1 1 1 5m
deployment.apps/kustomize-controller 1/1 1 1 5m
deployment.apps/notification-controller 1/1 1 1 5m
deployment.apps/source-controller 1/1 1 1 5m
NAME DESIRED CURRENT READY AGE
replicaset.apps/helm-controller-b6767d66 1 1 1 5m
replicaset.apps/kustomize-controller-57c7ff5596 1 1 1 5m
replicaset.apps/notification-controller-58ffd586f7 1 1 1 5m
replicaset.apps/source-controller-6ff87cb475 1 1 1 5m
```
]
---
### Flux components
- the `source controller` monitors `Git` repositories so that their Kubernetes resources can be applied on the cluster
- the `Helm controller` checks for new `Helm` _chart_ releases in `Helm` repositories and installs updates as needed
- _CRDs_ store `Flux` configuration within the Kubernetes control plane
---
class: extra-details
### Flux resources that have been created
.lab[
```bash
k8s@shpod:~$ flux get all --all-namespaces
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system gitrepository/flux-system main@sha1:d48291a8 False
True stored artifact for revision 'main@sha1:d48291a8'
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system kustomization/flux-system main@sha1:d48291a8 False
True Applied revision: main@sha1:d48291a8
```
]
---
### Flux CLI
The `Flux` command-line interface fulfills 3 primary functions:
1. It installs and configures the first mandatory `Flux` resources in a _GitOps_ `git` repository
- ensuring proper access and permissions
2. It locally generates `YAML` files for desired `Flux` resources so that we just need to `git push` them
- _tenants_
- sources
- _kustomizations_
3. It requests the API server to manage `Flux`-related resources
- _operators_
- _CRDs_
- logs
---
class: extra-details
### Flux -- for more info
Please, refer to the [`Flux` chapter in the High Five M3 module](./3.yml.html#toc-helm-chart-format)
---
### Flux relies on Kustomize
The `Flux` component named `kustomize-controller` looks for `Kustomize` resources in `Flux` code-based sources:
1. `Kustomize` looks for the `YAML` manifests listed in the `kustomization.yaml` file
2. and aggregates, hydrates and patches them following the `kustomization` configuration (see the minimal example below)
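Here is a minimal `kustomization.yaml` sketch (the listed file names are purely illustrative):
```yaml
# Minimal kustomization.yaml: list the manifests that Kustomize will aggregate
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
```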
---
class: extra-details
### 2 different kustomization resources
⚠️ `Flux` manipulates 2 distinct resource kinds named `Kustomization`, distinguished by their API group
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
```
describes how Kustomize (the _CLI_ tool) aggregates and transforms `YAML` manifests into a single set of resources
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
```
describes where the `Flux` `kustomize-controller` looks for a `kustomization.yaml` file in a given `Flux` code-based source
---
class: extra-details
### Kustomize -- for more info
Please, refer to the [`Kustomize` chapter in the High Five M3 module](./3.yml.html#toc-kustomize)
---
class: extra-details
### Group / Version / Kind -- for more info
For more info about how Kubernetes resource types are identified by their `Group / Version / Kind` triplet…
… please, refer to the [`Kubernetes API` chapter in the High Five M5 module](./5.yml.html#toc-the-kubernetes-api)
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
</pre>

---
# Multi-tenants management with Flux
💡 Thanks to `Flux`, we can manage Kubernetes resources from inside the clusters.
The **_⚙OPS_** team uses `Flux` with a _GitOps_ code base to:
- configure the clusters
- deploy tools and components to extend the clusters' capabilities
- configure _GitOps_ workflow for dev teams in **dedicated and isolated _tenants_**
The **_🎸ROCKY_** team uses `Flux` to deploy every new release of its app, by detecting every new `git push` event happening in its app `Github` repository
The **_🎬MOVY_** team uses `Flux` to deploy every new release of its app, packaged and published in a new `Helm` chart release
---
## Creating _tenants_ with Flux
While the basic `Flux` behavior is to use a single configuration directory applied with a cluster-wide role…
… it can also enable _multi-tenant_ configuration by:
- creating dedicated directories for each _tenant_ in its configuration code base
- and using a dedicated `ServiceAccount` with limited permissions to operate in each _tenant_ (see the sketch below)
Several _tenants_ are created
- per env
- for **_⚗TEST_**
- and **_🏭PROD_**
- per team
- for **_🎸ROCKY_**
- and **_🎬MOVY_**
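Here is a sketch of what such a per-_tenant_ `Kustomization` can look like (the `ServiceAccount` name is an assumption; the other values match the commands used later in this chapter):
```yaml
# Sketch of a tenant-scoped Flux Kustomization:
# spec.serviceAccountName restricts what Flux may apply for this tenant
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenant-test
  namespace: flux-system
spec:
  interval: 1m
  path: ./tenants/test
  prune: true
  serviceAccountName: tenant-test   # assumption: a dedicated, namespace-limited ServiceAccount
  sourceRef:
    kind: GitRepository
    name: flux-system
```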
---
class: pic
![Multi-tenants clusters](images/M6-cluster-multi-tenants.png )
---
### Flux CLI works locally
First, we have to **locally** clone our `Flux` configuration `Github` repository:
- create an ssh key pair
- add the **public** key to your `Github` repository (**with write access**)
- and `git clone` the repository (as sketched below)
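A minimal sketch of these steps (the key type and repository name are assumptions; the deploy key needs **write access**):
```bash
k8s@shpod:~$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# …add ~/.ssh/id_ed25519.pub as a deploy key (with write access) on the Github repository…
k8s@shpod:~$ git clone git@github.com:container-training-fleet/fleet-config-using-flux-XXXXX.git
```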
---
### The command line 1/2
Creating the **_⚗TEST_** _tenant_
.lab[
- ⚠️ Think about renaming the repo with your own suffix
```bash
k8s@shpod:~$ cd fleet-config-using-flux-XXXXX/
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
flux create kustomization tenant-test \
--namespace=flux-system \
--source=GitRepository/flux-system \
--path ./tenants/test \
--interval=1m \
--prune --export >> clusters/CLOUDY/tenants.yaml
```
]
---
### The command line 2/2
Then we create the **_🏭PROD_** _tenant_
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
flux create kustomization tenant-prod \
--namespace=flux-system \
--source=GitRepository/flux-system \
--path ./tenants/prod \
--interval=3m \
--prune --export >> clusters/CLOUDY/tenants.yaml
```
]
---
### 📂 Flux tenants.yaml files
Let's review the `fleet-config-using-flux-XXXXX/clusters/CLOUDY/tenants.yaml` file
⚠️ The `flux create` commands we just typed only create the `YAML` manifests **locally**
> ☝🏻 Don't forget to `git commit` and `git push` to `Github`!
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Our 1st Flux error
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux get all
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system gitrepository/flux-system main@sha1:0466652e False
True stored artifact for revision 'main@sha1:0466652e'
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
kustomization/flux-system main@sha1:0466652e False True
Applied revision: main@sha1:0466652e
kustomization/tenant-prod False False
kustomization path not found: stat /tmp/kustomization-417981261/tenants/prod: no such file or directory
kustomization/tenant-test False False
kustomization path not found: stat /tmp/kustomization-2532810750/tenants/test: no such file or directory
```
]
> Our configuration may be incomplete 😅
---
## Configuring Flux for the **_🎸ROCKY_** team
What the **_⚙OPS_** team has to do:
- 🔧 Create a dedicated `rocky` _tenant_ for **_⚗TEST_** and **_🏭PROD_** envs on the cluster
- 🔧 Create the `Flux` source pointing to the `Github` repository embedding the **_🎸ROCKY_** app source code (sketched below)
- 🔧 Add a `kustomize` _patch_ to the global `Flux` config to include the `Flux` config dedicated to deploying the **_🎸ROCKY_** app
What the **_🎸ROCKY_** team has to do:
- 👨‍💻 Create the `kustomization.yaml` file in the **_🎸ROCKY_** app source code repository on `Github`
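For instance, the `Flux` source for the **_🎸ROCKY_** app repository could be generated like this (namespace, branch and output path are illustrative assumptions):
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  flux create source git rocky \
  --namespace=flux-system \
  --url=https://github.com/Musk8teers/container.training-spring-music \
  --branch=main \
  --export >> ./tenants/base/rocky/sync.yaml
```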
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
</pre>

---
# T05 - Configuring ingress for the **_🎸ROCKY_** app
🍾 The **_🎸ROCKY_** team has just deployed its `v1.0.0`
We would like to reach it from our workstations
The regular way to do it in Kubernetes is to configure an `Ingress` resource (a minimal example is sketched below).
- `Ingress` is an abstract resource that manages how services are exposed outside of the Kubernetes cluster (Layer 7).
- It relies on `ingress-controller`(s): the concrete components that implement the ingress rules.
- Available features vary, depending on the `ingress-controller`: load-balancing, networking, firewalling, API management, throttling, TLS encryption, etc.
- An `ingress-controller` may provision Cloud _IaaS_ network resources such as load-balancers, persistent IPs, etc.
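Here is a minimal `Ingress` sketch for the **_🎸ROCKY_** app (the backend service name, port and ingress class are assumptions; the host and namespace match the ones used later in this chapter):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rocky
  namespace: rocky-test
spec:
  ingressClassName: nginx          # assumption: ingress-nginx is the controller in use
  rules:
    - host: rocky.test.mybestdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rocky        # assumption: the Service exposing the ROCKY app
                port:
                  number: 80
```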
---
class: extra-details
## Ingress -- for more info
Please, refer to the [`Ingress` chapter in the High Five M2 module](./2.yml.html#toc-exposing-http-services-with-ingress-resources)
---
## Installing `ingress-nginx` as our `ingress-controller`
We'll use `ingress-nginx` (based on `NGINX`), quite a popular choice.
- It is able to provision an _IaaS_ load-balancer in Scaleway Cloud services
- As a reverse-proxy, it is able to balance HTTP connections on an on-premises cluster
The **_⚙OPS_** team adds this new install to its `Flux` config repo
---
### Creating a `Github` source in Flux for `ingress-nginx`
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p ./clusters/CLOUDY/ingress-nginx && \
flux create source git ingress-nginx \
--namespace=ingress-nginx \
--url=https://github.com/kubernetes/ingress-nginx/ \
--branch=release-1.12 \
--export > ./clusters/CLOUDY/ingress-nginx/sync.yaml
```
]
---
### Creating `kustomization` in Flux for `ingress-nginx`
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization ingress-nginx \
--namespace=ingress-nginx \
--source=GitRepository/ingress-nginx \
--path="./deploy/static/provider/scw/" \
--export >> ./clusters/CLOUDY/ingress-nginx/sync.yaml
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp -p ~/container.training/k8s/M6-ingress-nginx-kustomization.yaml \
./clusters/CLOUDY/ingress-nginx/kustomization.yaml && \
cp -p ~/container.training/k8s/M6-ingress-nginx-components.yaml \
~/container.training/k8s/M6-ingress-nginx-*-patch.yaml \
./clusters/CLOUDY/ingress-nginx/
```
]
---
### Applying the new config
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add ./clusters/CLOUDY/ingress-nginx && \
git commit -m':wrench: :rocket: add Ingress-controller' && \
git push
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
class: pic
![Ingress-nginx provisionned a IaaS load-balancer in Scaleway Cloud services](images/M6-ingress-nginx-scaleway-lb.png)
---
class: extra-details
### Using external Git source
💡 Note that you can directly use a public `Github` repository (not maintained by your company).
- If you have to alter the configuration, `Kustomize` patching capabilities might help.
- Depending on the _gitflow_ this repository uses, updates will be deployed automatically to your cluster (here we're using a `release` branch).
- This repo exposes a `kustomization.yaml`. Well done!
---
## Adding the `ingress` resource to ROCKY app
.lab[
- Add the new manifest to our kustomization bunch
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp -pr ~/container.training/k8s/M6-rocky-ingress.yaml ./tenants/base/rocky && \
echo '- M6-rocky-ingress.yaml' >> ./tenants/base/rocky/kustomization.yaml
```
- Commit and it's done
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add . && \
git commit -m':wrench: :rocket: add Ingress' && \
git push
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Here is the result
After `Flux` has reconciled the whole bunch of sources and kustomizations, you should see:
- `Ingress-NGinX` controller components in `ingress-nginx` namespace
- A new `Ingress` in `rocky-test` namespace
.lab[
```bash
k8s@shpod:~$ kubectl get all -n ingress-nginx && \
kubectl get ingress -n rocky-test
k8s@shpod:~$ \
PublicIP=$(kubectl get ingress rocky -n rocky-test \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
k8s@shpod:~$ \
curl --header 'Host: rocky.test.mybestdomain.com' http://$PublicIP/
```
]
---
class: pic
![Rocky application screenshot](images/M6-rocky-app-screenshot.png)
---
## Upgrading **_🎸ROCKY_** app
The **_🎸ROCKY_** team is now fully able to upgrade and deploy its app autonomously.
Just give it a try!
- In the `deployment.yaml` file
- in the app repo (https://github.com/Musk8teers/container.training-spring-music/)
- you can change the `spec.template.spec.containers[].image` tag to `1.0.1` and then to `1.0.2` (see the excerpt below)
Don't forget which branch is watched by the `Flux` Git source named `rocky`
Don't forget to commit!
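The change looks roughly like this (the container name and image repository are assumptions; only the relevant excerpt of `deployment.yaml` is shown):
```yaml
# Excerpt of the ROCKY deployment.yaml
spec:
  template:
    spec:
      containers:
        - name: rocky                                  # assumption
          image: registry.example.com/rocky:1.0.1      # assumption; bump to 1.0.2 next
```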
---
## A few considerations
- The **_⚙OPS_** team has to decide how to manage name resolution for public IPs
- Scaleway offers to expose a wildcard domain for its Kubernetes clusters
- Here, we chose to have both the `Ingress-controller` (which makes sense) and the `Ingress` resources managed by the **_⚙OPS_** team.
- It could have been done in many different ways!
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT
</pre>

---
# Installing a Kubernetes cluster from scratch
We operated a managed cluster from **Scaleway** `Kapsule`.
It's great! Most batteries are included:
- storage classes, with an already configured default one
- a default CNI with `Cilium`
<br/>(`Calico` is supported too)
- an _IaaS_ load-balancer that can be managed by `ingress-controllers`
- a management _WebUI_ with the Kubernetes dashboard
- an observability stack with `metrics-server` and the Kubernetes dashboard
But what about _on premises_ needs?
---
class: extra-details
## On premises Kubernetes distributions
The [CNCF landscape](https://landscape.cncf.io/?fullscreen=yes&zoom=200&group=certified-partners-and-providers) currently lists **61** Kubernetes distributions!
Not to mention the managed Kubernetes services from Cloud providers…
Please, refer to the [`Setting up Kubernetes` chapter in the High Five M2 module](./2.yml.html#toc-setting-up-kubernetes) for more info about Kubernetes distributions.
---
## Introducing k0s
Nowadays, some "light" distros are considered good enough to run production clusters.
That's the case for `k0s`.
It's an open source Kubernetes lightweight distribution.
It is mainly backed by **Mirantis**, a long-time software vendor in the Kubernetes ecosystem.
(The ones who bought `Docker Enterprise` a while ago, remember?)
`k0s` aims to be both
- a lightweight distribution for _edge-computing_ and development purposes
- an enterprise-grade HA distribution fully supported by its vendor
<br/>`MKE4` and `k0rdent` build on `k0s`
---
### `k0s` package
Its single binary includes:
- a CRI (`containerd`)
- vanilla Kubernetes control plane components (including `etcd`)
- a vanilla network stack
- `kube-router`
- `kube-proxy`
- `coredns`
- `konnectivity`
- `kubectl` CLI
- install / uninstall features
- backup / restore features
---
class: pic
![k0s package](images/M6-k0s-packaging.png)
---
class: extra-details
### Konnectivity
You've seen that Kubernetes cluster architecture is very versatile.
I'm referring to the [`Kubernetes architecture` chapter in the High Five M5 module](./5.yml.html#toc-kubernetes-architecture)
Network communications between control plane components and worker nodes can be tricky to configure.
`Konnectivity` is a response to this pain: it acts as an RPC proxy for any communication initiated from the control plane to the workers.
These communications are listed in [`Kubernetes internal APIs` chapter in the High Five M5 module](https://2025-01-enix.container.training/5.yml.html#toc-kubernetes-internal-apis)
The agent deployed on each worker node maintains an RPC tunnel with the server deployed on the control plane side.
---
class: pic
![konnectivity architecture](images/M6-konnectivity-architecture.png)
---
## Installing `k0s`
It installs with a one-liner command
- either with a lightweight single-node footprint
- or with a multi-node HA footprint
.lab[
- Get the binary
```bash
docker@m621: ~$ wget https://github.com/k0sproject/k0sctl/releases/download/v0.25.1/k0sctl-linux-amd64
```
]
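The downloaded file is presumably not executable yet; a possible way to set it up (the target path is an assumption):
```bash
docker@m621: ~$ chmod +x k0sctl-linux-amd64 && \
sudo mv k0sctl-linux-amd64 /usr/local/bin/k0sctl
```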
---
### Prepare the config file
.lab[
- Create the config file
```bash
docker@m621: ~$ k0sctl init \
--controller-count 3 \
--user docker \
--k0s m621 m622 m623 > k0sctl.yaml
```
- change the following field: `spec.hosts[*].role: controller+worker`
- add the following field: `spec.hosts[*].noTaints: true` (see the excerpt after this lab)
]
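For reference, the host entries in `k0sctl.yaml` then look roughly like this (only one host shown; addresses are illustrative):
```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  hosts:
    - ssh:
        address: m621
        user: docker
      role: controller+worker   # each node is both controller and worker
      noTaints: true            # allow regular workloads on the controllers
```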
---
### And the famous one-liner
.lab[
```bash
k8s@shpod: ~$ k0sctl apply --config k0sctl.yaml
```
]
---
### Check that k0s installed correctly
.lab[
```bash
docker@m621 ~$ sudo k0s status
Version: v1.33.1+k0s.1
Process ID: 60183
Role: controller
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:
docker@m621 ~$ sudo k0s etcd member-list
{"members":{"m621":"https://10.10.3.190:2380","m622":"https://10.10.2.92:2380","m623":"https://10.10.2.110:2380"}}
```
]
---
### `kubectl` is included
.lab[
```bash
docker@m621 ~$ sudo k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
m621 Ready control-plane 66m v1.33.1+k0s
m622 Ready control-plane 66m v1.33.1+k0s
m623 Ready control-plane 66m v1.33.1+k0s
docker@m621 ~$ sudo k0s kubectl run shpod --image jpetazzo/shpod
```
]
---
class: extra-details
### Single node install (for info!)
For testing purposes, you may want to use a single-node (yet `etcd`-geared) install…
.lab[
- Install it
```bash
docker@m621 ~$ curl -sSLf https://get.k0s.sh | sudo sh
docker@m621 ~$ sudo k0s install controller --single
docker@m621 ~$ sudo k0s start
```
- Reset it
```bash
docker@m621 ~$ sudo k0s stop
docker@m621 ~$ sudo k0s reset
```
]
---
## Deploying shpod
.lab[
```bash
docker@m621 ~$ sudo k0s kubectl apply -f https://shpod.in/shpod.yaml
```
]
---
## Flux install
We'll install `Flux`.
And replay the whole scenario a 2nd time.
Let's face it: we don't have that much time. 😅
Since all our installation and configuration is `GitOps`-based, we might just leverage copy-paste and configuration-as-code…
Maybe.
Let's copy the 📂 `./clusters/CLOUDY` folder and rename it 📂 `./clusters/METAL`.
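A minimal way to do it (assuming we run it from the root of the fleet-config repository):
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ cp -r ./clusters/CLOUDY ./clusters/METAL
```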
---
### Modifying Flux config 📄 files
- In 📄 file `./clusters/METAL/flux-system/gotk-sync.yaml`
<br/>change the `Kustomization` value `spec.path: ./clusters/METAL` (excerpt below)
- ⚠️ We'll have to adapt the `Flux` _CLI_ command line
- And that's pretty much it!
- We'll see if anything goes wrong on that new cluster
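The relevant part of `gotk-sync.yaml` then looks roughly like this (fields not shown here stay as generated by `flux bootstrap`):
```yaml
# Excerpt of ./clusters/METAL/flux-system/gotk-sync.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  path: ./clusters/METAL   # was ./clusters/CLOUDY in the copied folder
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```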
---
### Connecting to our dedicated `Github` repo to host Flux config
.lab[
- let's replace `GITHUB_TOKEN` and `GITHUB_REPO` values
- don't forget to change the path to `clusters/METAL`
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="container-training-fleet" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--team=OPS \
--team=ROCKY --team=MOVY \
--path=clusters/METAL
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Flux deployed our complete stack
Everything seems to be here but…
- one database is in `Pending` state
- our `ingresses` don't work well
```bash
k8s@shpod ~$ curl --header 'Host: rocky.test.enixdomain.com' http://${myIngressControllerSvcIP}
curl: (52) Empty reply from server
```
---
### Fixing the Ingress
The current `ingress-nginx` configuration leverages specific annotations used by Scaleway to bind an _IaaS_ load-balancer to the `ingress-controller`.
We don't have anything like that here. 😕
- We could bind our `ingress-controller` to a `NodePort`.
The `ingress-nginx` install manifests provide this variant here:
<br/>https://github.com/kubernetes/ingress-nginx/deploy/static/provider/baremetal
- In the 📄 file `./clusters/METAL/ingress-nginx/sync.yaml`,
<br/>change the `Kustomization` value `spec.path: ./deploy/static/provider/baremetal`
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Troubleshooting the database
One of our `db-0` pods is in `Pending` state.
```bash
k8s@shpod ~$ k get pods db-0 -n *-test -oyaml
()
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-06-11T11:15:42Z"
message: '0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims.
preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
```
---
### Troubleshooting the PersistentVolumeClaims
```bash
k8s@shpod ~$ k get pvc postgresql-data-db-0 -n *-test -o yaml
()
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 9s (x182 over 45m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
```
No `storage class` is available on this cluster.
We didn't have this problem on our managed cluster, since a default storage class was configured and then associated with our `PersistentVolumeClaim`.
Why is there no problem with the other database?
---
## Installing OpenEBS as our CSI

---
## Introducing Kyverno
Kyverno is a tool that extends Kubernetes policy management, letting us express complex policies…
<br/>… and override manifests delivered by client teams.
---
class: extra-details
### Kyverno -- for more info
Please, refer to the [`Policy management with Kyverno` chapter in the High Five M4 module](./4.yml.html#toc-policy-management-with-kyverno) for more info about `Kyverno`.
---
## Creating a `Helm` source in Flux for the Kyverno Helm chart
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p clusters/CLOUDY/kyverno && \
cp -pr ~/container.training/k8s/
k8s@shpod ~$ flux create source helm kyverno \
--namespace=kyverno \
--url=https://kyverno.github.io/kyverno/ \
--interval=3m \
--export > ./clusters/CLOUDY/kyverno/sync.yaml
```
]
---
## Creating the `HelmRelease` in Flux
.lab[
```bash
k8s@shpod ~$ flux create helmrelease kyverno \
--namespace=kyverno \
--source=HelmRepository/kyverno.flux-system \
--target-namespace=kyverno \
--create-target-namespace=true \
--chart-version=">=3.4.2" \
--chart=kyverno \
--export >> ./clusters/CLOUDY/kyverno/sync.yaml
```
]
---
## Add Kyverno policy
This policy is just an example.
It enforces the use of a `ServiceAccount` in `Flux` configurations
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p clusters/CLOUDY/kyverno-policies && \
cp -pr ~/container.training/k8s/M6-kyverno-enforce-service-account.yaml \
./clusters/CLOUDY/kyverno-policies/
```
---
### Creating `kustomization` in Flux for Kyverno policies
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
flux create kustomization kyverno-policies \
--namespace=kyverno \
--source=GitRepository/flux-system \
--path="./clusters/CLOUDY/kyverno-policies/" \
--prune true --interval 5m \
--depends-on kyverno \
--export >> ./clusters/CLOUDY/kyverno-policies/sync.yaml
```
]
---
## Apply Kyverno policy
> ☝🏻 Don't forget to `git commit` and `git push` to `Github`!
---
## Add Kyverno dependency for **_⚗TEST_** cluster
- Now that we've got `Kyverno` policies,
- the **_⚙OPS_** team wants every kustomization in our dev team _tenants_
- to wait for the `kyverno` policies to be reconciled (from a `Flux` perspective) before any upgrade is applied
- update the file `./clusters/CLOUDY/tenants.yaml`,
- by adding this property: `spec.dependsOn: [{name: kyverno-policies}]` (see the sketch below)
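The addition looks roughly like this (only the changed part of `tenants.yaml` is shown; if the `kyverno-policies` Kustomization lives in another namespace, a `namespace:` field would presumably be needed as well):
```yaml
# Excerpt of ./clusters/CLOUDY/tenants.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenant-test
  namespace: flux-system
spec:
  dependsOn:
    - name: kyverno-policies
```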
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Debugging
The `kyverno-policies` `Kustomization` failed because the `spec.dependsOn` property can only target a resource of the same `Kind`.
- Let's remove the `spec.dependsOn` property.
Now the `Kustomizations` for the **_🎸ROCKY_** and **_🎬MOVY_** _tenants_ fail because of our policies.
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:4
branch MOVY order:5
branch YouRHere order:6
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT tag:'T07'
checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Flux config. for PROD tenant' tag:'P01'
branch PROD-env order:2
commit id:'ROCKY tenant on PROD'
checkout OPS
commit id:'ROCKY patch for PROD' tag:'R04'
checkout PROD-env
merge OPS id:'PROD ready to deploy ROCKY' type: HIGHLIGHT
checkout PROD-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout MOVY
commit id:'MOVY HELM chart' tag:'M03'
checkout TEST-env
merge MOVY tag:'MOVY v1.0'
</pre>

---
# Install monitoring stack
The **_⚙OPS_** team wants to have a real monitoring stack for its clusters.
Let's deploy `Prometheus` and `Grafana` onto the clusters.
---
## Creating a `Github` source in Flux for the monitoring components install repository
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ mkdir -p clusters/CLOUDY/kube-prometheus-stack
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create source git monitoring \
--namespace=monitoring \
--url=https://github.com/fluxcd/flux2-monitoring-example.git \
--branch=main --export > ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
```
]
---
### Creating `kustomization` in Flux for monitoring stack
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization monitoring \
--namespace=monitoring \
--source=GitRepository/monitoring \
--path="./monitoring/controllers/kube-prometheus-stack/" \
--export >> ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
```
]
---
### Install Flux Grafana dashboards
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization dashboards \
--namespace=monitoring \
--source=GitRepository/monitoring \
--path="./monitoring/configs/" \
--export >> ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
## Flux repository sync is broken 😅
It seems that `Flux` on the **_☁CLOUDY_** cluster is no longer able to authenticate with `ssh` to its `Github` config repository!
What happened?
When we installed `Flux` on the **_🤘METAL_** cluster, it generated a new `ssh` keypair and overrode the one used by **_☁CLOUDY_** among the "deployment keys" of the `Github` repository.
⚠️ Beware of the `flux bootstrap` command!
We have to
- generate a new keypair (or reuse an already existing one)
- add the private key to the Flux-dedicated secrets in **_☁CLOUDY_** cluster
- add the **public** key to the "deployment keys" of the `Github` repository
---
### The command
.lab[
- `Flux` _CLI_ helps to recreate the secret holding the `ssh` **private** key.
```bash
k8s@shpod:~$ flux create secret git flux-system \
--url=ssh://git@github.com/container-training-fleet/fleet-config-using-flux-XXXXX \
--private-key-file=/home/k8s/.ssh/id_ed25519
```
- copy the **public** key into the deployment keys of the `Github` repository
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
## Access the Grafana dashboard
.lab[
- Get the `Host` and `IP` address to request
```bash
k8s@shpod:~$ kubectl -n monitoring get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
grafana nginx grafana.test.metal.mybestdomain.com 62.210.39.83 80 6m30s
```
- Get the `Grafana` admin password
```bash
k8s@shpod:~$ k get secret kube-prometheus-stack-grafana -n monitoring \
-o jsonpath='{.data.admin-password}' | base64 -d
```
]
---
## And browse…
---
class: pic
![Grafana dashboard screenshot](images/M6-grafana-dashboard.png)
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:4
branch MOVY order:5
branch YouRHere order:6
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT tag:'T07'
checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for PROD tenant' tag:'P01'
branch PROD-env order:2
commit id:'ROCKY tenant on PROD'
checkout OPS
commit id:'ROCKY patch for PROD' tag:'R04'
checkout PROD-env
merge OPS id:'PROD ready to deploy ROCKY' type: HIGHLIGHT
checkout PROD-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout MOVY
commit id:'MOVY HELM chart' tag:'M03'
checkout TEST-env
merge MOVY tag:'MOVY v1.0'
</pre>

---
📄 slides/m6.yml
title: |
Using Kubernetes
in an Enterprise-like scenario
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
content:
- k8s/M6-START-a-company-scenario.md
- k8s/M6-T02-flux-install.md
- k8s/M6-T03-installing-tenants.md
- k8s/M6-R01-flux_configure-ROCKY-deployment.md
- k8s/M6-T05-ingress-config.md
- k8s/M6-M01-adding-MOVY-tenant.md
- k8s/M6-K01-METAL-install.md
- k8s/M6-K03-openebs-install.md
- k8s/M6-monitoring-stack-install.md
- k8s/M6-kyverno-install.md