mirror of https://github.com/jpetazzo/container.training.git

🆕 Add Flux (M5B/M6) content
committed by Jérôme Petazzoni
parent d3c5bde6de
commit e820ca466f

126 slides/flux/add-cluster.md (new file)
@@ -0,0 +1,126 @@

## Flux install

We'll install `Flux`.

And replay the whole scenario a second time.

Let's face it: we don't have that much time. 😅

Since all our install and configuration is `GitOps`-based, we might just rely on copy-pasting the code configuration…

Maybe.

Let's copy the 📂 `./clusters/CLOUDY` folder and rename it 📂 `./clusters/METAL`.
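
For instance, from the root of our `Flux` config repo, something like:

```bash
# one possible way to duplicate the CLOUDY config as a starting point for METAL
k8s@shpod:~/fleet-config-using-flux-XXXXX$ cp -pr ./clusters/CLOUDY ./clusters/METAL
```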

---

### Modifying Flux config 📄 files

- In 📄 file `./clusters/METAL/flux-system/gotk-sync.yaml`
  <br/>change the `Kustomization` value `spec.path: ./clusters/METAL` (see the sketch below)

- ⚠️ We'll have to adapt the `Flux` _CLI_ command line

- And that's pretty much it!

- We'll see if anything goes wrong on that new cluster
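
As a reference, here is a sketch of what the edited `Kustomization` in `gotk-sync.yaml` might look like — the values other than `spec.path` are the usual `flux bootstrap` defaults and may differ in your repo:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s          # assumed bootstrap default
  path: ./clusters/METAL   # was: ./clusters/CLOUDY
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```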

---

### Connecting to our dedicated `Github` repo to host Flux config

.lab[

- let's replace the `GITHUB_TOKEN` and `GITHUB_REPO` values

- don't forget to change the path to `clusters/METAL`

```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
  export GITHUB_USER="container-training-fleet" && \
  export GITHUB_REPO="fleet-config-using-flux-XXXXX"

k8s@shpod:~$ flux bootstrap github \
  --owner=${GITHUB_USER} \
  --repository=${GITHUB_REPO} \
  --team=OPS \
  --team=ROCKY --team=MOVY \
  --path=clusters/METAL
```

]

---

class: pic

![Flux install on the second cluster](images/M6-flux/install-flux-2nd-cluster.png)

---

### Flux deployed our complete stack

Everything seems to be here, but…

- one database is in `Pending` state

- our `ingresses` don't work well

```bash
k8s@shpod ~$ curl --header 'Host: rocky.test.enixdomain.com' http://${myIngressControllerSvcIP}
curl: (52) Empty reply from server
```

---

### Fixing the Ingress

The current `ingress-nginx` configuration relies on specific annotations used by Scaleway to bind an _IaaS_ load-balancer to the `ingress-controller`.

We have nothing like that here. 😕

- We could bind our `ingress-controller` to a `NodePort`.

  The `ingress-nginx` install manifests propose it here:
  <br/>https://github.com/kubernetes/ingress-nginx/deploy/static/provider/baremetal

- In the 📄 file `./clusters/METAL/ingress-nginx/sync.yaml`,
  <br/>change the `Kustomization` value `spec.path: ./deploy/static/provider/baremetal` (shown below)
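
The change itself is a one-liner in the Flux `Kustomization` — the old value is the one we used for **_☁️CLOUDY_**:

```yaml
# excerpt of ./clusters/METAL/ingress-nginx/sync.yaml
spec:
  path: ./deploy/static/provider/baremetal   # was: ./deploy/static/provider/scw/
```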

---

class: pic

![PVC in Pending state](images/M6-flux/flux-pvc-pending.png)

---

### Troubleshooting the database

One of our `db-0` pods is in `Pending` state.

```bash
k8s@shpod ~$ k get pods db-0 -n *-test -oyaml
(…)
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-06-11T11:15:42Z"
    message: '0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims.
      preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
```

---

### Troubleshooting the PersistentVolumeClaims

```bash
k8s@shpod ~$ k get pvc postgresql-data-db-0 -n *-test -o yaml
(…)
Type     Reason         Age                 From                         Message
----     ------         ----                ----                         -------
Normal   FailedBinding  9s (x182 over 45m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
```

No `storage class` is available on this cluster.

We didn't have this problem on our managed cluster, since a default storage class was configured and automatically associated with our `PersistentVolumeClaim`.

Why is there no problem with the other database?
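
A quick way to investigate (hypothetical commands; one hypothesis to check is that the other claim bound to a statically provisioned `PersistentVolume`, which needs no `StorageClass`):

```bash
# list StorageClasses: on CLOUDY one of them is flagged "(default)",
# on METAL the list comes back empty
k8s@shpod ~$ k get storageclass

# check whether the other database's claim bound to a statically
# provisioned PersistentVolume (those bind without any StorageClass)
k8s@shpod ~$ k get pv,pvc -n *-test
```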

417 slides/flux/app1-rocky-test.md (new file)
@@ -0,0 +1,417 @@

# R01- Configuring **_🎸ROCKY_** deployment with Flux

The **_⚙️OPS_** team manages 2 distinct envs: **_⚗️TEST_** and _**🚜PROD**_

Thanks to _Kustomize_:

1. it creates a **_base_** common config
2. this common config is overridden with a **_⚗️TEST_** _tenant_-specific configuration
3. the same applies with a _**🚜PROD**_-specific configuration

> 💡 This seems complex, but no worries: Flux's CLI handles most of it.

---

## Creating the **_🎸ROCKY_**-dedicated _tenant_ in **_⚗️TEST_** env

- Using the `flux` _CLI_, we create the file configuring the **_🎸ROCKY_** team's dedicated _tenant_…

- … this file lives in the `base` configuration common to both envs

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  mkdir -p ./tenants/base/rocky && \
  flux create tenant rocky \
    --with-namespace=rocky-test \
    --cluster-role=rocky-full-access \
    --export > ./tenants/base/rocky/rbac.yaml
```

]

---

class: extra-details

### 📂 ./tenants/base/rocky/rbac.yaml

Let's look at our file…

3 resources are created: `Namespace`, `ServiceAccount`, and `ClusterRoleBinding`

`Flux` **impersonates** this `ServiceAccount` when it applies any resource found in the _tenant_-dedicated source(s)

- By default, the `ServiceAccount` is bound to the `cluster-admin` `ClusterRole`
- The team maintaining the sourced `Github` repository is almighty at cluster scope

Not that isolated a _tenant_! 😕

That's why the **_⚙️OPS_** team enforces specific `ClusterRoles` with restricted permissions

Let's create these permissions!

---

## _namespace_ isolation for **_🎸ROCKY_**

.lab[

- Here are the restricted permissions to use in the `rocky-test` `Namespace`

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  cp ~/container.training/k8s/M6-rocky-cluster-role.yaml ./tenants/base/rocky/
```

]

> 💡 Note that some resources are managed at cluster scope (like `PersistentVolumes`),
> so we need specific permissions for them too… (see the sketch below)
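
For illustration, here is a hypothetical excerpt of what `M6-rocky-cluster-role.yaml` could contain — `rocky-pv-access` is the name reused later for the MOVY tenant; the exact rules ship with container.training:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rocky-pv-access
rules:
  # cluster-scoped resources (like PersistentVolumes) need their own rule
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
```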

---

## Creating a `Github` source in Flux for the **_🎸ROCKY_** app repository

A specific _branch_ of the `Github` repository is monitored by the `Flux` source

.lab[

- ⚠️ you may change the **repository URL** to the one of your own clone

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create source git rocky-app \
  --namespace=rocky-test \
  --url=https://github.com/Musk8teers/container.training-spring-music/ \
  --branch=rocky --export > ./tenants/base/rocky/sync.yaml
```

]

---

## Creating a `kustomization` in Flux for the **_🎸ROCKY_** app repository

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization rocky \
  --namespace=rocky-test \
  --service-account=rocky \
  --source=GitRepository/rocky-app \
  --path="./k8s/" --export >> ./tenants/base/rocky/sync.yaml

k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  cd ./tenants/base/rocky/ && \
  kustomize create --autodetect && \
  cd -
```

]

---

class: extra-details

### 📂 Flux config files

Let's review our `Flux` configuration files

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  cat ./tenants/base/rocky/sync.yaml && \
  cat ./tenants/base/rocky/kustomization.yaml
```

]

---

## Adding a kustomize patch for **_⚗️TEST_** cluster deployment

💡 Remember the DRY strategy!

- The `Flux` tenant-dedicated configuration is looking for this file: `./tenants/test/rocky/kustomization.yaml`
- It has been configured here: `clusters/CLOUDY/tenants.yaml`

- All the files we just created are located in `./tenants/base/rocky`
- So we have to create a specific kustomization in the right location

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  mkdir -p ./tenants/test/rocky && \
  cp ~/container.training/k8s/M6-rocky-test-patch.yaml ./tenants/test/rocky/ && \
  cp ~/container.training/k8s/M6-rocky-test-kustomization.yaml ./tenants/test/rocky/kustomization.yaml
```

---

### Synchronizing Flux config with its Github repo

Locally, our `Flux` config repo is ready

The **_⚙️OPS_** team has to push it to `Github` for `Flux` controllers to watch and catch it!

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  git add . && \
  git commit -m':wrench: :construction_worker: add ROCKY tenant configuration' && \
  git push
```

]

---

class: pic

![rocky Kustomization failed](images/M6-flux/rocky-kustomization-failed.png)

---

class: pic

![flux reconcile rocky](images/M6-flux/flux-reconcile-rocky.png)

---

class: extra-details

### Flux resources for ROCKY tenant 1/2

.lab[

```bash
k8s@shpod:~$ flux get all -A
NAMESPACE    NAME                       REVISION             SUSPENDED  READY  MESSAGE
flux-system  gitrepository/flux-system  main@sha1:8ffd72cf   False      True   stored artifact for revision 'main@sha1:8ffd72cf'
rocky-test   gitrepository/rocky-app    rocky@sha1:ffe9f3fe  False      True   stored artifact for revision 'rocky@sha1:ffe9f3fe'
(…)
```

]

---

class: extra-details

### Flux resources for ROCKY _tenant_ 2/2

.lab[

```bash
k8s@shpod:~$ flux get all -A
(…)
NAMESPACE    NAME                       REVISION            SUSPENDED  READY  MESSAGE
flux-system  kustomization/flux-system  main@sha1:8ffd72cf  False      True   Applied revision: main@sha1:8ffd72cf
flux-system  kustomization/tenant-prod                      False      False  kustomization path not found: stat /tmp/kustomization-1164119282/tenants/prod: no such file or directory
flux-system  kustomization/tenant-test  main@sha1:8ffd72cf  False      True   Applied revision: main@sha1:8ffd72cf
rocky-test   kustomization/rocky                            False      False  StatefulSet/db dry-run failed (Forbidden): statefulsets.apps "db" is forbidden: User "system:serviceaccount:rocky-test:rocky" cannot patch resource "statefulsets" in API group "apps" at the cluster scope
(…)
```

]

And here is our second Flux error! 😅

---

class: extra-details

### Flux Kustomization, mutability, …

🔍 Notice that none of the expected resources is created:
the whole kustomization is rejected, even though the `StatefulSet` is the only resource that fails!

🔍 Flux Kustomization uses the dry-run feature to template the resources before applying patches onto them

Fine, but some resources are not fully mutable, such as `StatefulSets`

We have to apply the change without having to patch the resource.

🔍 Simply add `spec.targetNamespace: rocky-test` to the `Kustomization` named `rocky` (see the sketch below)
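
A sketch of the resulting `Kustomization` in `./tenants/base/rocky/sync.yaml` — only `targetNamespace` is new; the other fields are the ones generated by our earlier `flux create kustomization` command:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: rocky
  namespace: rocky-test
spec:
  path: ./k8s/
  serviceAccountName: rocky
  sourceRef:
    kind: GitRepository
    name: rocky-app
  targetNamespace: rocky-test   # added: no namespace patch needed anymore
```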

---

class: extra-details

## And then it's deployed 1/2

You should see the following resources in the `rocky-test` namespace

.lab[

```bash
k8s@shpod-578d64468-tp7r2 ~/$ k get pods,svc,deployments -n rocky-test
NAME                       READY   STATUS    RESTARTS      AGE
pod/db-0                   1/1     Running   0             47s
pod/web-6c677bf97f-c7pkv   0/1     Running   1 (22s ago)   47s
pod/web-6c677bf97f-p7b4r   0/1     Running   1 (19s ago)   47s

NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/db    ClusterIP   10.32.6.128   <none>        5432/TCP   48s
service/web   ClusterIP   10.32.2.202   <none>        80/TCP     48s

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   0/2     2            0           47s
```

]

---

class: extra-details

## And then it's deployed 2/2

You should see the following resources in the `rocky-test` namespace

.lab[

```bash
k8s@shpod-578d64468-tp7r2 ~/$ k get statefulsets,pvc,pv -n rocky-test
NAME                  READY   AGE
statefulset.apps/db   1/1     47s

NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/postgresql-data-db-0   Bound    pvc-c1963a2b-4fc9-4c74-9c5a-b0870b23e59a   1Gi        RWO            sbs-default    <unset>                 47s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                             STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/postgresql-data                            1Gi        RWO,RWX        Retain           Available                                                    <unset>                          47s
persistentvolume/pvc-150fcef5-ebba-458e-951f-68a7e214c635   1G         RWO            Delete           Bound       shpod/shpod                       sbs-default    <unset>                          4h46m
persistentvolume/pvc-c1963a2b-4fc9-4c74-9c5a-b0870b23e59a   1Gi        RWO            Delete           Bound       rocky-test/postgresql-data-db-0   sbs-default    <unset>                          47s
```

]

---

class: extra-details

### PersistentVolumes are using a default `StorageClass`

💡 This managed cluster comes with custom `StorageClasses` leveraging Cloud _IaaS_ capabilities (i.e. block devices)

![Scaleway StorageClasses](images/M6-flux/scaleway-storageclasses.png)

- a default `StorageClass` is applied if none is specified (like here)
- for **_🏭PROD_** purposes, the ops team might enforce a more performant `StorageClass`
- on a bare-metal cluster, the **_🏭PROD_** team has to configure and provide `StorageClasses` on its own

---

class: pic

![PVC bound](images/M6-flux/flux-pvc-bound.png)

---

## Upgrading the ROCKY app

The Git source named `rocky-app` points at

- a Github repository named [Musk8teers/container.training-spring-music](https://github.com/Musk8teers/container.training-spring-music/)
- its branch named `rocky`

This branch deploys v1.0.0 of the _Web_ app:
`spec.template.spec.containers.image: ghcr.io/musk8teers/container.training-spring-music:1.0.0`

What happens if the **_🎸ROCKY_** team upgrades its branch to deploy `v1.0.1` of the _Web_ app?

---

## _tenant_ **_🏭PROD_**

💡 The **_🏭PROD_** _tenant_ is still waiting for its `Flux` configuration, but don't worry about it right now.

---

### 🗺️ Where are we in our scenario?

<pre class="mermaid">
%%{init:
  {
    "theme": "default",
    "gitGraph": {
      "mainBranchName": "OPS",
      "mainBranchOrder": 0
    }
  }
}%%
gitGraph
  commit id:"0" tag:"start"
  branch ROCKY order:3
  branch MOVY order:4
  branch YouRHere order:5

  checkout OPS
  commit id:'Flux install on CLOUDY cluster' tag:'T01'
  branch TEST-env order:1
  commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT

  checkout OPS
  commit id:'Flux config. for TEST tenant' tag:'T03'
  commit id:'namespace isolation by RBAC'
  checkout TEST-env
  merge OPS id:'ROCKY tenant creation' tag:'T04'

  checkout OPS
  commit id:'ROCKY deploy. config.' tag:'R01'

  checkout TEST-env
  merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'

  checkout ROCKY
  commit id:'ROCKY' tag:'v1.0.0'

  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.0'

  checkout YouRHere
  commit id:'x'
  checkout OPS
  merge YouRHere id:'YOU ARE HERE'

  checkout OPS
  commit id:'Ingress-controller config.' tag:'T05'
  checkout TEST-env
  merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'

  checkout OPS
  commit id:'ROCKY patch for ingress config.' tag:'R03'
  checkout TEST-env
  merge OPS id:'ingress config. for ROCKY app'

  checkout ROCKY
  commit id:'blue color' tag:'v1.0.1'
  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.1'

  checkout ROCKY
  commit id:'pink color' tag:'v1.0.2'
  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.2'

  checkout OPS
  commit id:'FLUX config for MOVY deployment' tag:'M01'
  checkout TEST-env
  merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'

  checkout MOVY
  commit id:'MOVY' tag:'v1.0.3'
  checkout TEST-env
  merge MOVY tag:'MOVY v1.0.3' type: REVERSE

  checkout OPS
  commit id:'Network policies'
  checkout TEST-env
  merge OPS type: HIGHLIGHT
</pre>

320 slides/flux/app2-movy-test.md (new file)
@@ -0,0 +1,320 @@

# M01- Configuring **_🎬MOVY_** deployment with Flux

The **_🎸ROCKY_** _tenant_ is now fully usable in the **_⚗️TEST_** env; let's do the same for another _dev_ team: **_🎬MOVY_**

😈 We could do it by using the `Flux` _CLI_,
but let's see if we can succeed by just adding manifests to our `Flux` configuration repository.

---

class: pic

![MOVY tenant](images/M6-flux/flux-movy-tenant.png)

---

## Impact study

In our `Flux` configuration repository:

- Creation of the following 📂 folders: `./tenants/[base|test]/MOVY`

- Modification of the following 📄 file: `./clusters/CLOUDY/tenants.yaml`?
  - Well, we don't need to: the watched path includes the whole `./tenants/[test]/*` folder

In the app repository:

- Creation of a `movy` branch to deploy another version of the app, dedicated to movie soundtracks

---

### Creation of the 📂 folders

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  cp -pr tenants/base/rocky tenants/base/movy && \
  cp -pr tenants/test/rocky tenants/test/movy
```

]

---

### Modification of tenants/[base|test]/movy/* 📄 files

- For the 📄 `M6-rocky-*.yaml` files, change the file names…
- and update the 📄 `kustomization.yaml` file accordingly

- In every file, replace every `rocky` entry with `movy` (see the sketch below)

- In 📄 `sync.yaml`, be aware of which repository and which branch you want `Flux` to watch for the **_🎬MOVY_** app deployment.
  - for this demo, let's assume we create a `movy` branch
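
One possible way to do the bulk replacement — a sketch assuming GNU `sed`; review the diff before committing:

```bash
# rewrite every "rocky" occurrence inside the copied movy folders
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  grep -rl rocky tenants/base/movy tenants/test/movy \
  | xargs sed -i 's/rocky/movy/g'

# then rename the copied files (same pattern for the other M6-rocky-*.yaml files)
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  mv tenants/base/movy/M6-rocky-cluster-role.yaml tenants/base/movy/M6-movy-cluster-role.yaml
```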

---

class: extra-details

### What about reusing rocky-cluster-roles?

💡 In 📄 `M6-movy-cluster-role.yaml` and 📄 `rbac.yaml`, we could have reused the already existing `ClusterRoles`: `rocky-full-access` and `rocky-pv-access`

A `ClusterRole` is cluster-wide. It is not dedicated to a namespace.
- Its permissions are restricted to a specific namespace when it is bound to a `ServiceAccount` by a `RoleBinding`.
- Whereas a `ClusterRoleBinding` extends the permissions to the whole cluster scope.

But a _tenant_ is a **_tenant_**, and permissions might evolve separately for **_🎸ROCKY_** and **_🎬MOVY_**.

So [we got to keep'em separated](https://www.youtube.com/watch?v=GHUql3OC_uU).

---

### Let-su-go!

The **_⚙️OPS_** team pushes this new tenant configuration to `Github` for the `Flux` controllers to watch and catch it!

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  git add . && \
  git commit -m':wrench: :construction_worker: add MOVY tenant configuration' && \
  git push
```

]

---

class: pic

![movy branch missing](images/M6-flux/flux-movy-branch-missing.png)

---

class: extra-details

### Another Flux error?

.lab[

- It seems that our `movy` branch is not present in the app repository

```bash
k8s@shpod:~$ flux get kustomization -A
NAMESPACE    NAME         REVISION  SUSPENDED  MESSAGE
(…)
flux-system  tenant-prod            False      False  kustomization path not found: stat /tmp/kustomization-113582828/tenants/prod: no such file or directory
(…)
movy-test    movy                   False      False  Source artifact not found, retrying in 30s
```

]

---

### Creating the `movy` branch

- Let's create this new `movy` branch from the `rocky` branch

.lab[

- You can force an immediate reconciliation by typing this command:

```bash
k8s@shpod:~$ flux reconcile source git movy-app -n movy-test
```

]

---

class: pic

![movy app deployed](images/M6-flux/flux-movy-app-deployed.png)

---

### New branch detected

You now have a second app responding on [http://movy.test.mybestdomain.com]

But as of now, it's just the same as the **_🎸ROCKY_** one.

We want a specific (pink-colored) version with a dataset full of movie soundtracks.

---

## New version of the **_🎬MOVY_** app

In our `movy` branch…
Let's make 2 modifications to our `deployment.yaml` file.

- in `spec.template.spec.containers.image`, change the container image tag to `1.0.3`

- and… let's introduce some evil entropy by changing this line… 😈😈😈

```yaml
value: jdbc:postgresql://db/music
```

by this one

```yaml
value: jdbc:postgresql://db.rocky-test/music
```

And push the modifications…

---

class: pic

![MOVY app with MOVY data](images/M6-flux/flux-movy-MOVY-datas.png)

---

class: pic

![MOVY app with ROCKY data](images/M6-flux/flux-movy-ROCKY-datas.png)

---

### MOVY app is connected to ROCKY database

How evil we've been! 😈
We connected the **_🎬MOVY_** app to the **_🎸ROCKY_** database.

Even though our tenants are isolated in how they manage their Kubernetes resources…
the pod network is still a full mesh, and any connection is authorized.

> The **_⚙️OPS_** team should fix this!

---

class: extra-details

## Adding NetworkPolicies to **_🎸ROCKY_** and **_🎬MOVY_** namespaces

`Network policies` may be seen as the firewall feature of the pod network.
They rule ingress and egress network connections for a described subset of pods.

Please, refer to the [`Network policies` chapter in the High Five M4 module](./4.yml.html#toc-network-policies)

- In our case, we just add the file `~/container.training/k8s/M6-network-policies.yaml`
  <br/>to our `./tenants/base/movy` folder (a sketch of such a policy follows)

- without forgetting to update our `kustomization.yaml` file

- and without forgetting to commit 😁
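
For illustration, a minimal sketch of the kind of rule such a file could contain — hypothetical content, the real manifest ships with container.training:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: movy-test
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: {}    # only allow pods from the same namespace
```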

---

class: pic

![MOVY app with no data](images/M6-flux/flux-movy-no-datas.png)

---

### 🗺️ Where are we in our scenario?

<pre class="mermaid">
%%{init:
  {
    "theme": "default",
    "gitGraph": {
      "mainBranchName": "OPS",
      "mainBranchOrder": 0
    }
  }
}%%
gitGraph
  commit id:"0" tag:"start"
  branch ROCKY order:3
  branch MOVY order:4
  branch YouRHere order:5

  checkout OPS
  commit id:'Flux install on CLOUDY cluster' tag:'T01'
  branch TEST-env order:1
  commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT

  checkout OPS
  commit id:'Flux config. for TEST tenant' tag:'T03'
  commit id:'namespace isolation by RBAC'
  checkout TEST-env
  merge OPS id:'ROCKY tenant creation' tag:'T04'

  checkout OPS
  commit id:'ROCKY deploy. config.' tag:'R01'

  checkout TEST-env
  merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'

  checkout ROCKY
  commit id:'ROCKY' tag:'v1.0.0'

  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.0'

  checkout OPS
  commit id:'Ingress-controller config.' tag:'T05'
  checkout TEST-env
  merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'

  checkout OPS
  commit id:'ROCKY patch for ingress config.' tag:'R03'
  checkout TEST-env
  merge OPS id:'ingress config. for ROCKY app'

  checkout ROCKY
  commit id:'blue color' tag:'v1.0.1'
  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.1'

  checkout ROCKY
  commit id:'pink color' tag:'v1.0.2'
  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.2'

  checkout OPS
  commit id:'FLUX config for MOVY deployment' tag:'M01'
  checkout TEST-env
  merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'

  checkout MOVY
  commit id:'MOVY' tag:'v1.0.3'
  checkout TEST-env
  merge MOVY tag:'MOVY v1.0.3' type: REVERSE

  checkout OPS
  commit id:'Network policies'
  checkout TEST-env
  merge OPS type: HIGHLIGHT

  checkout YouRHere
  commit id:'x'
  checkout OPS
  merge YouRHere id:'YOU ARE HERE'

  checkout OPS
  commit id:'k0s install on METAL cluster' tag:'K01'
  commit id:'Flux config. for METAL cluster' tag:'K02'
  branch METAL_TEST-PROD order:3
  commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
  checkout OPS
  commit id:'Flux config. for OpenEBS' tag:'K03'
  checkout METAL_TEST-PROD
  merge OPS id:'openEBS on METAL' type: HIGHLIGHT

  checkout OPS
  commit id:'Prometheus install'
  checkout TEST-env
  merge OPS type: HIGHLIGHT

  checkout OPS
  commit id:'Kyverno install'
  commit id:'Kyverno rules'
  checkout TEST-env
  merge OPS type: HIGHLIGHT
</pre>

410 slides/flux/bootstrap.md (new file)
@@ -0,0 +1,410 @@

# T02- Creating the **_⚗️TEST_** env on our **_☁️CLOUDY_** cluster

Let's take a look at our **_☁️CLOUDY_** cluster!

**_☁️CLOUDY_** is a Kubernetes cluster created with the [Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) managed service

This managed cluster comes preinstalled with specific features:
- the Kubernetes dashboard
- specific _Storage Classes_ based on Scaleway _IaaS_ block storage offerings
- a `Cilium` _CNI_ stack already set up

---

## Accessing the managed Kubernetes cluster

To access our cluster, we'll connect via [`shpod`](https://github.com/jpetazzo/shpod)

.lab[

- If you already have kubectl on your desktop computer

```bash
kubectl -n shpod run shpod --image=jpetazzo/shpod
kubectl -n shpod exec -it shpod -- bash
```

- or directly via ssh

```bash
ssh -p myPort k8s@mySHPODSvcIpAddress
```

]

---

## Flux installation

Once `Flux` is installed,
the **_⚙️OPS_** team exclusively operates its clusters by updating a code base in a `Github` repository

_GitOps_ and `Flux` enable the **_⚙️OPS_** team to rely on a _first-class citizen pattern_ of the Kubernetes world through these steps:

- describe the **desired target state**
- and let the **automated convergence** happen

---

### Checking prerequisites

The `Flux` _CLI_ is available in our `shpod` pod

Before installation, we need to check that:
- the `Flux` _CLI_ is correctly installed
- it can connect to the `API server`
- our versions of `Flux` and Kubernetes are compatible

.lab[

```bash
k8s@shpod:~$ flux --version
flux version 2.5.1

k8s@shpod:~$ flux check --pre
► checking prerequisites
✔ Kubernetes 1.32.3 >=1.30.0-0
✔ prerequisites checks passed
```

]

---

### Git repository for Flux configuration

The **_⚙️OPS_** team uses the `Flux` _CLI_
- to create a `git` repository named `fleet-config-using-flux-XXXXX` (⚠ replace `XXXXX` by a personal suffix)
- in our `Github` organization named `container-training-fleet`

Prerequisites are:
- the `Flux` _CLI_ needs a `Github` personal access token (_PAT_)
  - to create and/or access the `Github` repository
  - to give permissions to existing teams in our `Github` organization
- the PAT needs _CRUD_ permissions on our `Github` organization's
  - repositories
  - admin:public_key
  - users

- As the **_⚙️OPS_** team, let's create a `Github` personal access token…

---

class: pic

![Github PAT](images/M6-flux/github-pat.png)

---

### Creating the dedicated `Github` repo to host Flux config

.lab[

- let's replace the `GITHUB_TOKEN` value with our _Personal Access Token_
- and the `GITHUB_REPO` value with our specific repository name

```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
  export GITHUB_USER="container-training-fleet" && \
  export GITHUB_REPO="fleet-config-using-flux-XXXXX"

k8s@shpod:~$ flux bootstrap github \
  --owner=${GITHUB_USER} \
  --repository=${GITHUB_REPO} \
  --team=OPS \
  --team=ROCKY --team=MOVY \
  --path=clusters/CLOUDY
```

]

---

class: extra-details

Here is the result

```bash
✔ repository "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX" created
► reconciling repository permissions
✔ granted "maintain" permissions to "OPS"
✔ granted "maintain" permissions to "ROCKY"
✔ granted "maintain" permissions to "MOVY"
► reconciling repository permissions
✔ reconciled repository permissions
► cloning branch "main" from Git repository "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ committed component manifests to "main" ("7c97bdeb5b932040fd8d8a65fe1dc84c66664cbf")
► pushing component manifests to "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
✔ component manifests are up to date
► installing components in "flux-system" namespace
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
► generating source secret
✔ public key: ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFqaT8B8SezU92qoE+bhnv9xONv9oIGuy7yVAznAZfyoWWEVkgP2dYDye5lMbgl6MorG/yjfkyo75ETieAE49/m9D2xvL4esnSx9zsOLdnfS9W99XSfFpC2n6soL+Exodw==
✔ configured deploy key "flux-system-main-flux-system-./clusters/CLOUDY" for "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX"
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
► generating sync manifests
✔ generated sync manifests
✔ committed sync manifests to "main" ("11035e19cabd9fd2c7c94f6e93707f22d69a5ff2")
► pushing sync manifests to "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for GitRepository "flux-system/flux-system" to be reconciled
✔ GitRepository reconciled successfully
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
```

---

### Flux configures Github repository access for teams

- `Flux` sets up permissions that allow teams within our organization to **access** the `Github` repository as maintainers
- Teams need to exist before `Flux` proceeds to this configuration

![Github teams](images/M6-flux/github-teams.png)

---

### ⚠️ Disclaimer

- In this lab, adding these teams as maintainers was merely a demonstration of how the `Flux` _CLI_ sets up permissions in Github

- But there is no need for dev teams to have access to this `Github` repository

- One advantage of _GitOps_ lies in its ability to easily set up 💪🏼 **separation of concerns** by using multiple `Flux` sources

---

### 📂 Flux config files

`Flux` has been successfully installed onto our **_☁️CLOUDY_** Kubernetes cluster!

Its configuration is managed through a _GitOps_ workflow sourced directly from our `Github` repository

Let's review the `Flux` configuration files we've created and pushed into the `Github` repository…
… as well as the corresponding components running in our Kubernetes cluster

![Flux config files](images/M6-flux/flux-config-files.png)

---

class: pic
<!-- FIXME: wrong schema -->
![Flux install](images/M6-flux/flux-install.png)

---

class: extra-details

### Flux resources 1/2

.lab[

```bash
k8s@shpod:~$ kubectl get all --namespace flux-system
NAME                                           READY   STATUS    RESTARTS   AGE
pod/helm-controller-b6767d66-h6qhk             1/1     Running   0          5m
pod/kustomize-controller-57c7ff5596-94rnd      1/1     Running   0          5m
pod/notification-controller-58ffd586f7-zxfvk   1/1     Running   0          5m
pod/source-controller-6ff87cb475-g6gn6         1/1     Running   0          5m

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/notification-controller   ClusterIP   10.104.139.156   <none>        80/TCP    5m1s
service/source-controller         ClusterIP   10.106.120.137   <none>        80/TCP    5m
service/webhook-receiver          ClusterIP   10.96.28.236     <none>        80/TCP    5m
(…)
```

]

---

class: extra-details

### Flux resources 2/2

.lab[

```bash
k8s@shpod:~$ kubectl get all --namespace flux-system
(…)
NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helm-controller           1/1     1            1           5m
deployment.apps/kustomize-controller      1/1     1            1           5m
deployment.apps/notification-controller   1/1     1            1           5m
deployment.apps/source-controller         1/1     1            1           5m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/helm-controller-b6767d66             1         1         1       5m
replicaset.apps/kustomize-controller-57c7ff5596      1         1         1       5m
replicaset.apps/notification-controller-58ffd586f7   1         1         1       5m
replicaset.apps/source-controller-6ff87cb475         1         1         1       5m
```

]

---

### Flux components

- the `source controller` monitors `Git` repositories to apply Kubernetes resources on the cluster

- the `Helm controller` checks for new `Helm` _chart_ releases in `Helm` repositories and installs updates as needed

- _CRDs_ store the `Flux` configuration within the Kubernetes control plane

---

class: extra-details

### Flux resources that have been created

.lab[

```bash
k8s@shpod:~$ flux get all --all-namespaces
NAMESPACE    NAME                       REVISION             SUSPENDED   READY   MESSAGE
flux-system  gitrepository/flux-system  main@sha1:d48291a8   False       True    stored artifact for revision 'main@sha1:d48291a8'

NAMESPACE    NAME                       REVISION             SUSPENDED   READY   MESSAGE
flux-system  kustomization/flux-system  main@sha1:d48291a8   False       True    Applied revision: main@sha1:d48291a8
```

]

---

### Flux CLI

The `Flux` command-line interface fulfills 3 primary functions:

1. It installs and configures the first mandatory `Flux` resources in a _GitOps_ `git` repository
   - ensuring proper access and permissions

2. It locally generates `YAML` files for desired `Flux` resources, so that we just need to `git push` them
   - _tenants_
   - sources
   - …

3. It requests the API server to manage `Flux`-related resources
   - _operators_
   - _CRDs_
   - logs

---

class: extra-details

### Flux -- for more info

Please, refer to the [`Flux` chapter in the High Five M3 module](./3.yml.html#toc-helm-chart-format)

---

### Flux relies on Kustomize

The `Flux` component named `kustomize controller` looks for `Kustomize` resources in `Flux` code-based sources

1. `Kustomize` looks for the `YAML` manifests listed in the `kustomization.yaml` file

2. and aggregates, hydrates and patches them following the `kustomization` configuration (a minimal example follows)
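
For instance, a minimal `kustomization.yaml` — like the one `kustomize create --autodetect` generated earlier for the rocky tenant — just lists the manifests to aggregate:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - rbac.yaml
  - sync.yaml
```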

---

class: extra-details

### 2 different kustomization resources

⚠️ `Flux` uses 2 distinct resources with `kind: Kustomization`

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
```

describes how Kustomize (the _CLI_ tool) appends and transforms `YAML` manifests into a single bunch of `YAML`-described resources

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
```

describes where the `Flux` `kustomize-controller` looks for a `kustomization.yaml` file in a given `Flux` code-based source

---

class: extra-details

### Kustomize -- for more info

Please, refer to the [`Kustomize` chapter in the High Five M3 module](./3.yml.html#toc-kustomize)

---

class: extra-details

### Group / Version / Kind -- for more info

For more info about how Kubernetes resource natures are identified by their `Group / Version / Kind` triplet…
… please, refer to the [`Kubernetes API` chapter in the High Five M5 module](./5.yml.html#toc-the-kubernetes-api)

---

### 🗺️ Where are we in our scenario?

<pre class="mermaid">
%%{init:
  {
    "theme": "default",
    "gitGraph": {
      "mainBranchName": "OPS",
      "mainBranchOrder": 0
    }
  }
}%%
gitGraph
  commit id:"0" tag:"start"
  branch ROCKY order:3
  branch MOVY order:4
  branch YouRHere order:5

  checkout OPS
  commit id:'Flux install on CLOUDY cluster' tag:'T01'
  branch TEST-env order:1
  commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT

  checkout YouRHere
  commit id:'x'
  checkout OPS
  merge YouRHere id:'YOU ARE HERE'

  checkout OPS
  commit id:'Flux config. for TEST tenant' tag:'T03'
  commit id:'namespace isolation by RBAC'
  checkout TEST-env
  merge OPS id:'ROCKY tenant creation' tag:'T04'

  checkout OPS
  commit id:'ROCKY deploy. config.' tag:'R01'

  checkout TEST-env
  merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'

  checkout ROCKY
  commit id:'ROCKY' tag:'v1.0.0'

  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.0'
</pre>

284 slides/flux/ingress.md (new file)
@@ -0,0 +1,284 @@

# T05- Configuring ingress for the **_🎸ROCKY_** app

🍾 The **_🎸ROCKY_** team has just deployed its `v1.0.0`

We would like to reach it from our workstations
The regular way to do this in Kubernetes is to configure an `Ingress` resource.

- `Ingress` is an abstract resource that manages how services are exposed outside of the Kubernetes cluster (Layer 7).
- It relies on `ingress-controller`(s), the technical solutions that handle all the rules related to ingress.

- Available features vary, depending on the `ingress-controller`: load-balancing, networking, firewalling, API management, throttling, TLS encryption, etc.
- An `ingress-controller` may provision Cloud _IaaS_ network resources such as load-balancers, persistent IPs, etc.

---

class: extra-details

## Ingress -- for more info

Please, refer to the [`Ingress` chapter in the High Five M2 module](./2.yml.html#toc-exposing-http-services-with-ingress-resources)

---

## Installing `ingress-nginx` as our `ingress-controller`

We'll use `ingress-nginx` (relying on `NGINX`), quite a popular choice.

- It is able to provision IaaS load-balancers in Scaleway Cloud services
- As a reverse-proxy, it is able to balance HTTP connections on an on-premises cluster

The **_⚙️OPS_** team adds this new install to its `Flux` config repo

---

### Creating a `Github` source in Flux for `ingress-nginx`

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  mkdir -p ./clusters/CLOUDY/ingress-nginx && \
  flux create source git ingress-nginx \
    --namespace=ingress-nginx \
    --url=https://github.com/kubernetes/ingress-nginx/ \
    --branch=release-1.12 \
    --export > ./clusters/CLOUDY/ingress-nginx/sync.yaml
```

]

---

### Creating a `kustomization` in Flux for `ingress-nginx`

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization ingress-nginx \
  --namespace=ingress-nginx \
  --source=GitRepository/ingress-nginx \
  --path="./deploy/static/provider/scw/" \
  --export >> ./clusters/CLOUDY/ingress-nginx/sync.yaml

k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  cp -p ~/container.training/k8s/M6-ingress-nginx-kustomization.yaml \
    ./clusters/CLOUDY/ingress-nginx/kustomization.yaml && \
  cp -p ~/container.training/k8s/M6-ingress-nginx-components.yaml \
    ~/container.training/k8s/M6-ingress-nginx-*-patch.yaml \
    ./clusters/CLOUDY/ingress-nginx/
```

]

---

### Applying the new config

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  git add ./clusters/CLOUDY/ingress-nginx && \
  git commit -m':wrench: :rocket: add Ingress-controller' && \
  git push
```

]

---

class: pic

![ingress-nginx pods](images/M6-flux/flux-nginx-pods.png)

---

class: pic

![Scaleway load-balancer](images/M6-flux/scaleway-lb.png)

---

class: extra-details

### Using external Git sources

💡 Note that you can directly use a public `Github` repository (one not maintained by your company).

- If you have to alter the configuration, `Kustomize` patching capabilities might help.

- Depending on the _gitflow_ this repository uses, updates will be deployed automatically to your cluster (here we're using a `release` branch).

- This repo exposes a `kustomization.yaml`. Well done!

---

## Adding the `ingress` resource to the ROCKY app

.lab[

- Add the new manifest to our kustomization bunch (a hypothetical sketch of it follows)

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  cp -pr ~/container.training/k8s/M6-rocky-ingress.yaml ./tenants/base/rocky && \
  echo '- M6-rocky-ingress.yaml' >> ./tenants/base/rocky/kustomization.yaml
```

- Commit and it's done

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  git add . && \
  git commit -m':wrench: :rocket: add Ingress' && \
  git push
```

]
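
For reference, a hypothetical sketch of what `M6-rocky-ingress.yaml` could look like — the actual file ships with container.training; the `web` service and its port 80 come from the outputs we saw earlier:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rocky
spec:
  ingressClassName: nginx
  rules:
    - host: rocky.test.mybestdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```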

---

class: pic

![rocky ingress](images/M6-flux/flux-rocky-ingress.png)

---

### Here is the result

After Flux has reconciled the whole bunch of sources and kustomizations, you should see

- the `Ingress-NGinX` controller components in the `ingress-nginx` namespace
- a new `Ingress` in the `rocky-test` namespace

.lab[

```bash
k8s@shpod:~$ kubectl get all -n ingress-nginx && \
  kubectl get ingress -n rocky-test

k8s@shpod:~$ \
  PublicIP=$(kubectl get ingress rocky -n rocky-test \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

k8s@shpod:~$ \
  curl --header 'Host: rocky.test.mybestdomain.com' http://$PublicIP/
```

]

---

class: pic

![ROCKY app reachable](images/M6-flux/flux-rocky-reachable.png)

---

## Upgrading the **_🎸ROCKY_** app

The **_🎸ROCKY_** team is now fully able to upgrade and deploy its app autonomously.

Just give it a try!
- In the `deployment.yaml` file
- in the app repo ([https://github.com/Musk8teers/container.training-spring-music/])
- you can change the `spec.template.spec.containers.image` tag to `1.0.1` and then to `1.0.2` (see the sketch below)
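
The line to bump, sketched here — the container name is an assumption; only the image tag matters:

```yaml
# excerpt of the Deployment in the app repo (rocky branch)
spec:
  template:
    spec:
      containers:
        - name: web   # assumed container name
          image: ghcr.io/musk8teers/container.training-spring-music:1.0.1
```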
Don't forget which branch is watched by the `Flux` Git source named `rocky`

Don't forget to commit!

---

## A few considerations

- The **_⚙️OPS_** team has to decide how to manage name resolution for public IPs
  - Scaleway proposes to expose a wildcard domain for its Kubernetes clusters

- Here, we chose to have the `Ingress-controller` (which makes sense) but the `Ingress` resources as well managed by the **_⚙️OPS_** team.
  - It might have been done in many different ways!

---

### 🗺️ Where are we in our scenario?

<pre class="mermaid">
%%{init:
  {
    "theme": "default",
    "gitGraph": {
      "mainBranchName": "OPS",
      "mainBranchOrder": 0
    }
  }
}%%
gitGraph
  commit id:"0" tag:"start"
  branch ROCKY order:3
  branch MOVY order:4
  branch YouRHere order:5

  checkout OPS
  commit id:'Flux install on CLOUDY cluster' tag:'T01'
  branch TEST-env order:1
  commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT

  checkout OPS
  commit id:'Flux config. for TEST tenant' tag:'T03'
  commit id:'namespace isolation by RBAC'
  checkout TEST-env
  merge OPS id:'ROCKY tenant creation' tag:'T04'

  checkout OPS
  commit id:'ROCKY deploy. config.' tag:'R01'

  checkout TEST-env
  merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'

  checkout ROCKY
  commit id:'ROCKY' tag:'v1.0.0'

  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.0'

  checkout OPS
  commit id:'Ingress-controller config.' tag:'T05'
  checkout TEST-env
  merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'

  checkout OPS
  commit id:'ROCKY patch for ingress config.' tag:'R03'
  checkout TEST-env
  merge OPS id:'ingress config. for ROCKY app'

  checkout ROCKY
  commit id:'blue color' tag:'v1.0.1'
  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.1'

  checkout ROCKY
  commit id:'pink color' tag:'v1.0.2'
  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.2'

  checkout YouRHere
  commit id:'x'
  checkout OPS
  merge YouRHere id:'YOU ARE HERE'

  checkout OPS
  commit id:'FLUX config for MOVY deployment' tag:'M01'
  checkout TEST-env
  merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'

  checkout MOVY
  commit id:'MOVY' tag:'v1.0.3'
  checkout TEST-env
  merge MOVY tag:'MOVY v1.0.3' type: REVERSE

  checkout OPS
  commit id:'Network policies'
  checkout TEST-env
  merge OPS type: HIGHLIGHT
</pre>

241 slides/flux/kyverno.md (new file)
@@ -0,0 +1,241 @@

## Introducing Kyverno

Kyverno is a tool that extends Kubernetes permission management to express complex policies…
<br/>… and to override manifests delivered by client teams.

---

class: extra-details

### Kyverno -- for more info

Please, refer to the [`Setting up Kubernetes` chapter in the High Five M4 module](./4.yml.html#toc-policy-management-with-kyverno) for more info about `Kyverno`.

---

## Creating a `Helm` source in Flux for the Kyverno Helm chart

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  mkdir -p clusters/CLOUDY/kyverno

k8s@shpod ~$ flux create source helm kyverno \
  --namespace=kyverno \
  --url=https://kyverno.github.io/kyverno/ \
  --interval=3m \
  --export > ./clusters/CLOUDY/kyverno/sync.yaml
```

]

---

## Creating the `HelmRelease` in Flux

.lab[

```bash
k8s@shpod ~$ flux create helmrelease kyverno \
  --namespace=kyverno \
  --source=HelmRepository/kyverno.flux-system \
  --target-namespace=kyverno \
  --create-target-namespace=true \
  --chart-version=">=3.4.2" \
  --chart=kyverno \
  --export >> ./clusters/CLOUDY/kyverno/sync.yaml
```

]

---

## Add a Kyverno policy

This policy is just an example.
It enforces the use of a `ServiceAccount` in `Flux` configurations (sketched below)

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  mkdir -p clusters/CLOUDY/kyverno-policies && \
  cp -pr ~/container.training/k8s/M6-kyverno-enforce-service-account.yaml \
    ./clusters/CLOUDY/kyverno-policies/
```
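
For illustration, a hypothetical sketch of what `M6-kyverno-enforce-service-account.yaml` could contain — the real policy ships with container.training:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-flux-service-account
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-service-account
      match:
        any:
          - resources:
              kinds:
                - Kustomization   # targets the Flux Kustomization resources
      validate:
        message: "Flux Kustomizations must set spec.serviceAccountName."
        pattern:
          spec:
            serviceAccountName: "?*"   # any non-empty value
```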

---

### Creating a `kustomization` in Flux for Kyverno policies

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
  flux create kustomization kyverno-policies \
    --namespace=kyverno \
    --source=GitRepository/flux-system \
    --path="./clusters/CLOUDY/kyverno-policies/" \
    --prune true --interval 5m \
    --depends-on kyverno \
    --export >> ./clusters/CLOUDY/kyverno-policies/sync.yaml
```

]

---

## Add a Kyverno dependency for the **_⚗️TEST_** cluster

- Now that we've got `Kyverno` policies,
  - the ops team wants any upgrade from any kustomization in our dev team tenants
  - to wait for the `kyverno` policies to be reconciled (from a `Flux` perspective)

- update the file `./clusters/CLOUDY/tenants.yaml`
  - by adding this property: `spec.dependsOn.{name: kyverno-policies}` (see the sketch below)
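
The addition, sketched as an excerpt of the `Kustomization` resources in `./clusters/CLOUDY/tenants.yaml`:

```yaml
spec:
  dependsOn:
    - name: kyverno-policies
```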

---

class: pic

![kyverno dependsOn failed](images/M6-flux/flux-kyverno-dependson-failed.png)

---

### Debugging

The `kyverno-policies` `Kustomization` failed because the `spec.dependsOn` property can only target a resource of the same `Kind`.

- Let's remove the `spec.dependsOn` property.

Now the `Kustomizations` for the **_🎸ROCKY_** and **_🎬MOVY_** tenants fail because of our policies.

---

### 🗺️ Where are we in our scenario?

<pre class="mermaid">
%%{init:
  {
    "theme": "default",
    "gitGraph": {
      "mainBranchName": "OPS",
      "mainBranchOrder": 0
    }
  }
}%%
gitGraph
  commit id:"0" tag:"start"
  branch ROCKY order:4
  branch MOVY order:5
  branch YouRHere order:6

  checkout OPS
  commit id:'Flux install on CLOUDY cluster' tag:'T01'
  branch TEST-env order:1
  commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT

  checkout OPS
  commit id:'Flux config. for TEST tenant' tag:'T03'
  commit id:'namespace isolation by RBAC'
  checkout TEST-env
  merge OPS id:'ROCKY tenant creation' tag:'T04'

  checkout OPS
  commit id:'ROCKY deploy. config.' tag:'R01'

  checkout TEST-env
  merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'

  checkout ROCKY
  commit id:'ROCKY' tag:'v1.0.0'

  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.0'

  checkout OPS
  commit id:'Ingress-controller config.' tag:'T05'
  checkout TEST-env
  merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'

  checkout OPS
  commit id:'ROCKY patch for ingress config.' tag:'R03'
  checkout TEST-env
  merge OPS id:'ingress config. for ROCKY app'

  checkout ROCKY
  commit id:'blue color' tag:'v1.0.1'
  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.1'

  checkout ROCKY
  commit id:'pink color' tag:'v1.0.2'
  checkout TEST-env
  merge ROCKY tag:'ROCKY v1.0.2'

  checkout OPS
  commit id:'FLUX config for MOVY deployment' tag:'M01'
  checkout TEST-env
  merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'

  checkout MOVY
  commit id:'MOVY' tag:'v1.0.3'
  checkout TEST-env
  merge MOVY tag:'MOVY v1.0.3' type: REVERSE

  checkout OPS
  commit id:'Network policies'
  checkout TEST-env
  merge OPS type: HIGHLIGHT tag:'T07'

  checkout OPS
  commit id:'k0s install on METAL cluster' tag:'K01'
  commit id:'Flux config. for METAL cluster' tag:'K02'
  branch METAL_TEST-PROD order:3
  commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
  checkout OPS
  commit id:'Flux config. for OpenEBS' tag:'K03'
  checkout METAL_TEST-PROD
  merge OPS id:'openEBS on METAL' type: HIGHLIGHT

  checkout OPS
  commit id:'Prometheus install'
  checkout TEST-env
  merge OPS type: HIGHLIGHT

  checkout OPS
  commit id:'Kyverno install'
  commit id:'Kyverno rules'
  checkout TEST-env
  merge OPS type: HIGHLIGHT

  checkout YouRHere
  commit id:'x'
  checkout OPS
  merge YouRHere id:'YOU ARE HERE'

  checkout OPS
  commit id:'Flux config. for PROD tenant' tag:'P01'
  branch PROD-env order:2
  commit id:'ROCKY tenant on PROD'
  checkout OPS
  commit id:'ROCKY patch for PROD' tag:'R04'
  checkout PROD-env
  merge OPS id:'PROD ready to deploy ROCKY' type: HIGHLIGHT
  checkout PROD-env
  merge ROCKY tag:'ROCKY v1.0.2'

  checkout MOVY
  commit id:'MOVY HELM chart' tag:'M03'
  checkout TEST-env
  merge MOVY tag:'MOVY v1.0'
</pre>

251 slides/flux/observability.md (new file)
@@ -0,0 +1,251 @@

# Installing the monitoring stack

The **_⚙️OPS_** team wants to have a real monitoring stack for its clusters.
Let's deploy `Prometheus` and `Grafana` onto the clusters.

---

## Creating a `Github` source in Flux for the monitoring components repository

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ mkdir -p clusters/CLOUDY/kube-prometheus-stack

k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create source git monitoring \
  --namespace=monitoring \
  --url=https://github.com/fluxcd/flux2-monitoring-example.git \
  --branch=main --export > ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
```

]

---
|
||||
|
||||
### Creating `kustomization` in Flux for monitoring stack
|
||||
|
||||
.lab[
|
||||
|
||||
```bash
|
||||
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization monitoring \
|
||||
--namespace=monitoring \
|
||||
--source=GitRepository/monitoring \
|
||||
--path="./monitoring/controllers/kube-prometheus-stack/" \
|
||||
--export >> ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
### Install Flux Grafana dashboards
|
||||
|
||||
.lab[
|
||||
|
||||
```bash
|
||||
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization dashboards \
|
||||
--namespace=monitoring \
|
||||
--source=GitRepository/monitoring \
|
||||
--path="./monitoring/configs/" \
|
||||
--export >> ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
|
||||
```
|
||||
|
||||
]
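
As before, these manifests only exist locally until we publish them; once pushed, we can watch `Flux` reconcile them (read-only check):

```bash
# Publish the monitoring manifests, then watch the reconciliation.
k8s@shpod:~/fleet-config-using-flux-XXXXX$ git add clusters/CLOUDY/kube-prometheus-stack && \
git commit -m "Add monitoring stack" && git push

k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux get kustomizations -n monitoring --watch
```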

---

class: pic

![flux-monitoring-pending.png](images/flux/flux-monitoring-pending.png)

---

## Flux repository sync is broken 😅

It seems that `Flux` on the **_☁️CLOUDY_** cluster is no longer able to authenticate over `ssh` to its `Github` config repository!

What happened?
When we installed `Flux` on the **_🤘METAL_** cluster, it generated a new `ssh` keypair and overrode the one used by **_☁️CLOUDY_** among the "deploy keys" of the `Github` repository.

⚠️ Beware of the `flux bootstrap` command!

We have to:
- generate a new keypair (or reuse an existing one), as sketched below
- add the private key to the Flux-dedicated secret in the **_☁️CLOUDY_** cluster
- add the public key to the "deploy keys" of the `Github` repository
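
Here is a minimal sketch of the keypair generation (the key path matches the next slide; adapt it to your environment):

```bash
# Generate a new ed25519 keypair without a passphrase:
# the private key will go to Flux, the public key to Github.
k8s@shpod:~$ ssh-keygen -t ed25519 -N "" -f /home/k8s/.ssh/id_ed25519
```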

---

### The command

.lab[

- The `Flux` _CLI_ helps us recreate the secret holding the `ssh` **private** key.

```bash
k8s@shpod:~$ flux create secret git flux-system \
--url=ssh://git@github.com/container-training-fleet/fleet-config-using-flux-XXXXX \
--private-key-file=/home/k8s/.ssh/id_ed25519
```

- copy the **public** key into the "deploy keys" of the `Github` repository (for instance with the `gh` CLI, as sketched below)

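A possible way to do that from the terminal, assuming the `gh` CLI is installed and authenticated (it is not part of this module):

```bash
# Register the public key as a read/write deploy key on the repository.
gh repo deploy-key add /home/k8s/.ssh/id_ed25519.pub \
  --repo container-training-fleet/fleet-config-using-flux-XXXXX \
  --title "flux-cloudy" --allow-write
```
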
]

---

class: pic

![repo-deploy-keys.png](images/flux/repo-deploy-keys.png)

---

## Access the Grafana dashboard

.lab[

- Get the `Host` and `IP` address to request

```bash
k8s@shpod:~$ kubectl -n monitoring get ingress
NAME      CLASS   HOSTS                                 ADDRESS        PORTS   AGE
grafana   nginx   grafana.test.metal.mybestdomain.com   62.210.39.83   80      6m30s
```

- Get the `Grafana` admin password

```bash
k8s@shpod:~$ k get secret kube-prometheus-stack-grafana -n monitoring \
-o jsonpath='{.data.admin-password}' | base64 -d
```

]
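
If the ingress is not reachable from your workstation, a port-forward is a handy fallback (a sketch; the service name below follows the kube-prometheus-stack chart defaults):

```bash
# Forward the Grafana service to localhost:3000, then browse
# http://localhost:3000 and log in as "admin" with the password above.
k8s@shpod:~$ kubectl -n monitoring port-forward svc/kube-prometheus-stack-grafana 3000:80
```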

---

class: pic

## And browse…

![grafana-dashboard](images/grafana-dashboard.png)

---

### 🗺️ Where are we in our scenario?

<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:4
branch MOVY order:5
branch YouRHere order:6

checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT

checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'

checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'

checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'

checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'

checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'

checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'

checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'

checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'

checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'

checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'

checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE

checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT tag:'T07'

checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT

checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT

checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'

checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT

checkout OPS
commit id:'Flux config. for PROD tenant' tag:'P01'
branch PROD-env order:2
commit id:'ROCKY tenant on PROD'
checkout OPS
commit id:'ROCKY patch for PROD' tag:'R04'
checkout PROD-env
merge OPS id:'PROD ready to deploy ROCKY' type: HIGHLIGHT
checkout PROD-env
merge ROCKY tag:'ROCKY v1.0.2'

checkout MOVY
commit id:'MOVY HELM chart' tag:'M03'
checkout TEST-env
merge MOVY tag:'MOVY v1.0'
</pre>

129
slides/flux/openebs.md
Normal file
@@ -0,0 +1,129 @@

# K03 - Installing OpenEBS as our CSI

`OpenEBS` is a _CSI_ solution capable of hyperconvergence, synchronous replication and other extra features.
It is installed with `Helm` charts.

- `Flux` is able to watch `Helm` repositories and install `HelmReleases`
- To inject its configuration into the `Helm` chart, `Flux` relies on a `ConfigMap` including the `values.yaml` file

.lab[

```bash
k8s@shpod ~$ mkdir -p ./clusters/METAL/openebs/ && \
cp -pr ~/container.training/k8s/M6-openebs-*.yaml \
./clusters/METAL/openebs/ && \
cd ./clusters/METAL/openebs/ && \
mv M6-openebs-kustomization.yaml kustomization.yaml && \
cd -
```

]

---

## Creating a `Helm` source in Flux for the OpenEBS Helm chart

.lab[

```bash
k8s@shpod ~$ flux create source helm openebs \
--url=https://openebs.github.io/openebs \
--interval=3m \
--export > ./clusters/METAL/openebs/sync.yaml
```

]

---

## Creating the `HelmRelease` in Flux

.lab[

```bash
k8s@shpod ~$ flux create helmrelease openebs \
--namespace=openebs \
--source=HelmRepository/openebs.flux-system \
--chart=openebs \
--values-from=ConfigMap/openebs-values \
--export >> ./clusters/METAL/openebs/sync.yaml
```

]
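
Once these manifests are committed and pushed, a couple of read-only commands let us check that the source and the release are reconciling (names as created above):

```bash
# List Helm repository sources and the OpenEBS release status.
k8s@shpod ~$ flux get sources helm -A
k8s@shpod ~$ flux get helmreleases -n openebs
```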
|
||||
|
||||
---
|
||||
|
||||
## 📂 Let's review the files
|
||||
|
||||
- `M6-openebs-components.yaml`
|
||||
</br>To include the `Flux` resources in the same _namespace_ where `Flux` installs the `OpenEBS` resources, we need to create the _namespace_ **before** the installation occurs
|
||||
|
||||
- `sync.yaml`
|
||||
</br>The resources `Flux` uses to watch and get the `Helm chart`
|
||||
|
||||
- `M6-openebs-values.yaml`
|
||||
</br> the `values.yaml` file that will be injected into the `Helm chart`
|
||||
|
||||
- `kustomization.yaml`
|
||||
</br>This one is a bit special: it includes a [ConfigMap generator](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/)
|
||||
|
||||
- `M6-openebs-kustomizeconfig.yaml`
|
||||
</br></br>This one is tricky: in order for `Flux` to trigger an upgrade of the `Helm Release` when the `ConfigMap` is altered, you need to explain to the `Kustomize ConfigMap generator` how the resources are relating with each others. 🤯
|
||||
|
||||
And here we go!
|
||||
|
||||
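Here is a minimal sketch of how these two files fit together (file names follow this module's conventions; the field values are illustrative, not the module's exact content):

```bash
# kustomization.yaml: generate the openebs-values ConfigMap from the
# values file, so any change to the values yields a new ConfigMap name.
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - M6-openebs-components.yaml
  - sync.yaml
configMapGenerator:
  - name: openebs-values
    files:
      - values.yaml=M6-openebs-values.yaml
configurations:
  - M6-openebs-kustomizeconfig.yaml
EOF

# M6-openebs-kustomizeconfig.yaml: tell the generator that the
# HelmRelease references the ConfigMap by name, so the generated
# (hash-suffixed) name is propagated into spec.valuesFrom.
cat > M6-openebs-kustomizeconfig.yaml <<'EOF'
nameReference:
  - kind: ConfigMap
    fieldSpecs:
      - path: spec/valuesFrom/name
        kind: HelmRelease
EOF
```
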
---

class: pic

![openEBS-commit.png](images/flux/openEBS-commit.png)

---

## And the result

Now, we have a _cluster_ featuring `openEBS`.
But still… The PersistentVolumeClaim remains in `Pending` state!😭

```bash
k8s@shpod ~$ kubectl get storageclass
NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-hostpath   openebs.io/local   Delete          WaitForFirstConsumer   false                  82m
```

We still don't have a default `StorageClass`!😤

---

### Manually enforcing the default `StorageClass`

Even though `Flux` constantly reconciles our resources, we can still test changes by hand.

.lab[

```bash
k8s@shpod ~$ flux suspend helmrelease openebs -n openebs
► suspending helmrelease openebs in openebs namespace
✔ helmrelease suspended

k8s@shpod ~$ kubectl patch storageclass openebs-hostpath \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

k8s@shpod ~$ k get storageclass
NAME                         PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-hostpath (default)   openebs.io/local   Delete          WaitForFirstConsumer   false                  82m
```

]
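
Since the `HelmRelease` is suspended, `Flux` won't fight our manual patch. The durable fix is to set the default-class annotation in the values `ConfigMap` (the exact key depends on the chart version), commit it, and resume reconciliation:

```bash
# Resume reconciliation once the durable change is committed;
# Flux will then upgrade the HelmRelease from the updated ConfigMap.
k8s@shpod ~$ flux resume helmrelease openebs -n openebs
```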

---

### Now the database is OK

```bash
k8s@shpod ~$ k get pvc,pods -n movy-test
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/postgresql-data-db-0   Bound    pvc-ede1634f-2478-42cd-8ee3-7547cd7cdde2   1Gi        RWO            openebs-hostpath   <unset>                 20m

NAME       READY   STATUS    RESTARTS   AGE
pod/db-0   1/1     Running   0          5h43m
(…)
```

354
slides/flux/scenario.md
Normal file
@@ -0,0 +1,354 @@

# Kubernetes in production — <br/>an end-to-end example

- Previous training modules focused on individual topics

  (e.g. RBAC, network policies, CRDs, Helm...)

- We will now show how to put everything together to deploy apps in production

  (dealing with typical challenges like: multiple apps, multiple teams, multiple clusters...)

- Our first challenge will be to pick and choose which components to use

  (among the vast [Cloud Native Landscape](https://landscape.cncf.io/))

- We'll start with a basic Kubernetes cluster (on cloud or on premises)

- We'll then enhance it by adding features one at a time

---

## The cast

There are 3 teams in our company:

- **_⚙️OPS_** is the platform engineering team

  - they're responsible for building and configuring Kubernetes clusters

- the **_🎸ROCKY_** team develops and manages the **_🎸ROCKY_** app

  - that app manages a collection of _rock & pop_ albums

  - it's deployed with plain YAML manifests

- the **_🎬MOVY_** team develops and manages the **_🎬MOVY_** app

  - that app manages a collection of _movie soundtrack_ albums

  - it's deployed with Helm charts

---

## Code and team organization

- **_🎸ROCKY_** and **_🎬MOVY_** reside in separate git repositories

- Each team can write code, build packages, and deploy their applications:

  - independently
  <br/>(= without having to worry about what's happening in the other repo)

  - autonomously
  <br/>(= without having to synchronize or obtain privileges from another team)

---

## Cluster organization

The **_⚙️OPS_** team manages 2 Kubernetes clusters:

- **_☁️CLOUDY_**: managed cluster from a public cloud provider

- **_🤘METAL_**: custom-built cluster installed on bare Linux servers

Let's see the differences between these clusters.

---

## **_☁️CLOUDY_** cluster

- Managed cluster from a public cloud provider ("Kubernetes-as-a-Service")

- HA control plane deployed and managed by the cloud provider

- Two worker nodes (potentially with cluster autoscaling)

- Usually comes pre-installed with some basic features

  (e.g. metrics-server, CNI, CSI, sometimes an ingress controller)

- Requires extra components to be production-ready

  (e.g. Flux or other gitops pipeline, observability...)

- Example: [Scaleway Kapsule][kapsule] (but many other KaaS options are available)

[kapsule]: https://www.scaleway.com/en/kubernetes-kapsule/

---

## **_🤘METAL_** cluster

- Custom-built cluster installed on bare Linux servers

- HA control plane deployed and managed by the **_⚙️OPS_** team

- 3 nodes

  - in our example, the nodes will run both the control plane and our apps

  - it is more typical to use dedicated control plane nodes
  <br/>(example: 3 control plane nodes + at least 3 worker nodes)

- Comes with even fewer pre-installed components than **_☁️CLOUDY_**

  (requiring more work from our **_⚙️OPS_** team)

- Example: we'll use [k0s] (but many other distros are available)

[k0s]: https://k0sproject.io/

## **_⚗️TEST_** and **_🏭PROD_**

- The **_⚙️OPS_** team creates 2 environments for each dev team

  (**_⚗️TEST_** and **_🏭PROD_**)

- These environments exist on both clusters

  (meaning 2 apps × 2 clusters × 2 envs = 8 envs total)

- The setup for each env and cluster should follow DRY principles

  (to ensure configurations are consistent and minimize maintenance)

- Each cluster and each env has its own lifecycle

  (= it should be possible to deploy, or add an extra component/feature...
  <br/>on one env without impacting the other)

---

### Multi-tenancy

Both **_🎸ROCKY_** and **_🎬MOVY_** teams should use **dedicated _"tenants"_** on each cluster/env

- the **_🎸ROCKY_** team should be able to deploy, upgrade and configure its app within its dedicated **namespace** without anybody else involved

- and the same for **_🎬MOVY_**

- neither team's deployments may interfere with the other's, maintaining a clean and conflict-free environment

---
## Application overview

- Both dev teams are working on an app to manage music albums

- This app is mostly based on a `Spring` framework demo called spring-music

- This lab uses a dedicated fork [container.training-spring-music](https://github.com/Musk8teers/container.training-spring-music):

  - with 2 branches dedicated to the **_🎸ROCKY_** and **_🎬MOVY_** teams

- The app architecture consists of 2 tiers:

  - a `Java/Spring` Web app

  - a `PostgreSQL` database

---

### 📂 specific file: application.yaml

This is where we configure the application to connect to the `PostgreSQL` database.

.lab[

🔍 Location: [/src/main/resources/application.yml](https://github.com/Musk8teers/container.training-spring-music/blob/main/src/main/resources/application.yml)

]

The `PROFILE=postgres` env var is set in the [docker-compose.yaml](https://github.com/Musk8teers/container.training-spring-music/blob/main/docker-compose.yml) file, for example…

---

### 📂 specific file: AlbumRepositoryPopulator.java

This is where the album collection is initially loaded from the file [`albums.json`](https://github.com/Musk8teers/container.training-spring-music/blob/main/src/main/resources/albums.json)

.lab[

🔍 Location: [`/src/main/java/org/cloudfoundry/samples/music/repositories/AlbumRepositoryPopulator.java`](https://github.com/Musk8teers/container.training-spring-music/blob/main/src/main/java/org/cloudfoundry/samples/music/repositories/AlbumRepositoryPopulator.java)

]

---

## 🚚 How to deploy?

The **_⚙️OPS_** team offers 2 deployment strategies that dev teams can use autonomously:

- **_🎸ROCKY_** uses a `Flux` _GitOps_ workflow based on regular Kubernetes `YAML` resources

- **_🎬MOVY_** uses a `Flux` _GitOps_ workflow based on `Helm` charts

---

## 🍱 What features?

<!-- TODO: complete this slide when all the modules are there -->
The **_⚙️OPS_** team aims to provide clusters offering the following features to its users:

- a network stack with efficient workload isolation

- ingress and load-balancing capabilities

- an enterprise-grade monitoring solution for real-time insights

- automated policy rule enforcement to control Kubernetes resources requested by dev teams

<!-- - HA PostgreSQL -->

<!-- - HTTPs certificates to expose the applications -->

---

## 🌰 In a nutshell

- 3 teams: **_⚙️OPS_**, **_🎸ROCKY_**, **_🎬MOVY_**

- 2 clusters: **_☁️CLOUDY_**, **_🤘METAL_**

- 2 envs per cluster and per dev team: **_⚗️TEST_**, **_🏭PROD_**

- 2 web apps (Java/Spring + PostgreSQL): one for pop and rock albums, another for movie soundtrack albums

- 2 deployment strategies: regular `YAML` resources + `Kustomize`, and `Helm` charts

> 💻 `Flux` is used both
> - to operate the clusters
> - and to manage the _GitOps_ deployment workflows

---

### What our scenario might look like…

<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:4
branch MOVY order:5
branch YouRHere order:6

checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'

checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT

checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'

checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'

checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'

checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'

checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'

checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'

checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'

checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'

checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'

checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'

checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE

checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT tag:'T07'

checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT

checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT

checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT

checkout OPS
commit id:'Flux config. for PROD tenant' tag:'P01'
branch PROD-env order:2
commit id:'ROCKY tenant on PROD'
checkout OPS
commit id:'ROCKY patch for PROD' tag:'R04'
checkout PROD-env
merge OPS id:'PROD ready to deploy ROCKY' type: HIGHLIGHT
checkout PROD-env
merge ROCKY tag:'ROCKY v1.0.2'

checkout MOVY
commit id:'MOVY HELM chart' tag:'M03'
checkout TEST-env
merge MOVY tag:'MOVY v1.0'
</pre>

200
slides/flux/tenants.md
Normal file
@@ -0,0 +1,200 @@

# Multi-tenant management with Flux

💡 Thanks to `Flux`, we can manage Kubernetes resources from inside the clusters.

The **_⚙️OPS_** team uses `Flux` with a _GitOps_ code base to:
- configure the clusters
- deploy tools and components to extend the clusters' capabilities
- configure _GitOps_ workflows for dev teams in **dedicated and isolated _tenants_**

The **_🎸ROCKY_** team uses `Flux` to deploy every new release of its app, by detecting every new `git push` event happening in its app `Github` repository

The **_🎬MOVY_** team uses `Flux` to deploy every new release of its app, packaged and published as a new `Helm` chart release

---

## Creating _tenants_ with Flux

While the basic `Flux` behavior is to use a single configuration directory applied with a cluster-wide role…

… it can also enable _multi-tenant_ configuration by:
- creating dedicated directories for each _tenant_ in its configuration code base
- and using a dedicated `ServiceAccount` with limited permissions to operate in each _tenant_

Several _tenants_ are created:
- per env
  - for **_⚗️TEST_**
  - and **_🏭PROD_**
- per team
  - for **_🎸ROCKY_**
  - and **_🎬MOVY_**

---

class: pic

![flux-tenants.png](images/flux/flux-tenants.png)

---

### Flux CLI works locally

First, we have to **locally** clone our `Flux` configuration `Github` repository:

- create an ssh key pair
- add the **public** key to the `Github` repository (**with write access**)
- and git clone the repository

---

### The command line 1/2

Creating the **_⚗️TEST_** _tenant_

.lab[

- ⚠️ Think about renaming the repo with your own suffix
```bash
k8s@shpod:~$ cd fleet-config-using-flux-XXXXX/
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
flux create kustomization tenant-test \
--namespace=flux-system \
--source=GitRepository/flux-system \
--path ./tenants/test \
--interval=1m \
--prune --export >> clusters/CLOUDY/tenants.yaml
```

]

---

### The command line 2/2

Then we create the **_🏭PROD_** _tenant_

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
flux create kustomization tenant-prod \
--namespace=flux-system \
--source=GitRepository/flux-system \
--path ./tenants/prod \
--interval=3m \
--prune --export >> clusters/CLOUDY/tenants.yaml
```

]

---

### 📂 Flux tenants.yaml files

Let's review the `fleet-config-using-flux-XXXXX/clusters/CLOUDY/tenants.yaml` file

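It should contain something along these lines (an illustrative excerpt; the fields mirror the flags we passed to `flux create kustomization`):

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ head -n 12 clusters/CLOUDY/tenants.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenant-test
  namespace: flux-system
spec:
  interval: 1m0s
  path: ./tenants/test
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```
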
⚠️ The `flux create` commands above create the `YAML` manifests **locally**

> ☝🏻 Don't forget to `git commit` and `git push` to `Github`!

---

class: pic

![flux-get-tenants.png](images/flux/flux-get-tenants.png)

---

### Our 1st Flux error

.lab[

```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux get all
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system gitrepository/flux-system main@sha1:0466652e False
True stored artifact for revision 'main@sha1:0466652e'

NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
kustomization/flux-system main@sha1:0466652e False True
Applied revision: main@sha1:0466652e
kustomization/tenant-prod False False
kustomization path not found: stat /tmp/kustomization-417981261/tenants/prod: no such file or directory
kustomization/tenant-test False False
kustomization path not found: stat /tmp/kustomization-2532810750/tenants/test: no such file or directory
```

]

> Our configuration may be incomplete 😅 (a sketch of the missing pieces follows)

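The error tells us that the `tenants/test` and `tenants/prod` paths don't exist in the repository yet. A minimal way to unblock the reconciliation (the layout is an assumption; the real tenant content comes in the next steps):

```bash
# Create the tenant paths with an initially empty kustomization,
# so tenant-test and tenant-prod have something valid to apply.
k8s@shpod:~/fleet-config-using-flux-XXXXX$ for env in test prod; do
  mkdir -p tenants/${env}
  printf '%s\n' 'apiVersion: kustomize.config.k8s.io/v1beta1' \
    'kind: Kustomization' 'resources: []' > tenants/${env}/kustomization.yaml
done
k8s@shpod:~/fleet-config-using-flux-XXXXX$ git add tenants && \
git commit -m "Add tenant paths" && git push
```
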
---

## Configuring Flux for the **_🎸ROCKY_** team

What the **_⚙️OPS_** team has to do:

- 🔧 Create a dedicated `rocky` _tenant_ for the **_⚗️TEST_** and **_🏭PROD_** envs on the cluster

- 🔧 Create the `Flux` source pointing to the `Github` repository embedding the **_🎸ROCKY_** app source code

- 🔧 Add a `kustomize` _patch_ into the global `Flux` config to include this specific `Flux` config. dedicated to the deployment of the **_🎸ROCKY_** app

What the **_🎸ROCKY_** team has to do:

- 👨💻 Create the `kustomization.yaml` file in the **_🎸ROCKY_** app source code repository on `Github`

---

### 🗺️ Where are we in our scenario?

<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5

checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT

checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'

checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'

checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'

checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'

checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'

checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
</pre>

BIN
slides/images/konnectivity.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 71 KiB

390
slides/k8s/k0s.md
Normal file
@@ -0,0 +1,390 @@

# K01 - Setting up a cluster with k0s

- Running a Kubernetes cluster in the cloud can be relatively straightforward

- If our cloud provider offers a managed Kubernetes service, it can be as easy as:

  - clicking a few buttons in their web console

  - a short one-liner leveraging their CLI

  - applying a [Terraform / OpenTofu configuration][one-kubernetes]

- What if our cloud provider does not offer a managed Kubernetes service?

- What if we want to run Kubernetes on premises?

[one-kubernetes]: https://github.com/jpetazzo/container.training/tree/main/prepare-labs/terraform/one-kubernetes

---

## A typical managed Kubernetes cluster

For instance, with Scaleway's Kapsule, we can easily get a cluster with:

- a CNI configuration providing pod network connectivity and network policies

  (Cilium by default; Calico and Kilo are also supported)

- a Cloud Controller Manager

  (to automatically label nodes; and to implement `Services` of type `LoadBalancer`)

- a CSI plugin and `StorageClass` leveraging their network-attached block storage API

- `metrics-server` to check resource utilization and enable horizontal pod autoscaling

- optionally, the cluster autoscaler to dynamically add/remove nodes

- optionally, a management web interface with the Kubernetes dashboard

---

## A typical cluster installed with `kubeadm`

When using a tool like `kubeadm`, we get:

- a basic control plane running on a single node

- some basic services like CoreDNS and kube-proxy

- no CNI configuration

  (our cluster won't work without one; we need to pick one and set it up ourselves)

- no Cloud Controller Manager

- no CSI plugin, no `StorageClass`

- no `metrics-server`, no cluster autoscaler, no dashboard

---

class: extra-details

## On premises Kubernetes distributions

As of October 2025, the [CNCF landscape](https://landscape.cncf.io/?fullscreen=yes&zoom=200&group=certified-partners-and-providers) lists:

- more than 60 [distributions](https://landscape.cncf.io/guide#platform--certified-kubernetes-distribution),

- at least 18 [installers](https://landscape.cncf.io/guide#platform--certified-kubernetes-installer),

- more than 20 [container runtimes](https://landscape.cncf.io/guide#runtime--container-runtime),

- more than 25 Cloud Native [network](https://landscape.cncf.io/guide#runtime--cloud-native-network) solutions,

- more than 70 Cloud Native [storage](https://landscape.cncf.io/guide#runtime--cloud-native-storage) solutions.

Which one(s) are we going to choose? And why?

---

## Lightweight distributions

- Some Kubernetes distributions put an emphasis on being "lightweight":

  - removing non-essential features or making them optional

  - reducing or removing dependencies on external programs and libraries

  - optionally replacing etcd with another data store (e.g. built-in sqlite)

  - sometimes bundling together multiple components in a single binary for simplicity

- They often promise easier maintenance (e.g. upgrades)

- This makes them ideal for "edge" and development environments

- And sometimes they also fit the bill for regular production clusters!

---

## Introducing k0s

- Open source, lightweight Kubernetes distribution

- Developed and maintained by Mirantis

  - a long-time software vendor in the Kubernetes ecosystem

  - bought Docker Enterprise in 2019

- Addresses multiple segments:

  - edge computing

  - development

  - enterprise-grade HA environments

- Fully supported by Mirantis (used in [MKE4], [k0rdent], [k0smotron]...)

[MKE4]: https://www.mirantis.com/blog/mirantis-kubernetes-engine-4-released/
[k0rdent]: https://k0rdent.io/
[k0smotron]: https://k0smotron.io/

---

## `k0s` package

Its single binary includes:

- the `kubectl` CLI

- `kubelet` and a container engine (`containerd`)

- Kubernetes core control plane components

  (API server, scheduler, controller manager, etcd)

- Network components

  (like `konnectivity` and core CNI plugins)

- install, uninstall, back up, restore features

- helpers to fetch images needed for airgap environments (CoreDNS, kube-proxy...)

---

class: extra-details

## Konnectivity

- Kubernetes cluster architecture is very versatile

  (the control plane can run inside or outside of the cluster, in pods or not...)

- The control plane needs to [communicate with kubelets][api-server-to-kubelet]

  (e.g. to retrieve logs, attach to containers, forward ports...)

- The control plane also needs to [communicate with pods][api-server-to-nodes-pods-services]

  (e.g. when running admission or conversion webhooks, or aggregated APIs, in Pods)

- In some scenarios, there is no easy way for the control plane to reach nodes and pods

- The traditional approach has been to use SSH tunnels

- The modern approach is to use Konnectivity

[api-server-to-kubelet]: https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/#api-server-to-kubelet
[api-server-to-nodes-pods-services]: https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/#api-server-to-nodes-pods-and-services

---

class: extra-details

## Konnectivity architecture

- A konnectivity *server* (or *proxy*) runs on the control plane

- A konnectivity *agent* runs on each worker node (typically through a DaemonSet)

- Each agent maintains an RPC tunnel to the server

- When the control plane needs to connect to a pod or node, it solicits the proxy

---

class: pic

![konnectivity.png](images/konnectivity.png)

---

## `k0sctl`

- It is possible to use "raw" `k0s`

  (that works great for e.g. single-node clusters)

- There is also a tool called `k0sctl`

  (wrapping `k0s` and facilitating multi-node installations)

.lab[

- Download the `k0sctl` binary:

```bash
curl -fsSL https://github.com/k0sproject/k0sctl/releases/download/v0.25.1/k0sctl-linux-amd64 \
> /usr/local/bin/k0sctl
chmod +x /usr/local/bin/k0sctl
```

]

---

## `k0sctl` configuration file

.lab[

- Create a default configuration file:
```bash
k0sctl init \
--controller-count 3 \
--user docker \
--k0s m621 m622 m623 > k0sctl.yaml
```

- Edit the following field so that controller nodes also run kubelet:

  `spec.hosts[*].role: controller+worker`

- Add the following field so that controller nodes can run normal workloads:

  `spec.hosts[*].noTaints: true`

]

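If you prefer a non-interactive edit, a one-liner along these lines also works (this assumes `yq` v4 is available; it is not part of the module):

```bash
# Set the role and noTaints fields on every host entry in place.
yq -i '.spec.hosts[].role = "controller+worker" | .spec.hosts[].noTaints = true' k0sctl.yaml
```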

---

## Deploy the cluster

- `k0sctl` will connect to all our nodes using SSH

- It will copy `k0s` to the nodes

- ...And invoke it with the correct parameters

- ✨️ Magic! ✨️

.lab[

- Let's do this!
```bash
k0sctl apply --config k0sctl.yaml
```

]

---

## Check the results

- `k0s` has multiple troubleshooting commands to check cluster health

.lab[

- Check cluster status:
```bash
sudo k0s status
```

]

- The result should look like this:
```
Version: v1.33.1+k0s.1
Process ID: 60183
Role: controller
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:
```

---

## Checking etcd status

- We can also check the status of our etcd cluster

.lab[

- Check that the etcd cluster has 3 members:
```bash
sudo k0s etcd member-list
```
]

- The result should look like this:
```
{"members":{"m621":"https://10.10.3.190:2380","m622":"https://10.10.2.92:2380",
"m623":"https://10.10.2.110:2380"}}
```

---

## Running `kubectl` commands

- `k0s` embeds `kubectl` as well

.lab[

- Check that our nodes are all `Ready`:
```bash
sudo k0s kubectl get nodes
```

]

- The result should look like this:
```
NAME   STATUS   ROLES           AGE   VERSION
m621   Ready    control-plane   66m   v1.33.1+k0s
m622   Ready    control-plane   66m   v1.33.1+k0s
m623   Ready    control-plane   66m   v1.33.1+k0s
```

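To run `kubectl` from our own machine instead, `k0sctl` can produce an admin kubeconfig (a sketch; run it from the directory holding `k0sctl.yaml`):

```bash
# Write an admin kubeconfig for the new cluster and point kubectl at it.
k0sctl kubeconfig --config k0sctl.yaml > kubeconfig
export KUBECONFIG=$PWD/kubeconfig
kubectl get nodes
```
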
---

class: extra-details

## Single node install (FYI!)

Just in case you need to quickly get a single-node cluster with `k0s`...

Download `k0s`:
```bash
curl -sSLf https://get.k0s.sh | sudo sh
```

Set up the control plane and other components:
```bash
sudo k0s install controller --single
```

Start it:
```bash
sudo k0s start
```

---

class: extra-details

## Single node uninstall

To stop the running cluster:
```bash
sudo k0s stop
```

Reset and wipe its state:
```bash
sudo k0s reset
```

---

## Deploying shpod

- Our machines might be very barebones

- Let's get ourselves an environment with completion, colors, Helm, etc.

.lab[

- Run shpod:
```bash
curl https://shpod.in | sh
```

]