mirror of https://github.com/jpetazzo/container.training.git (synced 2026-02-14 09:39:56 +00:00)
🏭️ Refactor Kyverno chapter
- split out the kyverno 'colors' policies
- add a concrete example about conflicting ingress resources
249 slides/k8s/kyverno-colors.md (new file)
@@ -0,0 +1,249 @@

## Painting pods

- As an example, we'll implement a policy regarding "Pod color"

- The color of a Pod is the value of the label `color`

- Example: `kubectl label pod hello color=yellow` to paint a Pod in yellow

- We want to implement the following policies:

  - color is optional (i.e. the label is not required)

  - if color is set, it *must* be `red`, `green`, or `blue`

  - once the color has been set, it cannot be changed

  - once the color has been set, it cannot be removed

---

## Immutable primary colors, take 1

- First, we will add a policy to block forbidden colors

  (i.e. only allow `red`, `green`, or `blue`)

- One possible approach:

  - *match* all pods that have a `color` label that is not `red`, `green`, or `blue`

  - *deny* these pods

- We could also *match* all pods, then *deny* with a condition

---

.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-1.yaml]
```
]
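
The included file isn't expanded on this page. As a rough idea only, a rule implementing this approach could look like the sketch below (hypothetical policy and rule names; the actual k8s/kyverno-pod-color-1.yaml may be written differently):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: pod-color-policy-1   # hypothetical name
spec:
  validationFailureAction: Enforce
  rules:
  - name: forbid-exotic-colors
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "If the label color is set, it must be red, green, or blue."
      deny:
        conditions:
          all:
          # The empty string is allowed so that the label remains optional.
          - key: "{{ request.object.metadata.labels.color || '' }}"
            operator: AnyNotIn
            value: [ "red", "green", "blue", "" ]
```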

---

## Testing without the policy

- First, let's create a pod with an "invalid" label

  (while we still can!)

- We will use this later

.lab[

- Create a pod:
  ```bash
  kubectl run test-color-0 --image=nginx
  ```

- Apply a color label:
  ```bash
  kubectl label pod test-color-0 color=purple
  ```

]

---

## Load and try the policy

.lab[

- Load the policy:
  ```bash
  kubectl apply -f ~/container.training/k8s/kyverno-pod-color-1.yaml
  ```

- Create a pod:
  ```bash
  kubectl run test-color-1 --image=nginx
  ```

- Try to apply a few color labels:
  ```bash
  kubectl label pod test-color-1 color=purple
  kubectl label pod test-color-1 color=red
  kubectl label pod test-color-1 color-
  ```

]

---

## Immutable primary colors, take 2

- Next rule: once a `color` label has been added, it cannot be changed

  (i.e. if `color=red`, we can't change it to `color=blue`)

- Our approach:

  - *match* all pods

  - add a *precondition* matching pods that have a `color` label
    <br/>
    (both in their "before" and "after" states)

  - *deny* these pods if their `color` label has changed

- Again, other approaches are possible!

---

.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-2.yaml]
```
]

---

## Comparing "old" and "new"

- The fields of the webhook payload are available through `{{ request }}`

- For UPDATE requests, we can access:

  `{{ request.oldObject }}` → the object as it is right now (before the request)

  `{{ request.object }}` → the object with the changes made by the request

---

## Missing labels

- We can access the `color` label through `{{ request.object.metadata.labels.color }}`

- If we reference a label (or any field) that doesn't exist, the policy fails

  (with an error similar to `JMESPath query failed: Unknown key ... in path`)

- If a precondition fails, the policy will be skipped altogether (and ignored!)

- To work around that, [use an OR expression][non-existence-checks]:

  `{{ request.object.metadata.labels.color || '' }}`

[non-existence-checks]: https://kyverno.io/docs/policy-types/cluster-policy/jmespath/#non-existence-checks
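
To make this more concrete, here is a minimal sketch (not the actual k8s/kyverno-pod-color-2.yaml) of how a precondition and a deny condition could combine the OR expression with the old/new comparison:

```yaml
    # Only consider requests where the pod has a color label before and after.
    preconditions:
      all:
      - key: "{{ request.oldObject.metadata.labels.color || '' }}"
        operator: NotEquals
        value: ""
      - key: "{{ request.object.metadata.labels.color || '' }}"
        operator: NotEquals
        value: ""
    validate:
      message: "The label color cannot be changed."
      deny:
        conditions:
          all:
          - key: "{{ request.object.metadata.labels.color || '' }}"
            operator: NotEquals
            value: "{{ request.oldObject.metadata.labels.color || '' }}"
```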

---

## Load and try the policy

.lab[

- Load the policy:
  ```bash
  kubectl apply -f ~/container.training/k8s/kyverno-pod-color-2.yaml
  ```

- Create a pod:
  ```bash
  kubectl run test-color-2 --image=nginx
  ```

- Try to apply a few color labels:
  ```bash
  kubectl label pod test-color-2 color=purple
  kubectl label pod test-color-2 color=red
  kubectl label pod test-color-2 color=blue --overwrite
  ```

]

---

## Immutable primary colors, take 3

- Last rule: once a `color` label has been added, it cannot be removed

- Our approach is to match all pods that:

  - *had* a `color` label (in `request.oldObject`)

  - *don't have* a `color` label (in `request.object`)

- And *deny* these pods

- Again, other approaches are possible!

---

.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-3.yaml]
```
]

---

## Load and try the policy

.lab[

- Load the policy:
  ```bash
  kubectl apply -f ~/container.training/k8s/kyverno-pod-color-3.yaml
  ```

- Create a pod:
  ```bash
  kubectl run test-color-3 --image=nginx
  ```

- Try to apply a few color labels:
  ```bash
  kubectl label pod test-color-3 color=purple
  kubectl label pod test-color-3 color=red
  kubectl label pod test-color-3 color-
  ```

]

---

## Background checks

- What about the `test-color-0` pod that we created initially?

  (remember: we did set `color=purple`)

- We can see the infringing Pod in a PolicyReport

.lab[

- Check that the pod still has an "invalid" color:
  ```bash
  kubectl get pods -L color
  ```

- List PolicyReports:
  ```bash
  kubectl get policyreports
  kubectl get polr
  ```

]

(Sometimes it takes a little while for the infringement to show up, though.)

223 slides/k8s/kyverno-ingress.md (new file)
@@ -0,0 +1,223 @@

## Detecting duplicate Ingress routes

- What happens when two Ingress resources have the same host+path?

--

- Undefined behavior!

--

- Possibilities:

  - one of the Ingress rules is ignored (newer, older, lexicographic, random...)

  - both Ingress rules are ignored

  - traffic is randomly processed by both rules (sort of load balancing)

  - creation of the second resource is blocked by an admission policy

--

- Can we implement that last option with Kyverno? 🤔

---

## General idea

- When a new Ingress resource is created:

  *check if there is already an identical Ingress resource*

- We'll want to use the `apiCall` feature

  (to retrieve all existing Ingress resources across all Namespaces)

- Problem: we don't care about strict equality

  (there could be different labels, annotations, TLS configuration)

- Problem: an Ingress resource is a collection of *rules*

  (we want to check if any rule of the new Ingress...
  <br/>...conflicts with any rule of an existing Ingress)

---

## Good news, everyone

- There is an example in the Kyverno documentation!

  [Unique Ingress Host and Path][kyverno-unique-ingress]

--

- Unfortunately, the example doesn't really work

  (at least as of [Kyverno 1.16 / January 2026][kyverno-unique-ingress-github])

- Can you see problems with it?

--

- Suggestion: load the policy and make some experiments!

  (remember to switch the `validationFailureAction` to `Enforce` for easier testing)

[kyverno-unique-ingress]: https://kyverno.io/policies/other/unique-ingress-host-and-path/unique-ingress-host-and-path/
[kyverno-unique-ingress-github]: https://github.com/kyverno/policies/blob/release-1.16/other/unique-ingress-host-and-path/unique-ingress-host-and-path.yaml
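
For quick experimentation, something along these lines should work (assuming the raw manifest is available at the corresponding raw.githubusercontent.com path, and that the policy ships with `validationFailureAction: Audit`):

```bash
# Fetch the policy from the Kyverno policy library, switch it to
# enforcing mode, and load it into the cluster.
curl -fsSL https://raw.githubusercontent.com/kyverno/policies/release-1.16/other/unique-ingress-host-and-path/unique-ingress-host-and-path.yaml |
  sed 's/validationFailureAction: Audit/validationFailureAction: Enforce/' |
  kubectl apply -f -
```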

---

## Problem - no `host`

- If we try to create an Ingress without specifying the `host`:
  ```
  JMESPath query failed: Unknown key "host" in path
  ```

- In some cases, this could be a feature

  (maybe we don't want to allow Ingress rules without a `host`!)

---

## Problem - no UPDATE

- If we try to modify an existing Ingress, the modification will be blocked

- This is because the "new" Ingress rules are checked against "existing" rules

- When we CREATE a new Ingress, its rules don't exist yet (no conflict)

- When we UPDATE an existing Ingress, its rules will show up in the existing rules

- By definition, a rule will always conflict with itself

- So UPDATE requests will always be blocked

- If we exclude UPDATE operations, then it will be possible to introduce conflicts

  (by modifying existing Ingress resources to add/edit rules in them)

- This problem makes the policy useless as it is (unless we completely block updates)

---

## Problem - poor UX

- When the policy detects a conflict, it doesn't say which other resource is involved

- Sometimes, it's possible to find it manually

  (with a bunch of clever `kubectl get ingresses --all-namespaces` commands)

- Sometimes, we don't have read permissions on the conflicting resource

  (e.g. if it's in a different Namespace that we cannot access)

- It would be nice if the policy could report the exact Ingress and Namespace involved

---

## Problem - useless block

- There is a `preconditions` block to ignore `DELETE` operations

- This is useless, as the default is to match only `CREATE` and `UPDATE` requests

  (See the [documentation about match statements][kyverno-match])

- This block can be safely removed

[kyverno-match]: https://kyverno.io/docs/policy-types/cluster-policy/match-exclude/#match-statements

---

## Solution - no `host`

- In Kyverno, when doing a lookup, the way to handle non-existent keys is with a `||`

- For instance, replace `{{element.host}}` with `{{element.host||''}}`

  (or a placeholder value like `{{element.host||'NOHOST'}}`)

---

## Solution - no UPDATE

- When retrieving existing Ingress resources, we need to exclude the current one

- This can look like this:
  ```yaml
  context:
  - name: ingresses
    apiCall:
      urlPath: "/apis/networking.k8s.io/v1/ingresses"
      jmesPath: |
        items[?
          metadata.namespace!='{{request.object.metadata.namespace}}'
          ||
          metadata.name!='{{request.object.metadata.name}}'
        ]
  ```

---

## Solution - poor UX

- Ideally, when there is a conflict, we'd like to display a message like this one:
  ```
  Ingress host+path combinations must be unique across the cluster.
  This Ingress contains a rule for host 'www.example.com' and path '/',
  which conflicts with Ingress 'example' in Namespace 'default'.
  ```

- This requires a significant refactor of the policy logic

- Instead of:

  *loop on rules; filter by rule's host; find if there is any common path*

- We need e.g.:

  *loop on rules; nested loop on paths; filter ingresses with conflicts*

- This requires nested loops, and a way to access the `element` of each nested loop

---

## Nested loops

- As of January 2026, this isn't very well documented

  (author's note: I had to [dive into Kyverno's code][kyverno-nested-element] to figure it out...)

- The trick is that the outer loop's element is `element0`, the next one is `element1`, etc.

- Additionally, there is a bug in Kyverno's context handling when defining a variable in a loop

  (the variable needs to be defined at the top level, with e.g. a dummy value)

TODO: propose a PR to Kyverno's documentation! 🤓💡

[kyverno-nested-element]: https://github.com/kyverno/kyverno/blob/5d5345ec3347f4f5c281652461d42231ea3703e5/pkg/engine/context/context.go#L284
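
To illustrate the naming only, here is a very rough sketch of the shape of a nested loop (untested; `existingHostPaths` is a hypothetical variable that would be built from the `apiCall` context, and the exact nesting syntax should be checked against the Kyverno version in use):

```yaml
    validate:
      foreach:
      - list: request.object.spec.rules        # outer loop → element0
        foreach:
        - list: element.http.paths             # inner loop → element1
          deny:
            conditions:
              any:
              - key: "{{ element0.host || 'NOHOST' }}:{{ element1.path }}"
                operator: AnyIn
                value: "{{ existingHostPaths }}"   # hypothetical variable
```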

---

## Putting it all together

- Try to write a Kyverno policy to detect conflicting Ingress resources

- Make sure to test the following edge cases (some helper commands are sketched after this list):

  - rules that don't define a host (e.g. `kubectl create ingress test --rule=/=test:80`)

  - ingresses with multiple rules

  - no-op edits (e.g. adding a label or annotation)

  - conflicting edits (e.g. adding/editing a rule that adds a conflict)

  - rules for `host1/path1` and `host2/path2` shouldn't conflict with `host1/path2`

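A few commands to generate these test cases (the service names and hosts below are arbitrary placeholders):

```bash
# An Ingress rule without a host
kubectl create ingress test --rule=/=test:80

# An Ingress with multiple rules (different hosts and paths)
kubectl create ingress multi \
  --rule=host1.example.com/path1=svc1:80 \
  --rule=host2.example.com/path2=svc2:80

# Same host, different path: should NOT be reported as a conflict
kubectl create ingress other --rule=host1.example.com/path2=svc3:80

# Exact duplicate of an existing host+path: should be blocked
kubectl create ingress duplicate --rule=host1.example.com/path1=svc4:80

# No-op edit on an existing Ingress: should still be accepted
kubectl label ingress multi team=blue
```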
@@ -4,17 +4,16 @@

- It has many use cases, including:

  - validating resources when they are created/edited
    <br/>(blocking or logging violations)
  - enforcing or giving warnings about best practices or misconfigurations
    <br/>(e.g. `:latest` images, healthchecks, requests and limits...)

  - tightening security
    <br/>(possibly for multitenant clusters)

  - preventing some modifications
    <br/>(e.g. restricting modifications to some fields, labels...)

  - modifying resources automatically

  - generating resources automatically

  - clean up resources automatically
  - modifying, generating, cleaning up resources automatically

---

@@ -118,14 +117,6 @@

---

## Kyverno in action

- We're going to install Kyverno on our cluster

- Then, we will use it to implement a few policies

---

## Installing Kyverno

The recommended [installation method][install-kyverno] is to use Helm charts.

@@ -150,9 +141,9 @@ The recommended [installation method][install-kyverno] is to use Helm charts.

- Which resources does it *select?*

  - can specify resources to *match* and/or *exclude*
  - *match* and/or *exclude* resources

  - can specify *kinds* and/or *selector* and/or users/roles doing the action
  - match by *kind*, *selector*, *namespace selector*, user/roles doing the action...

- Which operation should be done?
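
As an illustration (not taken from the chapter), a `match` section selecting Pods by kind, label, and Namespace label could look like this:

```yaml
    match:
      any:
      - resources:
          kinds:
          - Pod
          selector:
            matchLabels:
              app: web
          namespaceSelector:
            matchLabels:
              environment: production
```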
@@ -164,183 +155,47 @@ The recommended [installation method][install-kyverno] is to use Helm charts.

---

## Painting pods
## Validating objects

- As an example, we'll implement a policy regarding "Pod color"
Example: [require resource requests and limits][kyverno-requests-limits].

- The color of a Pod is the value of the label `color`

- Example: `kubectl label pod hello color=yellow` to paint a Pod in yellow

- We want to implement the following policies:

  - color is optional (i.e. the label is not required)

  - if color is set, it *must* be `red`, `green`, or `blue`

  - once the color has been set, it cannot be changed

  - once the color has been set, it cannot be removed

---

## Immutable primary colors, take 1

- First, we will add a policy to block forbidden colors

  (i.e. only allow `red`, `green`, or `blue`)

- One possible approach:

  - *match* all pods that have a `color` label that is not `red`, `green`, or `blue`

  - *deny* these pods

- We could also *match* all pods, then *deny* with a condition

---

.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-1.yaml]
validate:
  message: "CPU and memory resource requests and memory limits are required."
  pattern:
    spec:
      containers:
      - resources:
          requests:
            memory: "?*"
            cpu: "?*"
          limits:
            memory: "?*"
```
]

(The full policy also has sections for `initContainers` and `ephemeralContainers`.)

[kyverno-requests-limits]: https://kyverno.io/policies/best-practices/require-pod-requests-limits/require-pod-requests-limits/

---

## Testing without the policy
## Optional fields

- First, let's create a pod with an "invalid" label
Example: [disallow `NodePort` Services][kyverno-disallow-nodeports].

  (while we still can!)

- We will use this later

.lab[

- Create a pod:
  ```bash
  kubectl run test-color-0 --image=nginx
  ```

- Apply a color label:
  ```bash
  kubectl label pod test-color-0 color=purple
  ```

]

---

## Load and try the policy

.lab[

- Load the policy:
  ```bash
  kubectl apply -f ~/container.training/k8s/kyverno-pod-color-1.yaml
  ```

- Create a pod:
  ```bash
  kubectl run test-color-1 --image=nginx
  ```

- Try to apply a few color labels:
  ```bash
  kubectl label pod test-color-1 color=purple
  kubectl label pod test-color-1 color=red
  kubectl label pod test-color-1 color-
  ```

]

---

## Immutable primary colors, take 2

- Next rule: once a `color` label has been added, it cannot be changed

  (i.e. if `color=red`, we can't change it to `color=blue`)

- Our approach:

  - *match* all pods

  - add a *precondition* matching pods that have a `color` label
    <br/>
    (both in their "before" and "after" states)

  - *deny* these pods if their `color` label has changed

- Again, other approaches are possible!

---

.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-2.yaml]
validate:
  message: "Services of type NodePort are not allowed."
  pattern:
    spec:
      =(type): "!NodePort"
```
]

---
`=(...):` means that the field is optional.

## Comparing "old" and "new"
`type: "!NodePort"` would *require* the field to exist, but be different from `NodePort`.

- The fields of the webhook payload are available through `{{ request }}`

- For UPDATE requests, we can access:

  `{{ request.oldObject }}` → the object as it is right now (before the request)

  `{{ request.object }}` → the object with the changes made by the request

---

## Missing labels

- We can access the `color` label through `{{ request.object.metadata.labels.color }}`

- If we reference a label (or any field) that doesn't exist, the policy fails

  (with an error similar to `JMESPath query failed: Unknown key ... in path`)

- If a precondition fails, the policy will be skipped altogether (and ignored!)

- To work around that, [use an OR expression][non-existence-checks]:

  `{{ request.object.metadata.labels.color || '' }}`

- Note that in older versions of Kyverno, this wasn't always necessary

  (e.g. in *preconditions*, a missing label would evaluate to an empty string)

[non-existence-checks]: https://kyverno.io/docs/policy-types/cluster-policy/jmespath/#non-existence-checks

---

## Load and try the policy

.lab[

- Load the policy:
  ```bash
  kubectl apply -f ~/container.training/k8s/kyverno-pod-color-2.yaml
  ```

- Create a pod:
  ```bash
  kubectl run test-color-2 --image=nginx
  ```

- Try to apply a few color labels:
  ```bash
  kubectl label pod test-color-2 color=purple
  kubectl label pod test-color-2 color=red
  kubectl label pod test-color-2 color=blue --overwrite
  ```

]
[kyverno-disallow-nodeports]: https://kyverno.io/policies/best-practices/restrict-node-port/restrict-node-port/

---
@@ -354,7 +209,7 @@ The recommended [installation method][install-kyverno] is to use Helm charts.

  (more on that later)

- We need to change the `failureAction` to `Enforce`
- We (very often) need to change the `failureAction` to `Enforce`

---
@@ -382,7 +237,7 @@ The recommended [installation method][install-kyverno] is to use Helm charts.

- Existing objects are not affected

  (e.g. if we have a pod with `color=pink` *before* installing our policy)
  (e.g. if we create "invalid" objects *before* installing the policy)

- Kyverno can also run checks in the background, and report violations
@@ -390,128 +245,80 @@ The recommended [installation method][install-kyverno] is to use Helm charts.

- `background: true/false` controls that

- When would we want to disable it? 🤔

---

## Accessing `AdmissionRequest` context
## Loops

- In some of our policies, we want to prevent an *update*
Example: [require image tags][kyverno-disallow-latest].

  (as opposed to a mere *create* operation)
This uses `request`, which gives access to the `AdmissionRequest` payload.

- We want to compare the *old* and *new* version
`request` has an `object` field containing the object that we're validating.

  (to check if a specific label was removed)

- The `AdmissionRequest` object has `object` and `oldObject` fields

  (the `AdmissionRequest` object is the thing that gets submitted to the webhook)

- We access the `AdmissionRequest` object through `{{ request }}`

---

## `{{ request }}`

- The `{{ request }}` context is only available when there is an `AdmissionRequest`

- When a resource is "at rest", there is no `{{ request }}` (and no old/new)

- Therefore, a policy that uses `{{ request }}` cannot validate existing objects

  (it can only be used when an object is actually created/updated/deleted)

--

- *Well, actually...*

--

- Kyverno exposes `{{ request.object }}` and `{{ request.namespace }}`

  (see [the documentation](https://kyverno.io/docs/policy-reports/background/) for details!)

---

## Immutable primary colors, take 3

- Last rule: once a `color` label has been added, it cannot be removed

- Our approach is to match all pods that:

  - *had* a `color` label (in `request.oldObject`)

  - *don't have* a `color` label (in `request.object`)

- And *deny* these pods

- Again, other approaches are possible!

---

.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-3.yaml]
validate:
  message: "An image tag is required."
  foreach:
  - list: "request.object.spec.containers"
    pattern:
      image: "*:*"
```
]

Note: again, there should also be an entry for `initContainers` and `ephemeralContainers`.

[kyverno-disallow-latest]: https://kyverno.io/policies/best-practices/disallow-latest-tag/disallow-latest-tag/

---

## Load and try the policy
class: extra-details

.lab[
## ...Or not to loop

- Load the policy:
  ```bash
  kubectl apply -f ~/container.training/k8s/kyverno-pod-color-3.yaml
  ```
Requiring image tags can also be achieved like this:

- Create a pod:
  ```bash
  kubectl run test-color-3 --image=nginx
  ```

- Try to apply a few color labels:
  ```bash
  kubectl label pod test-color-3 color=purple
  kubectl label pod test-color-3 color=red
  kubectl label pod test-color-3 color-
  ```

]
```yaml
validate:
  message: "An image tag is required."
  pattern:
    spec:
      containers:
      - image: "*:*"
      =(initContainers):
      - image: "*:*"
      =(ephemeralContainers):
      - image: "*:*"
```

---

## Background checks
## `request` and other variables

- What about the `test-color-0` pod that we created initially?
- `request` gives us access to the `AdmissionRequest` payload

  (remember: we did set `color=purple`)
- This gives us access to a bunch of interesting fields:

- We can see the infringing Pod in a PolicyReport
  `request.operation`: CREATE, UPDATE, DELETE, or CONNECT

.lab[
  `request.object`: the object being created or modified

- Check that the pod still has an "invalid" color:
  ```bash
  kubectl get pods -L color
  ```
  `request.oldObject`: the object before the modification (only for UPDATE)

- List PolicyReports:
  ```bash
  kubectl get policyreports
  kubectl get polr
  ```
  `request.userInfo`: information about the user making the API request

]
- `object` and `oldObject` are very convenient to block specific *modifications*

(Sometimes it takes a little while for the infringement to show up, though.)
  (e.g. making some labels or annotations immutable)

(See [here][kyverno-request] for details.)

[kyverno-request]: https://kyverno.io/docs/policy-types/cluster-policy/variables/#variables-from-admission-review-requests
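
For instance (an illustrative snippet, not from the chapter), a rule can be limited to UPDATE requests with a precondition on `request.operation`:

```yaml
    preconditions:
      all:
      # A default via "||" (as discussed earlier) may be needed if background scanning is enabled.
      - key: "{{ request.operation }}"
        operator: Equals
        value: UPDATE
```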

---

## Generating objects

- Let's review a fairly common use-case...

- When we create a Namespace, we also want to automatically create:

  - a LimitRange (to set default CPU and RAM requests and limits)
@@ -552,13 +359,13 @@ Note: the `apiVersion` field appears to be optional.

- Excerpt:
  ```yaml
  generate:
    kind: LimitRange
    name: default-limitrange
    namespace: "{{request.object.metadata.name}}"
    data:
      spec:
        limits:
  ```

- Note that we have to specify the `namespace`
@@ -567,11 +374,80 @@ Note: the `apiVersion` field appears to be optional.

---

## Templates and JMESPath

- We can use `{{ }}` templates in Kyverno policies

  (when generating or validating resources; in conditions, pre-conditions...)

- This lets us access `request` as well as [a few other variables][kyverno-variables]

- We can also use JMESPath expressions, for instance:

  {{request.object.spec.containers[?name=='worker'].image}}

  {{request.object.spec.[containers,initContainers][][].image}}

- To experiment with JMESPath, use e.g. [jmespath.org] or [install the kyverno CLI][kyverno-cli]

  (then use `kubectl kyverno jp query < data.json ...expression... `)

[jmespath.org]: https://jmespath.org/
[kyverno-cli]: https://kyverno.io/docs/kyverno-cli/install/
[kyverno-variables]: https://kyverno.io/docs/policy-types/cluster-policy/variables/#pre-defined-variables
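
For example (hypothetical pod name; the expression is just an illustration), an object can be piped into the CLI like this:

```bash
kubectl get pod test-color-1 -o json |
  kubectl kyverno jp query "metadata.labels.color"
```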

---

## Data sources

- It's also possible to access data in Kubernetes ConfigMaps:
  ```yaml
  context:
  - name: ingressconfig
    configMap:
      name: ingressconfig
      namespace: {{request.object.metadata.namespace}}
  ```

- And then use it e.g. in a policy generating or modifying Ingress resources:
  ```yaml
  ...
  host: {{request.object.metadata.name}}.{{ingressconfig.data.domainsuffix}}
  ...
  ```

---

## Kubernetes API calls

- It's also possible to access arbitrary Kubernetes resources through API calls:
  ```yaml
  context:
  - name: dns
    apiCall:
      urlPath: "/api/v1/namespaces/kube-system/services/kube-dns"
      jmesPath: "spec.clusterIP"
  ```

- And then use that e.g. in a mutating policy:
  ```yaml
  mutate:
    patchStrategicMerge:
      spec:
        containers:
        - (name): "*"
          env:
          - name: DNS
            value: "{{dns}}"
  ```

---

## Lifecycle

- After generated objects have been created, we can change them

  (Kyverno won't update them)
  (Kyverno won't automatically revert them)

- Except if we use `clone` together with the `synchronize` flag
@@ -579,8 +455,6 @@ Note: the `apiVersion` field appears to be optional.

- This is convenient for e.g. ConfigMaps shared between Namespaces

- Objects are generated only at *creation* (not when updating an old object)

---

class: extra-details
@@ -599,12 +473,14 @@ class: extra-details

  (in the generated object `metadata`)

- See [Linking resources with ownerReferences][ownerref] for an example
- See [Linking resources with ownerReferences][kyverno-ownerref] for an example

[ownerref]: https://kyverno.io/docs/writing-policies/generate/#linking-trigger-with-downstream
[kyverno-ownerref]: https://kyverno.io/docs/policy-types/cluster-policy/generate/#linking-trigger-with-downstream

---

class: extra-details

## Asynchronous creation

- Kyverno creates resources asynchronously
@@ -621,6 +497,30 @@ class: extra-details

---

class: extra-details

## Autogen rules for Pod validating policies

- In Kubernetes, we rarely create Pods directly

  (instead, we create controllers like Deployments, DaemonSets, Jobs, etc)

- As a result, Pod validating policies can be tricky to debug

  (the policy blocks invalid Pods, but doesn't block their controller)

- Kyverno helps us with "autogen rules"

  (when we create a Pod policy, it will automatically create policies on Pod controllers)

- This can be customized if needed; [see documentation for details][kyverno-autogen]

  (it can be disabled, or extended to Custom Resources)

[kyverno-autogen]: https://kyverno.io/docs/policy-types/cluster-policy/autogen/

---

## Footprint (current versions)

- 14 CRDs
@@ -663,45 +563,7 @@ class: extra-details

- There is also a CLI tool (not discussed here)

---

## Caveats

- The `{{ request }}` context is powerful, but difficult to validate

  (Kyverno can't know ahead of time how it will be populated)

- Advanced policies (with conditionals) have unique, exotic syntax:
  ```yaml
  spec:
    =(volumes):
      =(hostPath):
        path: "!/var/run/docker.sock"
  ```

- Writing and validating policies can be difficult

---

class: extra-details

## Pods created by controllers

- When e.g. a ReplicaSet or DaemonSet creates a pod, it "owns" it

  (the ReplicaSet or DaemonSet is listed in the Pod's `.metadata.ownerReferences`)

- Kyverno treats these Pods differently

- If my understanding of the code is correct (big *if*):

  - it skips validation for "owned" Pods

  - instead, it validates their controllers

  - this way, Kyverno can report errors on the controller instead of the pod

- This can be a bit confusing when testing policies on such pods!
- It continues to evolve and gain new features

???
@@ -77,6 +77,8 @@ content:

- #7
- k8s/admission.md
- k8s/kyverno.md
- k8s/kyverno-colors.md
- k8s/kyverno-ingress.md
- #8
- k8s/aggregation-layer.md
- k8s/metrics-server.md

@@ -156,6 +156,8 @@ content:

- k8s/kuik.md
- k8s/sealed-secrets.md
- k8s/kyverno.md
- k8s/kyverno-colors.md
- k8s/kyverno-ingress.md
- k8s/eck.md
- k8s/finalizers.md
- k8s/owners-and-dependents.md