Edits for aks

This commit is contained in:
Bridget Kromhout
2019-07-14 16:24:26 -07:00
parent 7da663c9e7
commit b63458c8e7


@@ -58,7 +58,7 @@
---
## Anonymous & unauthenticated requests
- If any authentication method *rejects* a request, it's denied
@@ -72,93 +72,16 @@
- By default, the anonymous user can't do anything
(that's what you get if you just `curl` the Kubernetes API: a 401, not a 403)
---
## Authentication with TLS certificates
- This is enabled in most Kubernetes deployments
- The user name is derived from the `CN` field in the client certificate
- The groups are derived from the `O` fields in the client certificate
- From the point of view of the Kubernetes API, users do not exist
(i.e. they are not stored in etcd or anywhere else)
- Users can be created (and added to groups) independently of the API
- The Kubernetes API can be set up to use your custom CA to validate client certs
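As a quick sketch, here is how a client certificate for a hypothetical user could be issued with `openssl` (the user name, group, and CA are made up for illustration; in a real cluster, we would sign with the cluster's CA key pair, e.g. the files in `/etc/kubernetes/pki` on a kubeadm control plane):

```bash
# Create a stand-in CA (in a real cluster, use the cluster CA instead)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/CN=demo-ca" -keyout ca.key -out ca.crt

# Generate a key and a CSR for hypothetical user "alice" in group "devs"
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -subj "/CN=alice/O=devs" -out alice.csr

# Sign the CSR with the CA; the resulting certificate authenticates alice
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -days 1 -out alice.crt

# Check the Subject: CN gives the user name, O gives the group
openssl x509 -in alice.crt -noout -subject
```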
---
class: extra-details
## Viewing our admin certificate
- Let's inspect the certificate we've been using all this time!
.exercise[
- This command will show the `CN` and `O` fields for our certificate:
```bash
kubectl config view \
--raw \
-o json \
| jq -r .users[0].user[\"client-certificate-data\"] \
| openssl base64 -d -A \
| openssl x509 -text \
| grep Subject:
```
]
Let's break down that command together! 😅
---
class: extra-details
## Breaking down the command
- `kubectl config view` shows the Kubernetes user configuration
- `--raw` includes certificate information (which shows as REDACTED otherwise)
- `-o json` outputs the information in JSON format
- `| jq ...` extracts the field with the user certificate (in base64)
- `| openssl base64 -d -A` decodes the base64 format (now we have a PEM file)
- `| openssl x509 -text` parses the certificate and outputs it as plain text
- `| grep Subject:` shows us the line that interests us
→ We are user `kubernetes-admin`, in group `system:masters`.
(We will see later how and why this gives us the permissions that we have.)
---
## User certificates in practice
- The Kubernetes API server does not support certificate revocation
(see issue [#18982](https://github.com/kubernetes/kubernetes/issues/18982))
- As a result, we don't have an easy way to terminate someone's access
(if their key is compromised, or they leave the organization)
- Option 1: re-create a new CA and re-issue everyone's certificates
<br/>
→ Maybe OK if we only have a few users; unworkable otherwise
- Option 2: don't use groups; grant permissions to individual users
<br/>
→ Inconvenient if we have many users and teams; error-prone
- Option 3: issue short-lived certificates (e.g. 24 hours) and renew them often
<br/>
→ This can be facilitated by e.g. Vault or by the Kubernetes CSR API
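With the CSR API, issuing such a certificate boils down to submitting a CertificateSigningRequest object; a minimal sketch (the name is a hypothetical user, and the `request` field is a placeholder for a base64-encoded PEM CSR):

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: alice          # hypothetical user
spec:
  request: LS0tLS1...  # base64-encoded PEM CSR (placeholder)
  usages:
  - digital signature
  - key encipherment
  - client auth
```

Once an administrator approves the request (`kubectl certificate approve alice`), the signed certificate appears in the object's `status.certificate` field.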
---
## Authentication with tokens
@@ -238,6 +161,7 @@ class: extra-details
```bash
kubectl get sa default -o yaml
SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name)
echo $SECRET
```
]
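The `jq` filter above works on any JSON document; here is a minimal local illustration of what it extracts, using a made-up secret name (no cluster needed):

```bash
# A service account object carries a list of secrets;
# the filter picks the name of the first one
echo '{"secrets":[{"name":"default-token-abcde"}]}' \
  | jq -r '.secrets[0].name'
```

(This prints `default-token-abcde`.)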
@@ -296,18 +220,6 @@ class: extra-details
---
class: extra-details
## Results
- Without authentication, the user is `system:anonymous`
- With authentication, it is shown as `system:serviceaccount:default:default`
- The API "sees" us as a different user
---
## Authorization in Kubernetes
- There are multiple ways to grant permissions in Kubernetes, called [authorizers](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules):
@@ -398,264 +310,6 @@ class: extra-details
---
## In practice
- We are going to create a service account
- We will use a default cluster role (`view`)
- We will bind together this role and this service account
- Then we will run a pod using that service account
- In this pod, we will install `kubectl` and check our permissions
---
## Creating a service account
- We will call the new service account `viewer`
(note that nothing prevents us from calling it `view`, like the role)
.exercise[
- Create the new service account:
```bash
kubectl create serviceaccount viewer
```
- List service accounts now:
```bash
kubectl get serviceaccounts
```
]
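For the record, the imperative command above is equivalent to applying a manifest like this one:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: viewer
```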
---
## Binding a role to the service account
- Binding a role = creating a *rolebinding* object
- We will call that object `viewercanview`
(but again, we could call it `view`)
.exercise[
- Create the new role binding:
```bash
kubectl create rolebinding viewercanview \
--clusterrole=view \
--serviceaccount=default:viewer
```
]
It's important to note a couple of details in these flags...
---
## Roles vs Cluster Roles
- We used `--clusterrole=view`
- What would have happened if we had used `--role=view`?
- we would have bound the role `view` from the local namespace
<br/>(instead of the cluster role `view`)
- the command would have worked fine (no error)
- but later, our API requests would have been denied
- This is a deliberate design decision
(we can reference roles that don't exist, and create/update them later)
---
## Users vs Service Accounts
- We used `--serviceaccount=default:viewer`
- What would have happened if we had used `--user=default:viewer`?
- we would have bound the role to a user instead of a service account
- again, the command would have worked fine (no error)
- ...but our API requests would have been denied later
- What about the `default:` prefix?
- that's the namespace of the service account
- yes, it could be inferred from context, but... `kubectl` requires it
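The object created by `kubectl create rolebinding` makes both distinctions explicit; its manifest looks like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: viewercanview
  namespace: default
roleRef:                  # --clusterrole=view → kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:                 # --serviceaccount=default:viewer
- kind: ServiceAccount
  name: viewer
  namespace: default
```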
---
## Testing
- We will run an `alpine` pod and install `kubectl` there
.exercise[
- Run a one-time pod:
```bash
kubectl run eyepod --rm -ti --restart=Never \
--serviceaccount=viewer \
--image alpine
```
- Install `curl`, then use it to install `kubectl`:
```bash
apk add --no-cache curl
URLBASE=https://storage.googleapis.com/kubernetes-release/release
KUBEVER=$(curl -s $URLBASE/stable.txt)
curl -LO $URLBASE/$KUBEVER/bin/linux/amd64/kubectl
chmod +x kubectl
```
]
---
## Running `kubectl` in the pod
- We'll try to use our `view` permissions, then to create an object
.exercise[
- Check that we can, indeed, view things:
```bash
./kubectl get all
```
- But that we can't create things:
```
./kubectl create deployment testrbac --image=nginx
```
- Exit the container with `exit` or `^D`
<!-- ```keys ^D``` -->
]
- We will see that the pod terminated with an error
---
## Testing directly with `kubectl`
- We can also check for permission with `kubectl auth can-i`:
```bash
kubectl auth can-i list nodes
kubectl auth can-i create pods
kubectl auth can-i get pod/name-of-pod
kubectl auth can-i get /url-fragment-of-api-request/
kubectl auth can-i '*' services
```
- And we can check permissions on behalf of other users:
```bash
kubectl auth can-i list nodes \
--as some-user
kubectl auth can-i list nodes \
--as system:serviceaccount:<namespace>:<name-of-service-account>
```
---
class: extra-details
## Where does this `view` role come from?
- Kubernetes defines a number of ClusterRoles intended to be bound to users
- `cluster-admin` can do *everything* (think `root` on UNIX)
- `admin` can do *almost everything* (except e.g. changing resource quotas and limits)
- `edit` is similar to `admin`, but cannot view or edit permissions
- `view` has read-only access to most resources, except permissions and secrets
*In many situations, these roles will be all you need.*
*You can also customize them!*
---
class: extra-details
## Customizing the default roles
- If you need to *add* permissions to these default roles (or others),
<br/>
you can do it through the [ClusterRole Aggregation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) mechanism
- This happens by creating a ClusterRole with the following labels:
```yaml
metadata:
labels:
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
```
- This ClusterRole's permissions will be added to `admin`/`edit`/`view` respectively
- This is particularly useful when using CustomResourceDefinitions
(since Kubernetes cannot guess which resources are sensitive and which ones aren't)
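For example, here is a hypothetical ClusterRole granting read access to a custom resource and aggregating it into `view` (the API group and resource names are made up):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-widgets
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["example.com"]       # hypothetical CRD group
  resources: ["widgets"]           # hypothetical custom resource
  verbs: ["get", "list", "watch"]
```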
---
class: extra-details
## Where do our permissions come from?
- When interacting with the Kubernetes API, we are using a client certificate
- We saw previously that this client certificate contained:
`CN=kubernetes-admin` and `O=system:masters`
- Let's look for these in existing ClusterRoleBindings:
```bash
kubectl get clusterrolebindings -o yaml |
grep -e kubernetes-admin -e system:masters
```
(`system:masters` should show up, but not `kubernetes-admin`.)
- Where does this match come from?
---
class: extra-details
## The `system:masters` group
- If we eyeball the output of `kubectl get clusterrolebindings -o yaml`, we'll find out!
- It is in the `cluster-admin` binding:
```bash
kubectl describe clusterrolebinding cluster-admin
```
- This binding associates `system:masters` with the cluster role `cluster-admin`
- And the `cluster-admin` role is, basically, `root`:
```bash
kubectl describe clusterrole cluster-admin
```
---
class: extra-details
## Pod Security Policies