Mirror of https://github.com/jpetazzo/container.training.git (synced 2026-02-15 18:19:56 +00:00)

Compare commits (47 commits):

1ed755407f, 731bf66122, df6976919c, e8234ebaa8, c758f8c957, 426fa67b19, ce8261c3be, c446530a16, f2a57b61be, 0411267595, 1f125775b2, 9c8b96156c, 398ec9278f, e46bed9edd, 1162aedff9, 12915b2c57, 325c14edc8, 24a74ce734, a941b313c0, 7ed0fe8fab, b63458c8e7, 625953ff84, 7da663c9e7, 5ae94306e7, c401d28dad, b48e1d6f64, ef64b83040, 3816dc43e6, 7e90a221ac, 8e72087cab, 93cc4a33fe, 072c9f3fbe, b6b5331824, 2eace3fb18, 43beed8e2d, b11221d33d, 4d6f336c7e, a53a384aed, 15023bd30a, 5c55a7453f, 15c8fe5e39, 7988e86aa2, e3c41d9422, cc99729b2b, 26c16bb73c, cb87e51c3c, 7b3ec79918
@@ -27,7 +27,7 @@ spec:
  command:
  - sh
  - -c
- - "apk update && apk add curl && curl https://github.com/jpetazzo.keys > /root/.ssh/authorized_keys"
+ - "apk update && apk add curl && curl https://github.com/bridgetkromhout.keys > /root/.ssh/authorized_keys"
  containers:
  - name: web
    image: nginx
@@ -2,6 +2,7 @@
 #/ /kube-halfday.yml.html 200
 #/ /kube-fullday.yml.html 200
 #/ /kube-twodays.yml.html 200
+/ /k8s-201.yml.html 200!

 # And this allows to do "git clone https://container.training".
 /info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack
slides/k8s-201.yml (new file, 38 lines)
@@ -0,0 +1,38 @@
|
||||
title: |
|
||||
Kubernetes 201
|
||||
Production tooling
|
||||
|
||||
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
|
||||
chat: "[Gitter](https://gitter.im/k8s-workshops/oscon2019)"
|
||||
#chat: "In person!"
|
||||
|
||||
gitrepo: github.com/jpetazzo/container.training
|
||||
|
||||
slides: https://container.training/
|
||||
|
||||
exclude:
|
||||
- self-paced
|
||||
- static-pods-exercise
|
||||
|
||||
chapters:
|
||||
- shared/title.md
|
||||
- logistics-bridget.md
|
||||
- k8s/intro.md
|
||||
- shared/about-slides.md
|
||||
- shared/toc.md
|
||||
- - k8s/prereqs-k8s201.md
|
||||
- k8s/localkubeconfig-k8s201.md
|
||||
- k8s/architecture-k8s201.md
|
||||
- - k8s/healthchecks.md
|
||||
- k8s/kubercoins-k8s201.md
|
||||
- k8s/authn-authz-k8s201.md
|
||||
- - k8s/resource-limits-k8s201.md
|
||||
- k8s/metrics-server.md
|
||||
- - k8s/cluster-sizing-k8s201.md
|
||||
- k8s/horizontal-pod-autoscaler.md
|
||||
- k8s/extending-api.md
|
||||
- k8s/helm.md
|
||||
- - k8s/lastwords-admin.md
|
||||
- k8s/links-bridget.md
|
||||
- shared/thankyou.md
|
||||
- k8s/operators.md
|
||||
slides/k8s/architecture-k8s201.md (new file, 390 lines)
@@ -0,0 +1,390 @@
|
||||
# Kubernetes architecture
|
||||
|
||||
We can arbitrarily split Kubernetes into two parts:
|
||||
|
||||
- the *nodes*, a set of machines that run our containerized workloads;
|
||||
|
||||
- the *control plane*, a set of processes implementing the Kubernetes APIs.
|
||||
|
||||
Kubernetes also relies on underlying infrastructure:
|
||||
|
||||
- servers, network connectivity (obviously!),
|
||||
|
||||
- optional components like storage systems, load balancers ...
|
||||
|
||||
---
|
||||
|
||||
## Control plane location
|
||||
|
||||
The control plane can run:
|
||||
|
||||
- in containers, on the same nodes that run other application workloads
|
||||
|
||||
(example: Minikube; 1 node runs everything)
|
||||
|
||||
- on a dedicated node
|
||||
|
||||
(example: a cluster installed with kubeadm)
|
||||
|
||||
- on a dedicated set of nodes
|
||||
|
||||
(example: Kubernetes The Hard Way; kops)
|
||||
|
||||
- outside of the cluster
|
||||
|
||||
(example: most managed clusters like AKS, EKS, GKE)
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## What runs on a node
|
||||
|
||||
- Our containerized workloads
|
||||
|
||||
- A container engine like Docker, CRI-O, containerd...
|
||||
|
||||
(in theory, the choice doesn't matter, as the engine is abstracted by Kubernetes)
|
||||
|
||||
- kubelet: an agent connecting the node to the cluster
|
||||
|
||||
(it connects to the API server, registers the node, receives instructions)
|
||||
|
||||
- kube-proxy: a component used for internal cluster communication
|
||||
|
||||
(note that this is *not* an overlay network or a CNI plugin!)
|
||||
|
||||
---
|
||||
|
||||
## What's in the control plane
|
||||
|
||||
- Everything is stored in etcd
|
||||
|
||||
(it's the only stateful component)
|
||||
|
||||
- Everyone communicates exclusively through the API server:
|
||||
|
||||
- we (users) interact with the cluster through the API server
|
||||
|
||||
- the nodes register and get their instructions through the API server
|
||||
|
||||
- the other control plane components also register with the API server
|
||||
|
||||
- API server is the only component that reads/writes from/to etcd
|
||||
|
||||
---
|
||||
|
||||
## Communication protocols: API server
|
||||
|
||||
- The API server exposes a REST API
|
||||
|
||||
(except for some calls, e.g. to attach interactively to a container)
|
||||
|
||||
- Almost all requests and responses are JSON following a strict format
|
||||
|
||||
- For performance, the requests and responses can also be done over protobuf
|
||||
|
||||
(see this [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) for details)
|
||||
|
||||
- In practice, protobuf is used for all internal communication
|
||||
|
||||
(between control plane components, and with kubelet)
|
||||
|
||||
---
|
||||
|
||||
## Communication protocols: on the nodes
|
||||
|
||||
The kubelet agent uses a number of special-purpose protocols and interfaces, including:
|
||||
|
||||
- CRI (Container Runtime Interface)
|
||||
|
||||
- used for communication with the container engine
|
||||
- abstracts the differences between container engines
|
||||
- based on gRPC+protobuf
|
||||
|
||||
- [CNI (Container Network Interface)](https://github.com/containernetworking/cni/blob/master/SPEC.md)
|
||||
|
||||
- used for communication with network plugins
|
||||
- network plugins are implemented as executable programs invoked by kubelet
|
||||
- network plugins provide IPAM
|
||||
- network plugins set up network interfaces in pods
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
# The Kubernetes API
|
||||
|
||||
[
|
||||
*The Kubernetes API server is a "dumb server" which offers storage, versioning, validation, update, and watch semantics on API resources.*
|
||||
](
|
||||
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md#proposal-and-motivation
|
||||
)
|
||||
|
||||
([Clayton Coleman](https://twitter.com/smarterclayton), Kubernetes Architect and Maintainer)
|
||||
|
||||
What does that mean?
|
||||
|
||||
---
|
||||
|
||||
## The Kubernetes API is declarative
|
||||
|
||||
- We cannot tell the API, "run a pod"
|
||||
|
||||
- We can tell the API, "here is the definition for pod X"
|
||||
|
||||
- The API server will store that definition (in etcd)
|
||||
|
||||
- *Controllers* will then wake up and create a pod matching the definition
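
For instance, a minimal (illustrative) Pod definition we could hand to the API with `kubectl apply -f`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # illustrative name
spec:
  containers:
  - name: web
    image: nginx       # the scheduler and kubelet then make this pod run
```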
|
||||
|
||||
---
|
||||
|
||||
## The core features of the Kubernetes API
|
||||
|
||||
- We can create, read, update, and delete objects
|
||||
|
||||
- We can also *watch* objects
|
||||
|
||||
(be notified when an object changes, or when an object of a given type is created)
|
||||
|
||||
- Objects are strongly typed
|
||||
|
||||
- Types are *validated* and *versioned*
|
||||
|
||||
- Storage and watch operations are provided by etcd
|
||||
|
||||
(note: the [k3s](https://k3s.io/) project allows us to use sqlite instead of etcd)
|
||||
|
||||
---
|
||||
|
||||
## Let's experiment a bit!
|
||||
|
||||
- For the exercises in this section, you'll be using `kubectl` locally and connecting to an AKS cluster
|
||||
|
||||
.exercise[
|
||||
|
||||
- Get cluster info
|
||||
```bash
|
||||
kubectl cluster-info
|
||||
```
|
||||
- Check that the cluster is operational:
|
||||
```bash
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
- All nodes should be `Ready`
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Create
|
||||
|
||||
- Let's create a simple object
|
||||
|
||||
.exercise[
|
||||
|
||||
- List existing namespaces:
|
||||
```bash
|
||||
kubectl get ns
|
||||
```
|
||||
|
||||
- Create a new namespace with the following command:
|
||||
```bash
|
||||
kubectl create -f- <<EOF
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: hello
|
||||
EOF
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
This is equivalent to `kubectl create namespace hello`.
|
||||
|
||||
---
|
||||
|
||||
## Read
|
||||
|
||||
- Let's retrieve the object we just created
|
||||
|
||||
.exercise[
|
||||
|
||||
- Read back our object:
|
||||
```bash
|
||||
kubectl get namespace hello -o yaml
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
We see a lot of data that wasn't here when we created the object.
|
||||
|
||||
Some data was automatically added to the object (like `spec.finalizers`).
|
||||
|
||||
Some data is dynamic (typically, the content of `status`.)
|
||||
|
||||
---
|
||||
|
||||
## API requests and responses
|
||||
|
||||
- Almost every Kubernetes API payload (requests and responses) has the same format:
|
||||
```yaml
|
||||
apiVersion: xxx
|
||||
kind: yyy
|
||||
metadata:
|
||||
name: zzz
|
||||
(more metadata fields here)
|
||||
(more fields here)
|
||||
```
|
||||
|
||||
- The fields shown above are mandatory, except for some special cases
|
||||
|
||||
(e.g.: in lists of resources, the list itself doesn't have a `metadata.name`)
|
||||
|
||||
- We show YAML for convenience, but the API uses JSON
|
||||
|
||||
(with optional protobuf encoding)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## API versions
|
||||
|
||||
- The `apiVersion` field corresponds to an *API group*
|
||||
|
||||
- It can be either `v1` (aka the "core" or "legacy" group), or `group/version`; e.g.:
|
||||
|
||||
- `apps/v1`
|
||||
- `rbac.authorization.k8s.io/v1`
|
||||
- `extensions/v1beta1`
|
||||
|
||||
- It does not indicate which version of Kubernetes we're talking about
|
||||
|
||||
- It *indirectly* indicates the version of the `kind`
|
||||
|
||||
(which fields exist, their format, which ones are mandatory...)
|
||||
|
||||
- A single resource type (`kind`) is rarely versioned alone
|
||||
|
||||
(e.g.: the `batch` API group contains `jobs` and `cronjobs`)
|
||||
|
||||
---
|
||||
|
||||
## Update
|
||||
|
||||
- Let's update our namespace object
|
||||
|
||||
- There are many ways to do that, including:
|
||||
|
||||
- `kubectl apply` (and provide an updated YAML file)
|
||||
- `kubectl edit`
|
||||
- `kubectl patch`
|
||||
- many helpers, like `kubectl label`, or `kubectl set`
|
||||
|
||||
- In each case, `kubectl` will:
|
||||
|
||||
- get the current definition of the object
|
||||
- compute changes
|
||||
- submit the changes (with `PATCH` requests)
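
As a minimal sketch (reusing the `hello` namespace created earlier; the label key/value are illustrative), the same kind of update can be submitted directly as a patch:

```bash
# Merge-patch the namespace to add a label (one way the kubectl helpers do it)
kubectl patch namespace hello --type=merge \
  -p '{"metadata":{"labels":{"env":"demo"}}}'
```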
|
||||
|
||||
---
|
||||
|
||||
## Adding a label
|
||||
|
||||
- For demonstration purposes, let's add a label to the namespace
|
||||
|
||||
- The easiest way is to use `kubectl label`
|
||||
|
||||
.exercise[
|
||||
|
||||
- In one terminal, watch namespaces:
|
||||
```bash
|
||||
kubectl get namespaces --show-labels -w
|
||||
```
|
||||
|
||||
- In the other, update our namespace:
|
||||
```bash
|
||||
kubectl label namespaces hello color=purple
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
We demonstrated *update* and *watch* semantics.
|
||||
|
||||
---
|
||||
|
||||
## What's special about *watch*?
|
||||
|
||||
- The API server itself doesn't do anything: it's just a fancy object store
|
||||
|
||||
- All the actual logic in Kubernetes is implemented with *controllers*
|
||||
|
||||
- A *controller* watches a set of resources, and takes action when they change
|
||||
|
||||
- Examples:
|
||||
|
||||
- when a Pod object is created, it gets scheduled and started
|
||||
|
||||
- when a Pod belonging to a ReplicaSet terminates, it gets replaced
|
||||
|
||||
- when a Deployment object is updated, it can trigger a rolling update
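
A quick way to see the *watch* verb in action (a sketch; any resource type works):

```bash
# Stream notifications as objects change, instead of polling
kubectl get pods --watch

# The same thing at the HTTP level: ?watch=true keeps the connection open
kubectl get --raw "/api/v1/namespaces/default/pods?watch=true"
```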
|
||||
|
||||
---
|
||||
|
||||
# Other control plane components
|
||||
|
||||
- API server ✔️
|
||||
|
||||
- etcd ✔️
|
||||
|
||||
- Controller manager
|
||||
|
||||
- Scheduler
|
||||
|
||||
---
|
||||
|
||||
## Controller manager
|
||||
|
||||
- This is a collection of loops watching all kinds of objects
|
||||
|
||||
- That's where the actual logic of Kubernetes lives
|
||||
|
||||
- When we create a Deployment (e.g. with `kubectl run web --image=nginx`),
|
||||
|
||||
- we create a Deployment object
|
||||
|
||||
- the Deployment controller notices it, and creates a ReplicaSet
|
||||
|
||||
- the ReplicaSet controller notices the ReplicaSet, and creates a Pod
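
As a sketch of that cascade (using `kubectl create deployment`, which exists alongside the `kubectl run` shortcut; the name `web` is illustrative):

```bash
# Create a Deployment, then watch the controllers create the ReplicaSet and Pod
kubectl create deployment web --image=nginx
kubectl get deployments,replicasets,pods -l app=web
```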
|
||||
|
||||
---
|
||||
|
||||
## Scheduler
|
||||
|
||||
- When a pod is created, it is in `Pending` state
|
||||
|
||||
- The scheduler (or rather: *a scheduler*) must bind it to a node
|
||||
|
||||
- Kubernetes comes with an efficient scheduler with many features
|
||||
|
||||
- if we have special requirements, we can add another scheduler
|
||||
<br/>
|
||||
(example: this [demo scheduler](https://github.com/kelseyhightower/scheduler) uses the cost of nodes, stored in node annotations)
|
||||
|
||||
- A pod might stay in `Pending` state for a long time:
|
||||
|
||||
- if the cluster is full
|
||||
|
||||
- if the pod has special constraints that can't be met
|
||||
|
||||
- if the scheduler is not running (!)
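
A minimal sketch to spot pods stuck in `Pending` and find out why (replace the pod name):

```bash
# List pods that haven't been bound to a node yet
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# The Events section at the bottom usually explains the reason
kubectl describe pod <pending-pod-name>
```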
|
||||
slides/k8s/authn-authz-k8s201.md (new file, 319 lines)
@@ -0,0 +1,319 @@
|
||||
# Authentication and authorization
|
||||
|
||||
*And first, a little refresher!*
|
||||
|
||||
- Authentication = verifying the identity of a person
|
||||
|
||||
On a UNIX system, we can authenticate with login+password, SSH keys ...
|
||||
|
||||
- Authorization = listing what they are allowed to do
|
||||
|
||||
On a UNIX system, this can include file permissions, sudoer entries ...
|
||||
|
||||
- Sometimes abbreviated as "authn" and "authz"
|
||||
|
||||
- In good modular systems, these things are decoupled
|
||||
|
||||
(so we can e.g. change a password or SSH key without having to reset access rights)
|
||||
|
||||
---
|
||||
|
||||
## Authentication in Kubernetes
|
||||
|
||||
- When the API server receives a request, it tries to authenticate it
|
||||
|
||||
(it examines headers, certificates... anything available)
|
||||
|
||||
- Many authentication methods are available and can be used simultaneously
|
||||
|
||||
(we will see them on the next slide)
|
||||
|
||||
- It's the job of the authentication method to produce:
|
||||
|
||||
- the user name
|
||||
- the user ID
|
||||
- a list of groups
|
||||
|
||||
- The API server doesn't interpret these; that'll be the job of *authorizers*
|
||||
|
||||
---
|
||||
|
||||
## Authentication methods
|
||||
|
||||
- TLS client certificates
|
||||
|
||||
(that's what we've been doing with `kubectl` so far)
|
||||
|
||||
- Bearer tokens
|
||||
|
||||
(a secret token in the HTTP headers of the request)
|
||||
|
||||
- [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication)
|
||||
|
||||
(carrying user and password in an HTTP header)
|
||||
|
||||
- Authentication proxy
|
||||
|
||||
(sitting in front of the API and setting trusted headers)
|
||||
|
||||
---
|
||||
|
||||
## Anonymous & unauthenticated requests
|
||||
|
||||
- If any authentication method *rejects* a request, it's denied
|
||||
|
||||
(`401 Unauthorized` HTTP code)
|
||||
|
||||
- If a request is neither rejected nor accepted by anyone, it's anonymous
|
||||
|
||||
- the user name is `system:anonymous`
|
||||
|
||||
- the list of groups is `[system:unauthenticated]`
|
||||
|
||||
- By default, the anonymous user can't do anything
|
||||
|
||||
|
||||
.exercise[
|
||||
|
||||
- Note that 401 (not 403) is what you get if you just `curl` the Kubernetes API
|
||||
```bash
|
||||
curl -k $API_URL
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Authentication with tokens
|
||||
|
||||
- Tokens are passed as HTTP headers:
|
||||
|
||||
`Authorization: Bearer and-then-here-comes-the-token`
|
||||
|
||||
- Tokens can be validated through a number of different methods:
|
||||
|
||||
- static tokens hard-coded in a file on the API server
|
||||
|
||||
- [bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) (special case to create a cluster or join nodes)
|
||||
|
||||
- [OpenID Connect tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) (to delegate authentication to compatible OAuth2 providers)
|
||||
|
||||
- service accounts (these deserve more details, coming right up!)
|
||||
|
||||
---
|
||||
|
||||
## Service accounts
|
||||
|
||||
- A service account is a user that exists in the Kubernetes API
|
||||
|
||||
(it is visible with e.g. `kubectl get serviceaccounts`)
|
||||
|
||||
- Service accounts can therefore be created / updated dynamically
|
||||
|
||||
(they don't require hand-editing a file and restarting the API server)
|
||||
|
||||
- A service account is associated with a set of secrets
|
||||
|
||||
(the kind that you can view with `kubectl get secrets`)
|
||||
|
||||
- Service accounts are generally used to grant permissions to applications, services...
|
||||
|
||||
(as opposed to humans)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Token authentication in practice
|
||||
|
||||
- We are going to list existing service accounts
|
||||
|
||||
- Then we will extract the token for a given service account
|
||||
|
||||
- And we will use that token to authenticate with the API
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Listing service accounts
|
||||
|
||||
.exercise[
|
||||
|
||||
- The resource name is `serviceaccount` or `sa` for short:
|
||||
```bash
|
||||
kubectl get sa
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
There should be just one service account in the default namespace: `default`.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Finding the secret
|
||||
|
||||
.exercise[
|
||||
|
||||
- List the secrets for the `default` service account:
|
||||
```bash
|
||||
kubectl get sa default -o yaml
|
||||
SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name)
|
||||
echo $SECRET
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
It should be named `default-token-XXXXX`.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Extracting the token
|
||||
|
||||
- The token is stored in the secret, wrapped with base64 encoding
|
||||
|
||||
.exercise[
|
||||
|
||||
- View the secret:
|
||||
```bash
|
||||
kubectl get secret $SECRET -o yaml
|
||||
```
|
||||
|
||||
- Extract the token and decode it:
|
||||
```bash
|
||||
TOKEN=$(kubectl get secret $SECRET -o json \
|
||||
| jq -r .data.token | openssl base64 -d -A)
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Using the token
|
||||
|
||||
- Let's send a request to the API, without and with the token
|
||||
|
||||
.exercise[
|
||||
|
||||
- Find the URL for the `kubernetes` master:
|
||||
```bash
|
||||
kubectl cluster-info
|
||||
```
|
||||
- Set it programmatically, assuming `AKS_NAME` is set (otherwise, pick a cluster name from `kubectl config view`):
|
||||
```bash
|
||||
API=$(kubectl config view -o \
|
||||
jsonpath="{.clusters[?(@.name==\"$AKS_NAME\")].cluster.server}")
|
||||
```
|
||||
- Connect without the token, then with the token:
|
||||
```bash
|
||||
curl -k $API
|
||||
curl -k -H "Authorization: Bearer $TOKEN" $API
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Authorization in Kubernetes
|
||||
|
||||
- There are multiple ways to grant permissions in Kubernetes, called [authorizers](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules):
|
||||
|
||||
- [Node Authorization](https://kubernetes.io/docs/reference/access-authn-authz/node/) (used internally by kubelet; we can ignore it)
|
||||
|
||||
- [Attribute-based access control](https://kubernetes.io/docs/reference/access-authn-authz/abac/) (powerful but complex and static; ignore it too)
|
||||
|
||||
- [Webhook](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) (each API request is submitted to an external service for approval)
|
||||
|
||||
- [Role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (associates permissions to users dynamically)
|
||||
|
||||
- The one we want is the last one, generally abbreviated as RBAC
|
||||
|
||||
---
|
||||
|
||||
## Role-based access control
|
||||
|
||||
- RBAC allows us to specify fine-grained permissions
|
||||
|
||||
- Permissions are expressed as *rules*
|
||||
|
||||
- A rule is a combination of:
|
||||
|
||||
- [verbs](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb) like create, get, list, update, delete...
|
||||
|
||||
- resources (as in "API resource," like pods, nodes, services...)
|
||||
|
||||
- resource names (to specify e.g. one specific pod instead of all pods)
|
||||
|
||||
- in some cases, [subresources](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources) (e.g. logs are subresources of pods)
|
||||
|
||||
---
|
||||
|
||||
## From rules to roles to rolebindings
|
||||
|
||||
- A *role* is an API object containing a list of *rules*
|
||||
|
||||
Example: role "external-load-balancer-configurator" can:
|
||||
- [list, get] resources [endpoints, services, pods]
|
||||
- [update] resources [services]
|
||||
|
||||
- A *rolebinding* associates a role with a user
|
||||
|
||||
Example: rolebinding "external-load-balancer-configurator":
|
||||
- associates user "external-load-balancer-configurator"
|
||||
- with role "external-load-balancer-configurator"
|
||||
|
||||
- Yes, there can be users, roles, and rolebindings with the same name
|
||||
|
||||
- It's a good idea for 1-1-1 bindings; not so much for 1-N ones
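
As a minimal sketch (all names here are illustrative), a rule, role, and binding can also be created imperatively:

```bash
# A role allowing read-only access to pods in the current namespace
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods

# Bind that role to the default service account of the default namespace
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader --serviceaccount=default:default
```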
|
||||
|
||||
---
|
||||
|
||||
## Cluster-scope permissions
|
||||
|
||||
- API resources Role and RoleBinding are for objects within a namespace
|
||||
|
||||
- We can also define API resources ClusterRole and ClusterRoleBinding
|
||||
|
||||
- These are a superset, allowing us to:
|
||||
|
||||
- specify actions on cluster-wide objects (like nodes)
|
||||
|
||||
- operate across all namespaces
|
||||
|
||||
- We can create Role and RoleBinding resources within a namespace
|
||||
|
||||
- ClusterRole and ClusterRoleBinding resources are global
|
||||
|
||||
---
|
||||
|
||||
## Pods and service accounts
|
||||
|
||||
- A pod can be associated with a service account
|
||||
|
||||
- by default, it is associated with the `default` service account
|
||||
|
||||
- as we saw earlier, this service account has no permissions anyway
|
||||
|
||||
- The associated token is exposed to the pod's filesystem
|
||||
|
||||
(in `/var/run/secrets/kubernetes.io/serviceaccount/token`)
|
||||
|
||||
- Standard Kubernetes tooling (like `kubectl`) will look for it there
|
||||
|
||||
- So Kubernetes tools running in a pod will automatically use the service account
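
A quick (illustrative) way to see that token from inside a pod:

```bash
# Start a throwaway pod and print the mounted service account token
kubectl run tokentest --rm -it --restart=Never --image=alpine -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token
```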
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Pod Security Policies
|
||||
|
||||
- If you'd like to check out pod-level controls in AKS, they are [available in preview](https://docs.microsoft.com/en-us/azure/aks/use-pod-security-policies)
|
||||
|
||||
- Experiment, but not in production!
|
||||
slides/k8s/cluster-sizing-k8s201.md (new file, 167 lines)
@@ -0,0 +1,167 @@
|
||||
# Cluster sizing
|
||||
|
||||
- What happens when the cluster gets full?
|
||||
|
||||
- How can we scale up the cluster?
|
||||
|
||||
- Can we do it automatically?
|
||||
|
||||
- What are other methods to address capacity planning?
|
||||
|
||||
---
|
||||
|
||||
## When are we out of resources?
|
||||
|
||||
- kubelet monitors node resources:
|
||||
|
||||
- memory
|
||||
|
||||
- node disk usage (typically the root filesystem of the node)
|
||||
|
||||
- image disk usage (where container images and RW layers are stored)
|
||||
|
||||
- For each resource, we can provide two thresholds:
|
||||
|
||||
- a hard threshold (if it's met, it provokes immediate action)
|
||||
|
||||
- a soft threshold (provokes action only after a grace period)
|
||||
|
||||
- Resource thresholds and grace periods are configurable
|
||||
|
||||
(by passing kubelet command-line flags)
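
For illustration only (the values are made up, and how kubelet flags get set depends on your installer), eviction thresholds look like this:

```bash
# Hard threshold: evict immediately; soft threshold: evict after the grace period
kubelet \
  --eviction-hard=memory.available<500Mi,nodefs.available<10% \
  --eviction-soft=memory.available<1Gi \
  --eviction-soft-grace-period=memory.available=1m30s
```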
|
||||
|
||||
---
|
||||
|
||||
## What happens then?
|
||||
|
||||
- If disk usage is too high:
|
||||
|
||||
- kubelet will try to remove terminated pods
|
||||
|
||||
- then, it will try to *evict* pods
|
||||
|
||||
- If memory usage is too high:
|
||||
|
||||
- it will try to evict pods
|
||||
|
||||
- The node is marked as "under pressure"
|
||||
|
||||
- This temporarily prevents new pods from being scheduled on the node
|
||||
|
||||
---
|
||||
|
||||
## Which pods get evicted?
|
||||
|
||||
- kubelet looks at the pods' QoS and PriorityClass
|
||||
|
||||
- First, pods with BestEffort QoS are considered
|
||||
|
||||
- Then, pods with Burstable QoS exceeding their *requests*
|
||||
|
||||
(but only if the exceeding resource is the one that is low on the node)
|
||||
|
||||
- Finally, pods with Guaranteed QoS, and Burstable pods within their requests
|
||||
|
||||
- Within each group, pods are sorted by PriorityClass
|
||||
|
||||
- If there are pods with the same PriorityClass, they are sorted by usage excess
|
||||
|
||||
(i.e. the pods whose usage exceeds their requests the most are evicted first)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Eviction of Guaranteed pods
|
||||
|
||||
- *Normally*, pods with Guaranteed QoS should not be evicted
|
||||
|
||||
- A chunk of resources is reserved for node processes (like kubelet)
|
||||
|
||||
- It is expected that these processes won't use more than this reservation
|
||||
|
||||
- If they do use more resources anyway, all bets are off!
|
||||
|
||||
- If this happens, kubelet must evict Guaranteed pods to preserve node stability
|
||||
|
||||
(or Burstable pods that are still within their requested usage)
|
||||
|
||||
---
|
||||
|
||||
## What happens to evicted pods?
|
||||
|
||||
- The pod is terminated
|
||||
|
||||
- It is marked as `Failed` at the API level
|
||||
|
||||
- If the pod was created by a controller, the controller will recreate it
|
||||
|
||||
- The pod will be recreated on another node, *if there are resources available!*
|
||||
|
||||
- For more details about the eviction process, see:
|
||||
|
||||
- [this documentation page](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/) about resource pressure and pod eviction,
|
||||
|
||||
- [this other documentation page](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) about pod priority and preemption.
|
||||
|
||||
---
|
||||
|
||||
## What if there are no resources available?
|
||||
|
||||
- Sometimes, a pod cannot be scheduled anywhere:
|
||||
|
||||
- all the nodes are under pressure,
|
||||
|
||||
- or the pod requests more resources than are available
|
||||
|
||||
- The pod then remains in `Pending` state until the situation improves
|
||||
|
||||
---
|
||||
|
||||
## Cluster scaling
|
||||
|
||||
- One way to improve the situation is to add new nodes
|
||||
|
||||
- This can be done automatically with the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
|
||||
|
||||
- The autoscaler will automatically scale up:
|
||||
|
||||
- if there are pods that failed to be scheduled
|
||||
|
||||
- The autoscaler will automatically scale down:
|
||||
|
||||
- if nodes have a low utilization for an extended period of time
|
||||
|
||||
---
|
||||
|
||||
## Restrictions, gotchas ...
|
||||
|
||||
- The Cluster Autoscaler only supports a few cloud infrastructures
|
||||
|
||||
(see [here](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider) for a list) - ([in preview for AKS](https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler))
|
||||
|
||||
- The Cluster Autoscaler cannot scale down nodes that have pods using:
|
||||
|
||||
- local storage
|
||||
|
||||
- affinity/anti-affinity rules preventing them from being rescheduled
|
||||
|
||||
- a restrictive PodDisruptionBudget
|
||||
|
||||
---
|
||||
|
||||
## Other ways to do capacity planning
|
||||
|
||||
- "Running Kubernetes without nodes"
|
||||
|
||||
- Systems like [Virtual Kubelet](https://virtual-kubelet.io/) or Kiyot can run pods using on-demand resources
|
||||
|
||||
- Virtual Kubelet can leverage e.g. ACI or Fargate to run pods
|
||||
|
||||
- Kiyot runs pods in ad-hoc EC2 instances (1 instance per pod)
|
||||
|
||||
- Economic advantage (no wasted capacity)
|
||||
|
||||
- Security advantage (stronger isolation between pods)
|
||||
|
||||
Check [this blog post](http://jpetazzo.github.io/2019/02/13/running-kubernetes-without-nodes-with-kiyot/) for more details.
|
||||
@@ -120,7 +120,7 @@
|
||||
|
||||
## Example: HTTP probe
|
||||
|
||||
Here is a pod template for the `rng` web service of the DockerCoins app:
|
||||
Here is a pod template for the `rng` web service of our DockerCoins sample app:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
||||
@@ -88,9 +88,10 @@
|
||||
kubectl expose deployment busyhttp --port=80
|
||||
```
|
||||
|
||||
- Get the ClusterIP allocated to the service:
|
||||
- Port-forward to our service
|
||||
```bash
|
||||
kubectl get svc busyhttp
|
||||
kubectl port-forward service/busyhttp 8080:80 &
|
||||
curl -k localhost:8080
|
||||
```
|
||||
|
||||
]
|
||||
@@ -99,18 +100,13 @@
|
||||
|
||||
## Monitor what's going on
|
||||
|
||||
- Let's start a bunch of commands to watch what is happening
|
||||
- Let's use some commands to watch what is happening
|
||||
|
||||
.exercise[
|
||||
|
||||
- Monitor pod CPU usage:
|
||||
```bash
|
||||
watch kubectl top pods
|
||||
```
|
||||
|
||||
- Monitor service latency:
|
||||
```bash
|
||||
httping http://`ClusterIP`/
|
||||
kubectl top pods
|
||||
```
|
||||
|
||||
- Monitor cluster events:
|
||||
@@ -124,19 +120,19 @@
|
||||
|
||||
## Send traffic to the service
|
||||
|
||||
- We will use `ab` (Apache Bench) to send traffic
|
||||
- We will use [hey](https://github.com/rakyll/hey/releases) to send traffic
|
||||
|
||||
.exercise[
|
||||
|
||||
- Send a lot of requests to the service, with a concurrency level of 3:
|
||||
- Send a lot of requests to the service with a concurrency level of 3:
|
||||
```bash
|
||||
ab -c 3 -n 100000 http://`ClusterIP`/
|
||||
curl https://storage.googleapis.com/jblabs/dist/hey_linux_v0.1.2 > hey
|
||||
chmod +x hey
|
||||
./hey http://localhost:8080 -c 3 -n 200
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
The latency (reported by `httping`) should increase above 3s.
|
||||
|
||||
The CPU utilization should increase to 100%.
|
||||
|
||||
(The server is single-threaded and won't go above 100%.)
|
||||
@@ -168,7 +164,7 @@ This can also be set with `--cpu-percent=`.
|
||||
|
||||
## What did we miss?
|
||||
|
||||
- The events stream gives us a hint, but to be honest, it's not very clear:
|
||||
- The events stream (`kubectl get events -w`) gives us a hint, but to be honest, it's not very clear:
|
||||
|
||||
`missing request for cpu`
|
||||
|
||||
@@ -194,12 +190,9 @@ This can also be set with `--cpu-percent=`.
|
||||
```
|
||||
|
||||
- In the `containers` list, add the following block:
|
||||
```yaml
|
||||
resources:
|
||||
requests:
|
||||
cpu: "1"
|
||||
```
|
||||
|
||||
resources: {"requests":{"cpu":"1", "memory":"64Mi"}}
|
||||
```
|
||||
]
|
||||
|
||||
---
|
||||
@@ -208,7 +201,7 @@ This can also be set with `--cpu-percent=`.
|
||||
|
||||
- After saving and quitting, a rolling update happens
|
||||
|
||||
(if `ab` or `httping` exits, make sure to restart it)
|
||||
(if `hey` exits, make sure to restart it)
|
||||
|
||||
- It will take a minute or two for the HPA to kick in:
|
||||
|
||||
|
||||
slides/k8s/kubercoins-k8s201.md (new file, 221 lines)
@@ -0,0 +1,221 @@
|
||||
# Deploying a sample application
|
||||
|
||||
- We will connect to our new Kubernetes cluster
|
||||
|
||||
- We will deploy a sample application, "DockerCoins"
|
||||
|
||||
- That app features multiple micro-services and a web UI
|
||||
|
||||
---
|
||||
|
||||
## Cloning some repos
|
||||
|
||||
- We will need two repositories:
|
||||
|
||||
- the first one has the "DockerCoins" demo app
|
||||
|
||||
- the second one has these slides, some scripts, more manifests ...
|
||||
|
||||
.exercise[
|
||||
|
||||
- Clone the kubercoins repository locally:
|
||||
```bash
|
||||
git clone https://github.com/jpetazzo/kubercoins
|
||||
```
|
||||
|
||||
|
||||
- Clone the container.training repository as well:
|
||||
```bash
|
||||
git clone https://@@GITREPO@@
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Running the application
|
||||
|
||||
Without further ado, let's start this application!
|
||||
|
||||
.exercise[
|
||||
|
||||
- Apply all the manifests from the kubercoins repository:
|
||||
```bash
|
||||
kubectl apply -f kubercoins/
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## What's this application?
|
||||
|
||||
--
|
||||
|
||||
- It is a DockerCoin miner! .emoji[💰🐳📦🚢]
|
||||
|
||||
--
|
||||
|
||||
- No, you can't buy coffee with DockerCoins
|
||||
|
||||
--
|
||||
|
||||
- How DockerCoins works:
|
||||
|
||||
- generate a few random bytes
|
||||
|
||||
- hash these bytes
|
||||
|
||||
- increment a counter (to keep track of speed)
|
||||
|
||||
- repeat forever!
|
||||
|
||||
--
|
||||
|
||||
- DockerCoins is *not* a cryptocurrency
|
||||
|
||||
(the only common points are "randomness", "hashing", and "coins" in the name)
|
||||
|
||||
---
|
||||
|
||||
## DockerCoins in the microservices era
|
||||
|
||||
- DockerCoins is made of 5 services:
|
||||
|
||||
- `rng` = web service generating random bytes
|
||||
|
||||
- `hasher` = web service computing hash of POSTed data
|
||||
|
||||
- `worker` = background process calling `rng` and `hasher`
|
||||
|
||||
- `webui` = web interface to watch progress
|
||||
|
||||
- `redis` = data store (holds a counter updated by `worker`)
|
||||
|
||||
- These 5 services are visible in the application's Compose file,
|
||||
[docker-compose.yml](
|
||||
https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml)
|
||||
|
||||
---
|
||||
|
||||
## How DockerCoins works
|
||||
|
||||
- `worker` invokes web service `rng` to generate random bytes
|
||||
|
||||
- `worker` invokes web service `hasher` to hash these bytes
|
||||
|
||||
- `worker` does this in an infinite loop
|
||||
|
||||
- every second, `worker` updates `redis` to indicate how many loops were done
|
||||
|
||||
- `webui` queries `redis`, and computes and exposes "hashing speed" in our browser
|
||||
|
||||
*(See diagram on next slide!)*
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Service discovery in container-land
|
||||
|
||||
How does each service find out the address of the other ones?
|
||||
|
||||
--
|
||||
|
||||
- We do not hard-code IP addresses in the code
|
||||
|
||||
- We do not hard-code FQDNs in the code, either
|
||||
|
||||
- We just connect to a service name, and container-magic does the rest
|
||||
|
||||
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
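
A quick (illustrative) way to see that DNS magic at work once DockerCoins is deployed:

```bash
# Resolve the rng service name from a throwaway pod in the same namespace
kubectl run dnstest --rm -it --restart=Never --image=alpine -- nslookup rng
```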
|
||||
|
||||
---
|
||||
|
||||
## Example in `worker/worker.py`
|
||||
|
||||
```python
|
||||
redis = Redis("`redis`")
|
||||
|
||||
|
||||
def get_random_bytes():
|
||||
r = requests.get("http://`rng`/32")
|
||||
return r.content
|
||||
|
||||
|
||||
def hash_bytes(data):
|
||||
r = requests.post("http://`hasher`/",
|
||||
data=data,
|
||||
headers={"Content-Type": "application/octet-stream"})
|
||||
```
|
||||
|
||||
(Full source code available [here](
|
||||
https://@@GITREPO@@/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
|
||||
))
|
||||
|
||||
---
|
||||
|
||||
## Show me the code!
|
||||
|
||||
- You can check the GitHub repository with all the materials of this workshop:
|
||||
<br/>https://@@GITREPO@@
|
||||
|
||||
- The application is in the [dockercoins](
|
||||
https://@@GITREPO@@/tree/master/dockercoins)
|
||||
subdirectory
|
||||
|
||||
- The Compose file ([docker-compose.yml](
|
||||
https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml))
|
||||
lists all 5 services
|
||||
|
||||
- `redis` is using an official image from the Docker Hub
|
||||
|
||||
- `hasher`, `rng`, `worker`, `webui` are each built from a Dockerfile
|
||||
|
||||
- Each service's Dockerfile and source code is in its own directory
|
||||
|
||||
(`hasher` is in the [hasher](https://@@GITREPO@@/blob/master/dockercoins/hasher/) directory,
|
||||
`rng` is in the [rng](https://@@GITREPO@@/blob/master/dockercoins/rng/)
|
||||
directory, etc.)
|
||||
|
||||
---
|
||||
|
||||
## Our application at work
|
||||
|
||||
- We can check the logs of our application's pods
|
||||
|
||||
.exercise[
|
||||
|
||||
- Check the logs of the various components:
|
||||
```bash
|
||||
kubectl logs deploy/worker
|
||||
kubectl logs deploy/hasher
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Connecting to the web UI
|
||||
|
||||
- The `webui` container exposes a web dashboard; let's view it
|
||||
|
||||
.exercise[
|
||||
|
||||
- Open a proxy to our cluster:
|
||||
```bash
|
||||
kubectl proxy &
|
||||
```
|
||||
|
||||
- Open in a web browser: [http://localhost:8001/api/v1/namespaces/default/services/webui/proxy/index.html](http://localhost:8001/api/v1/namespaces/default/services/webui/proxy/index.html)
|
||||
|
||||
]
|
||||
|
||||
A drawing area should show up, and after a few seconds, a blue
|
||||
graph will appear.
|
||||
|
||||
- If using Cloud Shell, use the [Cloud Shell Web Preview](https://docs.microsoft.com/en-us/azure/cloud-shell/using-the-shell-window#web-preview) and append `/api/v1/namespaces/default/services/webui/proxy/index.html` to the existing path
|
||||
@@ -1,10 +1,10 @@
|
||||
# Links and resources
|
||||
|
||||
- [Microsoft Learn](https://docs.microsoft.com/learn/)
|
||||
- [What is Kubernetes? by Microsoft Azure](https://aka.ms/k8slearning)
|
||||
|
||||
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
|
||||
|
||||
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)
|
||||
- [Deis Labs](https://deislabs.io) - Cloud Native Developer Tooling
|
||||
|
||||
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
|
||||
|
||||
@@ -12,4 +12,8 @@
|
||||
|
||||
- [devopsdays](https://www.devopsdays.org/)
|
||||
|
||||
.footnote[These slides (and future updates) are on → http://container.training/]
|
||||
- [Training with Jérôme](https://tinyshellscript.com/)
|
||||
|
||||
- **Please rate this session!** (with [this link](https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/76390))
|
||||
|
||||
.footnote[These slides (and future updates) are on → https://container.training/]
|
||||
|
||||
slides/k8s/localkubeconfig-k8s201.md (new file, 236 lines)
@@ -0,0 +1,236 @@
|
||||
# Controlling a Kubernetes cluster remotely
|
||||
|
||||
- `kubectl` can be used either on cluster instances or outside the cluster
|
||||
|
||||
- Since we're using [AKS](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), we'll be running `kubectl` outside the cluster
|
||||
|
||||
- We can use Azure Cloud Shell
|
||||
|
||||
- Or we can use `kubectl` from our local machine
|
||||
|
||||
---
|
||||
|
||||
## Connecting to your AKS cluster via Azure Cloud Shell
|
||||
|
||||
- open portal.azure.com in a browser
|
||||
- auth with the info on your card
|
||||
|
||||
- click `[>_]` in the top menu bar to open cloud shell
|
||||
|
||||
.exercise[
|
||||
|
||||
- get your cluster credentials:
|
||||
```bash
|
||||
RESOURCE_GROUP=$(az group list | jq -r \
|
||||
'[.[].name|select(. | startswith("Group-"))][0]')
|
||||
AKS_NAME=$(az aks list -g $RESOURCE_GROUP | jq -r '.[0].name')
|
||||
az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
- If you're going to use Cloud Shell, you can skip ahead
|
||||
|
||||
---
|
||||
class: extra-details
|
||||
|
||||
## Preserving the existing `~/.kube/config` (optional)
|
||||
|
||||
- If you already have a `~/.kube/config` file, rename it
|
||||
|
||||
(we are going to overwrite it in the following slides!)
|
||||
|
||||
- If you never used `kubectl` on your machine before: nothing to do!
|
||||
|
||||
.exercise[
|
||||
|
||||
- Make a copy of `~/.kube/config`; if you are using macOS or Linux, you can do:
|
||||
```bash
|
||||
cp ~/.kube/config ~/.kube/config.before.training
|
||||
```
|
||||
|
||||
- If you are using Windows, you will need to adapt this command
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Connecting to your AKS cluster via local tools
|
||||
|
||||
.exercise[
|
||||
|
||||
- install the [az CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
|
||||
|
||||
- log in to azure:
|
||||
```bash
|
||||
az login
|
||||
```
|
||||
|
||||
|
||||
- get your cluster credentials (requires jq):
|
||||
```bash
|
||||
RESOURCE_GROUP=$(az group list | jq -r \
|
||||
'[.[].name|select(. | startswith("Group-"))][0]')
|
||||
AKS_NAME=$(az aks list -g $RESOURCE_GROUP | jq -r '.[0].name')
|
||||
az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME
|
||||
```
|
||||
|
||||
- optionally, if you don't have kubectl:
|
||||
```bash
|
||||
az aks install-cli
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Getting started with kubectl
|
||||
|
||||
|
||||
- `kubectl` is officially available on Linux, macOS, Windows
|
||||
|
||||
(and unofficially anywhere we can build and run Go binaries)
|
||||
|
||||
- You may want to try Azure Cloud Shell if you are following along from:
|
||||
|
||||
- a tablet or phone
|
||||
|
||||
- a web-based terminal
|
||||
|
||||
- an environment where you can't install and run new binaries
|
||||
|
||||
---
|
||||
class: extra-details
|
||||
|
||||
## Installing `kubectl`
|
||||
|
||||
- If you already have `kubectl` on your local machine, you can skip this
|
||||
|
||||
.exercise[
|
||||
|
||||
<!-- ##VERSION## -->
|
||||
|
||||
- Download the `kubectl` binary from one of these links:
|
||||
|
||||
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl)
|
||||
|
|
||||
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/darwin/amd64/kubectl)
|
||||
|
|
||||
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe)
|
||||
|
||||
- On Linux and macOS, make the binary executable with `chmod +x kubectl`
|
||||
|
||||
(And remember to run it with `./kubectl` or move it to your `$PATH`)
|
||||
|
||||
]
|
||||
|
||||
Note: if you are following along on a different platform (e.g. Linux on an architecture other than amd64, or a phone or tablet), installing `kubectl` might be more complicated (or even impossible), so check with us about Cloud Shell.
|
||||
|
||||
|
||||
|
||||
---
|
||||
|
||||
## Testing `kubectl`
|
||||
|
||||
- Check that `kubectl` works correctly
|
||||
|
||||
(before even trying to connect to a remote cluster!)
|
||||
|
||||
.exercise[
|
||||
|
||||
- Ask `kubectl` to show its version number:
|
||||
```bash
|
||||
kubectl version --client
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
The output should look like this:
|
||||
```
|
||||
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0",
|
||||
GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean",
|
||||
BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc",
|
||||
Platform:"darwin/amd64"}
|
||||
```
|
||||
---
|
||||
|
||||
|
||||
## Let's look at your cluster!
|
||||
|
||||
.exercise[
|
||||
|
||||
- Scan for the `server:` address that matches the `name` of your new cluster
|
||||
```bash
|
||||
kubectl config view
|
||||
```
|
||||
|
||||
- Store the API endpoint you find:
|
||||
```bash
|
||||
API_URL=$(kubectl config view -o json | jq -r ".clusters[] \
|
||||
| select(.name == \"$AKS_NAME\") | .cluster.server")
|
||||
echo $API_URL
|
||||
```
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## What if we get a certificate error?
|
||||
|
||||
- Generally, the Kubernetes API uses a certificate that is valid for:
|
||||
|
||||
- `kubernetes`
|
||||
- `kubernetes.default`
|
||||
- `kubernetes.default.svc`
|
||||
- `kubernetes.default.svc.cluster.local`
|
||||
- the ClusterIP address of the `kubernetes` service
|
||||
- the hostname of the node hosting the control plane
|
||||
- the IP address of the node hosting the control plane
|
||||
|
||||
- On most clouds, the IP address of the node is an internal IP address
|
||||
|
||||
- ... And we are going to connect over the external IP address
|
||||
|
||||
- ... And that external IP address was not used when creating the certificate!
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Working around the certificate error
|
||||
|
||||
- We need to tell `kubectl` to skip TLS verification
|
||||
|
||||
(only do this with testing clusters, never in production!)
|
||||
|
||||
- The following command will do the trick:
|
||||
```bash
|
||||
kubectl config set-cluster $AKS_NAME --insecure-skip-tls-verify
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Checking that we can connect to the cluster
|
||||
|
||||
- We can now run a couple of trivial commands to check that all is well
|
||||
|
||||
.exercise[
|
||||
|
||||
- Check the versions of the local client and remote server:
|
||||
```bash
|
||||
kubectl version
|
||||
```
|
||||
|
||||
It is okay if you have a newer client than what is available on the server.
|
||||
|
||||
- View the nodes of the cluster:
|
||||
```bash
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
We can now use the cluster exactly as if we were logged into a node, except that we are connecting to it remotely.
|
||||
@@ -24,14 +24,20 @@
|
||||
|
||||
.exercise[
|
||||
|
||||
- Create the "green" namespace:
|
||||
- Show existing namespaces
|
||||
```bash
|
||||
kubectl get namespaces --show-labels
|
||||
```
|
||||
|
||||
- Create the "green" namespace
|
||||
```bash
|
||||
kubectl create namespace green
|
||||
```
|
||||
|
||||
- Change to that namespace:
|
||||
```bash
|
||||
kns green
|
||||
kubectl config set-context --current --namespace=green
|
||||
kubectl config view | grep namespace:
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
slides/k8s/prereqs-k8s201.md (new file, 65 lines)
@@ -0,0 +1,65 @@
|
||||
# Pre-requirements
|
||||
|
||||
- Kubernetes concepts
|
||||
|
||||
(pods, deployments, services, labels, selectors)
|
||||
|
||||
- Hands-on experience working with containers
|
||||
|
||||
(building images, running them; doesn't matter how exactly)
|
||||
|
||||
- Familiar with the UNIX command-line
|
||||
|
||||
(navigating directories, editing files, using `kubectl`)
|
||||
|
||||
---
|
||||
|
||||
## Labs and exercises
|
||||
|
||||
- We are going to explore advanced k8s concepts
|
||||
|
||||
- Everyone will get their own private environment
|
||||
|
||||
- You are invited to reproduce all the demos (but you don't have to)
|
||||
|
||||
- All hands-on sections are clearly identified, like the gray rectangle below
|
||||
|
||||
.exercise[
|
||||
|
||||
- This is the stuff you're supposed to do!
|
||||
|
||||
- Go to @@SLIDES@@ to view these slides
|
||||
|
||||
- Join the chat room: @@CHAT@@
|
||||
|
||||
<!-- ```open @@SLIDES@@``` -->
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Private environments
|
||||
|
||||
- Each person gets their own Kubernetes cluster
|
||||
|
||||
- Each person should have a printed card with connection information
|
||||
|
||||
- We will connect to these clusters with `kubectl`
|
||||
|
||||
- If you don't have `kubectl` installed, we'll explain how to install it shortly
|
||||
|
||||
- You will also want to install [jq](https://stedolan.github.io/jq/)
|
||||
|
||||
---
|
||||
|
||||
## Doing or re-doing this on your own?
|
||||
|
||||
- We are using AKS with kubectl installed locally
|
||||
|
||||
- You could use any managed k8s
|
||||
|
||||
- You could also use any cloud VMs with Ubuntu LTS and Kubernetes [packages] or [binaries] installed
|
||||
|
||||
[packages]: https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl
|
||||
|
||||
[binaries]: https://kubernetes.io/docs/setup/release/notes/#server-binaries
|
||||
slides/k8s/resource-limits-k8s201.md (new file, 527 lines)
@@ -0,0 +1,527 @@
|
||||
# Resource Limits
|
||||
|
||||
- We can attach resource indications to our pods
|
||||
|
||||
(or rather: to the *containers* in our pods)
|
||||
|
||||
- We can specify *limits* and/or *requests*
|
||||
|
||||
- We can specify quantities of CPU and/or memory
|
||||
|
||||
---
|
||||
|
||||
## CPU vs memory
|
||||
|
||||
- CPU is a *compressible resource*
|
||||
|
||||
(it can be preempted immediately without adverse effect)
|
||||
|
||||
- Memory is an *incompressible resource*
|
||||
|
||||
(it needs to be swapped out to be reclaimed; and this is costly)
|
||||
|
||||
- As a result, exceeding limits will have different consequences for CPU and memory
|
||||
|
||||
---
|
||||
|
||||
## Exceeding CPU limits
|
||||
|
||||
- CPU can be reclaimed instantaneously
|
||||
|
||||
(in fact, it is preempted hundreds of times per second, at each context switch)
|
||||
|
||||
- If a container uses too much CPU, it can be throttled
|
||||
|
||||
(it will be scheduled less often)
|
||||
|
||||
- The processes in that container will run slower
|
||||
|
||||
(or rather: they will not run faster)
|
||||
|
||||
---
|
||||
|
||||
## Exceeding memory limits
|
||||
|
||||
- Memory needs to be swapped out before being reclaimed
|
||||
|
||||
- "Swapping" means writing memory pages to disk, which is very slow
|
||||
|
||||
- On a classic system, a process that swaps can get 1000x slower
|
||||
|
||||
(because disk I/O is 1000x slower than memory I/O)
|
||||
|
||||
- Exceeding the memory limit (even by a small amount) can reduce performance *a lot*
|
||||
|
||||
- Kubernetes *does not support swap* (more on that later!)
|
||||
|
||||
- Exceeding the memory limit will cause the container to be killed
|
||||
|
||||
|
||||
|
||||
---
|
||||
|
||||
## Limits vs requests
|
||||
|
||||
- Limits are "hard limits" (they can't be exceeded)
|
||||
|
||||
- a container exceeding its memory limit is killed
|
||||
|
||||
- a container exceeding its CPU limit is throttled
|
||||
|
||||
- Requests are used for scheduling purposes
|
||||
|
||||
- a container using *less* than what it requested will never be killed or throttled
|
||||
|
||||
- the scheduler uses the requested sizes to determine placement
|
||||
|
||||
- the resources requested by all pods on a node will never exceed the node size
|
||||
|
||||
---
|
||||
|
||||
## Pod quality of service
|
||||
|
||||
Each pod is assigned a QoS class (visible in `status.qosClass`).
|
||||
|
||||
- If limits = requests:
|
||||
|
||||
- as long as the container uses less than the limit, it won't be affected
|
||||
|
||||
- if all containers in a pod have *(limits=requests)*, QoS is considered "Guaranteed"
|
||||
|
||||
- If requests < limits:
|
||||
|
||||
- as long as the container uses less than the request, it won't be affected
|
||||
|
||||
- otherwise, it might be killed/evicted if the node gets overloaded
|
||||
|
||||
- if at least one container has *(requests<limits)*, QoS is considered "Burstable"
|
||||
|
||||
- If a pod doesn't have any request nor limit, QoS is considered "BestEffort"
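
A one-liner sketch to check which class a given pod ended up in (the pod name is illustrative):

```bash
# The assigned QoS class shows up in the pod status
kubectl get pod web -o jsonpath='{.status.qosClass}{"\n"}'
```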
|
||||
|
||||
---
|
||||
|
||||
## Quality of service impact
|
||||
|
||||
- When a node is overloaded, BestEffort pods are killed first
|
||||
|
||||
- Then, Burstable pods that exceed their limits
|
||||
|
||||
- Burstable and Guaranteed pods below their limits are never killed
|
||||
|
||||
(except if their node fails)
|
||||
|
||||
- If we only use Guaranteed pods, no pod should ever be killed
|
||||
|
||||
(as long as they stay within their limits)
|
||||
|
||||
(Pod QoS is also explained in [this page](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) of the Kubernetes documentation and in [this blog post](https://medium.com/google-cloud/quality-of-service-class-qos-in-kubernetes-bb76a89eb2c6).)
|
||||
|
||||
---
|
||||
|
||||
## Where is my swap?
|
||||
|
||||
- The semantics of memory and swap limits on Linux cgroups are complex
|
||||
|
||||
- In particular, it's not possible to disable swap for a cgroup
|
||||
|
||||
(the closest option is to [reduce "swappiness"](https://unix.stackexchange.com/questions/77939/turning-off-swapping-for-only-one-process-with-cgroups))
|
||||
|
||||
- The architects of Kubernetes wanted to ensure that Guaranteed pods never swap
|
||||
|
||||
- The only solution was to disable swap entirely
|
||||
|
||||
---
|
||||
|
||||
## Alternative point of view
|
||||
|
||||
- Swap enables paging¹ of anonymous² memory
|
||||
|
||||
- Even when swap is disabled, Linux will still page memory for:
|
||||
|
||||
- executables, libraries
|
||||
|
||||
- mapped files
|
||||
|
||||
- Disabling swap *will reduce performance and available resources*
|
||||
|
||||
- For a good time, read [kubernetes/kubernetes#53533](https://github.com/kubernetes/kubernetes/issues/53533)
|
||||
|
||||
- Also read this [excellent blog post about swap](https://jvns.ca/blog/2017/02/17/mystery-swap/)
|
||||
|
||||
¹Paging: reading/writing memory pages from/to disk to reclaim physical memory
|
||||
|
||||
²Anonymous memory: memory that is not backed by files or blocks
|
||||
|
||||
---
|
||||
|
||||
## Enabling swap anyway
|
||||
|
||||
- If you don't care that pods are swapping, you can enable swap
|
||||
|
||||
- You will need to add the flag `--fail-swap-on=false` to kubelet
|
||||
|
||||
(otherwise, it won't start!)
|
||||
|
||||
---
|
||||
|
||||
## Specifying resources
|
||||
|
||||
- Resource requests are expressed at the *container* level
|
||||
|
||||
- CPU is expressed in "virtual CPUs"
|
||||
|
||||
(corresponding to the virtual CPUs offered by some cloud providers)
|
||||
|
||||
- CPU can be expressed with a decimal value, or even a "milli" suffix
|
||||
|
||||
(so 100m = 0.1)
|
||||
|
||||
- Memory is expressed in bytes
|
||||
|
||||
- Memory can be expressed with k, M, G, T, ki, Mi, Gi, Ti suffixes
|
||||
|
||||
(corresponding to 10^3, 10^6, 10^9, 10^12, 2^10, 2^20, 2^30, 2^40)
|
||||
|
||||
---
|
||||
|
||||
## Specifying resources in practice
|
||||
|
||||
This is what the spec of a Pod with resources will look like:
|
||||
|
||||
```yaml
|
||||
containers:
|
||||
- name: httpenv
|
||||
image: jpetazzo/httpenv
|
||||
resources:
|
||||
limits:
|
||||
memory: "100Mi"
|
||||
cpu: "100m"
|
||||
requests:
|
||||
memory: "100Mi"
|
||||
cpu: "10m"
|
||||
```
|
||||
|
||||
This set of resources makes sure that this service won't be killed (as long as it stays below 100 MB of RAM), but allows its CPU usage to be throttled if necessary.
|
||||
|
||||
---
|
||||
|
||||
## Default values
|
||||
|
||||
- If we specify a limit without a request:
|
||||
|
||||
the request is set to the limit
|
||||
|
||||
- If we specify a request without a limit:
|
||||
|
||||
there will be no limit
|
||||
|
||||
(which means that the limit will be the size of the node)
|
||||
|
||||
- If we don't specify anything:
|
||||
|
||||
the request is zero and the limit is the size of the node
|
||||
|
||||
*Unless there are default values defined for our namespace!*
|
||||
|
||||
---

## We need default resource values

- If we do not set resource values at all:

  - the limit is "the size of the node"

  - the request is zero

- This is generally *not* what we want

  - a container without a limit can use up all the resources of a node

  - if the request is zero, the scheduler can't make a smart placement decision

- To address this, we can set default values for resources

- This is done with a LimitRange object

---

# Defining min, max, and default resources

- We can create LimitRange objects to indicate any combination of:

  - min and/or max resources allowed per pod

  - default resource *limits*

  - default resource *requests*

  - maximal burst ratio (*limit/request*)

- LimitRange objects are namespaced

- They apply to their namespace only

---

## LimitRange example

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: my-very-detailed-limitrange
spec:
  limits:
  - type: Container
    min:
      cpu: "100m"
    max:
      cpu: "2000m"
      memory: "1Gi"
    default:
      cpu: "500m"
      memory: "250Mi"
    defaultRequest:
      cpu: "500m"
```

---

## Example explanation

The YAML on the previous slide shows an example LimitRange object specifying very detailed limits on CPU usage,
and providing defaults on RAM usage.

Note the `type: Container` line: in the future,
it might also be possible to specify limits
per Pod, but it's not [officially documented yet](https://github.com/kubernetes/website/issues/9585).

---

## LimitRange details

- LimitRange restrictions are enforced only when a Pod is created

  (they don't apply retroactively)

- They don't prevent creation of e.g. an invalid Deployment or DaemonSet

  (but the pods will not be created as long as the LimitRange is in effect)

- If there are multiple LimitRange restrictions, they all apply together

  (which means that it's possible to specify conflicting LimitRanges,
  <br/>preventing any Pod from being created)

- If a LimitRange specifies a `max` for a resource but no `default`,
  <br/>that `max` value becomes the `default` limit too

---

# Namespace quotas

- We can also set quotas per namespace

- Quotas apply to the total usage in a namespace

  (e.g. total CPU limits of all pods in a given namespace)

- Quotas can apply to resource limits and/or requests

  (like the CPU and memory limits that we saw earlier)

- Quotas can also apply to other resources:

  - "extended" resources (like GPUs)

  - storage size

  - number of objects (number of pods, services...)

---

## Creating a quota for a namespace

- Quotas are enforced by creating a ResourceQuota object

- ResourceQuota objects are namespaced, and apply to their namespace only

- We can have multiple ResourceQuota objects in the same namespace

- The most restrictive values are used

---

## Limiting total CPU/memory usage

- The following YAML specifies an upper bound for *limits* and *requests*:
  ```yaml
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: a-little-bit-of-compute
  spec:
    hard:
      requests.cpu: "10"
      requests.memory: 10Gi
      limits.cpu: "20"
      limits.memory: 20Gi
  ```

These quotas will apply to the namespace where the ResourceQuota is created.

---

## Limiting number of objects

- The following YAML specifies how many objects of specific types can be created:
  ```yaml
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: quota-for-objects
  spec:
    hard:
      pods: 100
      services: 10
      secrets: 10
      configmaps: 10
      persistentvolumeclaims: 20
      services.nodeports: 0
      services.loadbalancers: 0
      count/roles.rbac.authorization.k8s.io: 10
  ```

(The `count/` syntax allows limiting arbitrary objects, including CRDs.)

---

## YAML vs CLI

- Quotas can be created with a YAML definition

- ...Or with the `kubectl create quota` command

.exercise[

- Create the following quota:
  ```bash
  kubectl create quota my-resource-quota \
        --hard=pods=300,limits.memory=300Gi

  ```

]

- In both the YAML and CLI forms, the values always go under the `hard` section

  (there is no `soft` quota)

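For reference, a YAML definition roughly equivalent to the command above would look like this (a sketch; the name and values mirror the CLI example):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-resource-quota
spec:
  hard:
    pods: "300"
    limits.memory: 300Gi
```
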
---

## Viewing current usage

When a ResourceQuota is created, we can see how much of it is used:

.exercise[

- Check how much of the ResourceQuota is used:
  ```bash
  kubectl describe resourcequota my-resource-quota

  ```

- Remove quota:
  ```bash
  kubectl delete quota my-resource-quota

  ```

]

---

## Advanced quotas and PriorityClass

- Since Kubernetes 1.12, it is possible to create PriorityClass objects

- Pods can be assigned a PriorityClass

- Quotas can be linked to a PriorityClass

- This allows us to reserve resources for pods within a namespace

- For more details, check [this documentation page](https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass)

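---

## Quota scoped to a PriorityClass (example)

Here is a sketch (based on the documentation page linked on the previous slide) of a ResourceQuota that only counts pods having a specific PriorityClass; the class name and all values are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-high-priority
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]    # assumes a PriorityClass named "high" exists
```
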
---

# Limiting resources in practice

- We have at least three mechanisms:

  - requests and limits per Pod

  - LimitRange per namespace

  - ResourceQuota per namespace

- Let's see a simple recommendation to get started with resource limits

---

## Set a LimitRange

- In each namespace, create a LimitRange object

- Set a small default CPU request and CPU limit

  (e.g. "100m")

- Set a default memory request and limit depending on your most common workload

  - for Java, Ruby: start with "1G"

  - for Go, Python, PHP, Node: start with "250M"

- Set upper bounds slightly below your expected node size

  (80-90% of your node size, with at least a 500M memory buffer)

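---

## Starter LimitRange (example)

This is only a sketch of a LimitRange implementing the recommendations on the previous slide, assuming mostly Go/Python/Node workloads and nodes with about 4 GB of RAM; adjust every value to your own workloads and node sizes:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources
spec:
  limits:
  - type: Container
    defaultRequest:   # applied when a container specifies no request
      cpu: "100m"
      memory: "250M"
    default:          # applied when a container specifies no limit
      cpu: "100m"
      memory: "250M"
    max:              # upper bound, slightly below the node size
      cpu: "2"
      memory: "3G"
```
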
---

## Set a ResourceQuota

- In each namespace, create a ResourceQuota object

- Set generous CPU and memory limits

  (e.g. half the cluster size if the cluster hosts multiple apps)

- Set generous object limits

  - these limits should not be here to constrain your users

  - they should catch a runaway process creating many resources

  - example: a custom controller creating many pods

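---

## Starter ResourceQuota (example)

Again, only a sketch: it assumes a cluster with roughly 40 cores and 80 GB of RAM shared by a few applications, so each namespace gets about half of the cluster; all numbers are illustrative and should be adapted:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
spec:
  hard:
    requests.cpu: "20"      # about half the cluster's CPU
    requests.memory: 40Gi   # about half the cluster's memory
    limits.cpu: "40"
    limits.memory: 80Gi
    pods: "500"             # not to constrain users, but to catch runaway controllers
```
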
---

## Observe, refine, iterate

- Observe the resource usage of your pods

  (we will see how in the next chapter)

- Adjust individual pod limits

- If you see trends: adjust the LimitRange

  (rather than adjusting every individual set of pod limits)

- Observe the resource usage of your namespaces

  (with `kubectl describe resourcequota ...`)

- Rinse and repeat regularly

@@ -3,14 +3,32 @@
- Hello! We are:

  - .emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))
  - .emoji[☁️] Aaron ([@as_w](https://twitter.com/as_w))

  - .emoji[🌟] Joe ([@joelaha](https://twitter.com/joelaha))

- The workshop will run from 13:30-16:45
--

- There will be a break from 15:00-15:15
- We encourage networking at #oscon

- Take a minute to introduce yourself to your neighbors

  - What company or organization are you from? Where are you based?

  - Share what you're hoping to learn in this session! .emoji[✨]

---
## Logistics

- The tutorial will run from 1:30pm-5:00pm

- There will be a break from 3:10pm-3:40pm

- This means we start with 1hr 40min, then 30min break, then 1hr 20min.

- Feel free to interrupt for questions at any time

- *Especially when you see full screen container pictures!*

- Live feedback, questions, help: @@CHAT@@

@@ -11,7 +11,7 @@ class: title, in-person
@@TITLE@@<br/></br>

.footnote[
**Be kind to the WiFi!**<br/>
**Be kind to the WiFi!** (Network Name: OReillyCon19, Password: oscon2019)<br/>
<!-- *Use the 5G network.* -->
*Don't use your hotspot.*<br/>
*Don't stream videos or download big files during the workshop[.](https://www.youtube.com/watch?v=h16zyxiwDLY)*<br/>