moar wording tweaks

AJ Bowen
2019-06-09 22:25:57 -07:00
parent 093cfd1c24
commit 7a63dfb0cf
7 changed files with 35 additions and 35 deletions

View File

@@ -86,7 +86,7 @@ like Windows, macOS, Solaris, FreeBSD ...
* No notion of image (container filesystems have to be managed manually).
-* Networking has to be setup manually.
+* Networking has to be set up manually.
---
@@ -112,7 +112,7 @@ like Windows, macOS, Solaris, FreeBSD ...
* Strong emphasis on security (through privilege separation).
-* Networking has to be setup separately (e.g. through CNI plugins).
+* Networking has to be set up separately (e.g. through CNI plugins).
* Partial image management (pull, but no push).
@@ -152,7 +152,7 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).
* Basic image support (tar archives and raw disk images).
-* Network has to be setup manually.
+* Network has to be set up manually.
---

View File

@@ -117,7 +117,7 @@ Examples:
## Admission controllers
-- When a Pod is created, it is associated to a ServiceAccount
+- When a Pod is created, it is associated with a ServiceAccount
(even if we did not specify one explicitly)
@@ -163,7 +163,7 @@ class: pic
- These webhooks can be *validating* or *mutating*
-- Webhooks can be setup dynamically (without restarting the API server)
+- Webhooks can be set up dynamically (without restarting the API server)
- To setup a dynamic admission webhook, we create a special resource:
@@ -171,7 +171,7 @@ class: pic
- These resources are created and managed like other resources
-(i.e. `kubectl create`, `kubectl get` ...)
+(i.e. `kubectl create`, `kubectl get`...)
---
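Dynamic admission webhook configurations really are managed like any other resource; for instance (assuming a running cluster, and a hypothetical webhook name):

```shell
# List dynamic admission webhook configurations, like any other resource
kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations

# Inspect one in detail ("my-webhook" is a hypothetical name)
kubectl describe validatingwebhookconfiguration my-webhook
```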

View File

@@ -6,15 +6,15 @@
- Horizontal scaling = changing the number of replicas
-(adding / removing pods)
+(adding/removing pods)
- Vertical scaling = changing the size of individual replicas
-(increasing / reducing CPU and RAM per pod)
+(increasing/reducing CPU and RAM per pod)
- Cluster scaling = changing the size of the cluster
-(adding / removing nodes)
+(adding/removing nodes)
---
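The three kinds of scaling can be sketched with kubectl (Deployment name "web" is hypothetical; a running cluster is assumed):

```shell
# Horizontal scaling: change the number of replicas of a Deployment
kubectl scale deployment web --replicas=5

# Vertical scaling: change the resources requested by each pod
kubectl set resources deployment web --requests=cpu=200m,memory=256Mi

# Cluster scaling: add/remove nodes (the method depends on the platform;
# on managed clusters it is typically a cloud provider operation)
kubectl get nodes
```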
@@ -50,9 +50,9 @@
- The latter actually makes a lot of sense:
-- if a Pod doesn't have a CPU request, it might be using 10% of CPU ...
+- if a Pod doesn't have a CPU request, it might be using 10% of CPU...
-- ... but only because there is no CPU time available!
+- ...but only because there is no CPU time available!
- this makes sure that we won't add pods to nodes that are already resource-starved
@@ -238,7 +238,7 @@ This can also be set with `--cpu-percent=`.
- Kubernetes doesn't implement any of these API groups
-- Using these metrics requires to [register additional APIs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis)
+- Using these metrics requires [registering additional APIs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis)
- The metrics provided by metrics server are standard; everything else is custom
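One way to see which metrics API groups are registered, and to create an HPA on the standard CPU metric (Deployment name "web" is hypothetical; a running cluster is assumed):

```shell
# metrics.k8s.io comes from metrics-server;
# custom.metrics.k8s.io / external.metrics.k8s.io require additional adapters
kubectl get apiservices | grep metrics

# Create an HPA using the standard CPU metric
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
```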

View File

@@ -307,7 +307,7 @@ This policy selects all pods in the current namespace.
It allows traffic only from pods in the current namespace.
-(An empty `podSelector` means "all pods".)
+(An empty `podSelector` means "all pods.")
```yaml
kind: NetworkPolicy
@@ -329,7 +329,7 @@ This policy selects all pods with label `app=webui`.
It allows traffic from any source.
-(An empty `from` fields means "all sources".)
+(An empty `from` field means "all sources.")
```yaml
kind: NetworkPolicy
@@ -412,7 +412,7 @@ troubleshoot easily, without having to poke holes in our firewall.
- If we block access to the control plane, we might disrupt legitimate code
-- ... Without necessarily improving security
+- ...Without necessarily improving security
---
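Putting the two "empty selector" rules together, a minimal self-contained policy might look like this (policy name "allow-all" and the namespace are illustrative; a running cluster is assumed):

```shell
# An empty podSelector selects all pods in the namespace;
# an empty ingress rule allows traffic from all sources.
kubectl apply -f - <<'EOF'
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all
spec:
  podSelector: {}
  ingress:
  - {}
EOF
```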

View File

@@ -49,7 +49,7 @@
kubectl create deployment web --image=nginx
```
-- Confirm that the Deployment, ReplicaSet, and Pod exist, and Pod is running:
+- Confirm that the Deployment, ReplicaSet, and Pod exist, and that the Pod is running:
```bash
kubectl get all
```
@@ -163,7 +163,7 @@
- If we create a Pod directly, it can use a PSP to which *we* have access
- If the Pod is created by e.g. a ReplicaSet or DaemonSet, it's different:
- the ReplicaSet / DaemonSet controllers don't have access to *our* policies
- therefore, we need to give access to the PSP to the Pod's ServiceAccount
@@ -178,7 +178,7 @@
- Then we will create a couple of PodSecurityPolicies
-- ... And associated ClusterRoles (giving `use` access to the policies)
+- ...And associated ClusterRoles (giving `use` access to the policies)
- Then we will create RoleBindings to grant these roles to ServiceAccounts
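A sketch of that RBAC wiring, assuming a PodSecurityPolicy named "restricted" and a namespace "myns" (both names hypothetical):

```shell
# ClusterRole granting "use" on the PSP named "restricted"
kubectl create clusterrole psp-restricted \
  --verb=use --resource=podsecuritypolicies --resource-name=restricted

# RoleBinding granting that role to the namespace's default ServiceAccount
kubectl create rolebinding psp-restricted-default \
  --clusterrole=psp-restricted \
  --serviceaccount=myns:default --namespace=myns
```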

View File

@@ -20,7 +20,7 @@
- We don't endorse Prometheus more or less than any other system
-- It's relatively well integrated within the Cloud Native ecosystem
+- It's relatively well integrated within the cloud-native ecosystem
- It can be self-hosted (this is useful for tutorials like this)
@@ -182,7 +182,7 @@ We need to:
- Run the *node exporter* on each node (with a Daemon Set)
-- Setup a Service Account so that Prometheus can query the Kubernetes API
+- Set up a Service Account so that Prometheus can query the Kubernetes API
- Configure the Prometheus server
@@ -250,7 +250,7 @@ class: extra-details
## Explaining all the Helm flags
-- `helm upgrade prometheus` → upgrade release "prometheus" to the latest version ...
+- `helm upgrade prometheus` → upgrade release "prometheus" to the latest version...
(a "release" is a unique name given to an app deployed with Helm)
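A full invocation using these flags might look like this (the chart reference `stable/prometheus` and the namespace are illustrative, and depend on your configured chart repositories):

```shell
# Upgrade the release "prometheus" — or install it first, thanks to --install
helm upgrade prometheus stable/prometheus \
  --install \
  --namespace kube-system
```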
@@ -288,7 +288,7 @@ class: extra-details
## Querying some metrics
-- This is easy ... if you are familiar with PromQL
+- This is easy... if you are familiar with PromQL
.exercise[
@@ -433,9 +433,9 @@ class: extra-details
- I/O activity (disk, network), per operation or volume
-- Physical/hardware (when applicable): temperature, fan speed ...
+- Physical/hardware (when applicable): temperature, fan speed...
-- ... and much more!
+- ...and much more!
---
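As a sketch, such node metrics can be queried through the Prometheus HTTP API (this assumes Prometheus is reachable on localhost:9090, e.g. via `kubectl port-forward`; `node_cpu_seconds_total` is exposed by the node exporter):

```shell
# Per-node CPU usage over the last 5 minutes, summed across non-idle modes
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))'
```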
@@ -448,7 +448,7 @@ class: extra-details
- RAM breakdown will be different
- active vs inactive memory
-- some memory is *shared* between containers, and accounted specially
+- some memory is *shared* between containers, and specially accounted for
- I/O activity is also harder to track
@@ -467,11 +467,11 @@ class: extra-details
- Arbitrary metrics related to your application and business
-- System performance: request latency, error rate ...
+- System performance: request latency, error rate...
-- Volume information: number of rows in database, message queue size ...
+- Volume information: number of rows in database, message queue size...
-- Business data: inventory, items sold, revenue ...
+- Business data: inventory, items sold, revenue...
---
@@ -541,8 +541,8 @@ class: extra-details
- That person can set up queries and dashboards for the rest of the team
-- It's a little bit like knowing how to optimize SQL queries, Dockerfiles ...
+- It's a little bit like knowing how to optimize SQL queries, Dockerfiles...
Don't panic if you don't know these tools!
-... But make sure at least one person in your team is on it 💯
+...But make sure at least one person in your team is on it 💯

View File

@@ -86,17 +86,17 @@ Each pod is assigned a QoS class (visible in `status.qosClass`).
- as long as the container uses less than the limit, it won't be affected
-- if all containers in a pod have *(limits=requests)*, QoS is "Guaranteed"
+- if all containers in a pod have *(limits=requests)*, QoS is considered "Guaranteed"
- If requests < limits:
- as long as the container uses less than the request, it won't be affected
-- otherwise, it might be killed / evicted if the node gets overloaded
+- otherwise, it might be killed/evicted if the node gets overloaded
-- if at least one container has *(requests<limits)*, QoS is "Burstable"
+- if at least one container has *(requests<limits)*, QoS is considered "Burstable"
-- If a pod doesn't have any request nor limit, QoS is "BestEffort"
+- If a pod doesn't have any request nor limit, QoS is considered "BestEffort"
---
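The "Guaranteed" case can be checked directly (pod name "qos-demo" is illustrative; a running cluster is assumed):

```shell
# A pod whose single container has limits=requests
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests: {cpu: 100m, memory: 128Mi}
      limits:   {cpu: 100m, memory: 128Mi}
EOF

# Since limits=requests for every container, this should print "Guaranteed"
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
```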
@@ -400,7 +400,7 @@ These quotas will apply to the namespace where the ResourceQuota is created.
- Quotas can be created with a YAML definition
-- ... Or with the `kubectl create quota` command
+- ...Or with the `kubectl create quota` command
- Example:
```bash
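# Illustrative example; the quota name and the hard limits are hypothetical
kubectl create quota work-quota --hard=pods=10,requests.cpu=4,requests.memory=8Gi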