Compare commits


1 Commit

Author SHA1 Message Date
Jerome Petazzoni
3fdf3534d6 🔥 Prepare Reblaze August content 2021-08-14 22:03:58 +02:00
19 changed files with 276 additions and 782 deletions


@@ -2,7 +2,7 @@
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /week1.yml.html 200!
/ /kube.yml.html 200!
# And this allows us to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack

slides/exercises.md Normal file (85 lines)

@@ -0,0 +1,85 @@
## Exercises
- At the end of each day, we'll suggest a few more in-depth exercises
- Try to complete them (either at the end of the day, or later, if you can!)
- The exercises should be very quick for someone who already knows Kubernetes
- But they can be more challenging if they concern parts that you haven't used yet!
---
## Day 1
- Deploy a local Kubernetes cluster if you don't already have one
(you can use Docker Desktop, KinD, minikube... whatever you like)
- Deploy dockercoins on that cluster
(feel free to use the YAML file for convenience)
- Connect to the web UI in your browser
(you can expose the port, or use port-forward, or anything you like)
- Scale up dockercoins
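If you deploy the components by hand rather than from the YAML file, each one boils down to a Deployment like this (a sketch, not the official manifest; the image tag matches the images listed later in this material):

```yaml
# Sketch: minimal Deployment for the dockercoins worker component
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: dockercoins/worker:v0.1
```

Scaling up then amounts to increasing `spec.replicas` (or running `kubectl scale deployment worker --replicas=10`).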
---
## Day 2
- Add the Kubernetes dashboard to your local cluster
- Make sure that dockercoins is deployed in a specific namespace
- Use the dashboard to view that namespace in read-only mode
(hint: you'll need a service account, rolebinding, and token)
- Tweak permissions so that you can scale deployments in that namespace
- Add an ingress controller to your local cluster
- Configure an ingress resource to access the web UI with `dockercoins.localdev.me`
(`*.localdev.me` resolves to 127.0.0.1)
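For the read-only access, one possible sketch is a dedicated ServiceAccount bound to the built-in `view` ClusterRole; using a RoleBinding scopes it to a single namespace (all names here are illustrative, and the namespace is assumed to be `dockercoins`):

```yaml
# Sketch: read-only access to one namespace for the dashboard
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer
  namespace: dockercoins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-viewer
  namespace: dockercoins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view          # built-in read-only role
subjects:
- kind: ServiceAccount
  name: dashboard-viewer
  namespace: dockercoins
```

The token to log into the dashboard can then be read from the ServiceAccount's token Secret (on clusters predating Kubernetes 1.24, one is created automatically).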
---
## Day 3
- Create a Helm chart to deploy a generic microservice
(using `helm create` to get a generic chart and tweaking that chart)
- Deploy dockercoins by instantiating that chart multiple times
(one time per service, so 5 times total)
- Create a "meta" Helm chart to install the 5 components of dockercoins
(using chart dependencies and aliases)
- Bonus: use Bitnami's redis chart for the dockercoins redis component
---
## Day 4
- Deploy a Kubernetes cluster with multiple nodes
(you can use something like KinD, k3d, or even a managed k8s)
- If the cluster doesn't already have a storage class, add one
(for instance, by using OpenEBS)
- Deploy the Consul or the PostgreSQL example
- Destroy a node and:
- verify the failover behavior (for Consul)
- trigger the failover behavior (for PostgreSQL)
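For the storage class step, a sketch assuming OpenEBS: its chart typically ships an `openebs-hostpath` class backed by the LocalPV provisioner, which can be marked as the default (the class name is an assumption; check what your install created):

```yaml
# Sketch: make the OpenEBS LocalPV hostpath class the default StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```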


@@ -1,11 +0,0 @@
## Exercise - Healthchecks
- Add readiness and liveness probes to a web service
(we will use the `rng` service in the dockercoins app)
- Observe the correct behavior of the readiness probe
(when deploying e.g. an invalid image)
- Observe the behavior of the liveness probe


@@ -1,37 +0,0 @@
# Exercise - Healthchecks
- We want to add healthchecks to the `rng` service in dockercoins
- First, deploy a new copy of dockercoins
- Then, add a readiness probe on the `rng` service
(using a simple HTTP check on the `/` route of the service)
- Check what happens when deploying an invalid image (e.g. `alpine`)
- Then, add a liveness probe on the `rng` service
(with the same parameters)
- Scale up the `worker` service (to 15+ workers) and observe
- What happens, and how can we improve the situation?
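The probes described above could be sketched like this in the `rng` container spec (the port number is an assumption; check what the service actually listens on):

```yaml
# Sketch: HTTP readiness and liveness probes for the rng container
containers:
- name: rng
  image: dockercoins/rng:v0.1
  readinessProbe:
    httpGet:
      path: /
      port: 80        # port is an assumption
    periodSeconds: 5
  livenessProbe:      # "same parameters" as the readiness probe
    httpGet:
      path: /
      port: 80
    periodSeconds: 5
```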
---
## Goal
- *Before* adding the readiness probe:
updating the image of the `rng` service with `alpine` should break it
- *After* adding the readiness probe:
updating the image of the `rng` service with `alpine` shouldn't break it
- When adding the liveness probe, nothing special should happen
- Scaling the `worker` service will then cause disruptions
- The final goal is to understand why, and how to fix it


@@ -1,9 +0,0 @@
## Exercise - Helm Charts
- Create a Helm chart to deploy a generic microservice
- Deploy dockercoins by instantiating that chart multiple times
- Bonus: create a "meta" Helm chart to install the 5 components of dockercoins
- Bonus: use an external chart for the redis component


@@ -1,82 +0,0 @@
# Exercise - Helm Charts
- We want to deploy dockercoins with a Helm chart
- We want to have a "generic chart" and instantiate it 5 times
(once for each service)
- We will pass values to the chart to customize it for each component
(to indicate which image to use, which ports to expose, etc.)
- We'll use `helm create` as a starting point for our generic chart
---
(using `helm create` to get a generic chart and tweaking that chart)
- Deploy dockercoins by instantiating that chart multiple times
(one time per service, so 5 times total)
- Create a "meta" Helm chart to install the 5 components of dockercoins
(using chart dependencies and aliases)
- Bonus: use Bitnami's redis chart for the dockercoins redis component
---
## Goal
- Have a directory with the generic chart
(e.g. `generic-chart`)
- Have 5 value files
(e.g. `hasher.yml`, `redis.yml`, `rng.yml`, `webui.yml`, `worker.yml`)
- Be able to install dockercoins by running 5 times:
`helm install X ./generic-chart --values=X.yml`
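The exact keys depend on how the generic chart was written; since `helm create` is the starting point, a hypothetical `hasher.yml` could reuse its default value layout:

```yaml
# Hypothetical values file for the hasher component
# (keys follow the layout generated by `helm create`;
#  the port number is an assumption)
image:
  repository: dockercoins/hasher
  tag: v0.1
service:
  type: ClusterIP
  port: 80
```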
---
## Hints
- There are many little things to tweak in the generic chart
(service names, port numbers, healthchecks...)
- Check the training slides if you need a refresher!
---
## Bonus 1
- Create a "meta chart" or "umbrella chart" to install all 5 components
(so that dockercoins can be installed with a single `helm install` command)
- This will require expressing dependencies, and using the `alias` keyword
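As a sketch, the umbrella chart's `Chart.yaml` could declare the generic chart five times, distinguished by `alias` (paths and versions are illustrative):

```yaml
# Sketch: umbrella chart depending on the generic chart 5 times
apiVersion: v2
name: dockercoins
version: 0.1.0
dependencies:
- name: generic-chart
  version: 0.1.0
  repository: file://../generic-chart
  alias: hasher
- name: generic-chart
  version: 0.1.0
  repository: file://../generic-chart
  alias: redis
# ...repeat with alias: rng, webui, worker
```

Each alias becomes a separate release of the subchart, and values can be passed per alias (e.g. a top-level `hasher:` section in the umbrella chart's values).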
---
## Bonus 2
- Replace the `redis` component with an external chart
(e.g. Bitnami's redis chart)
- This will require passing extra values to that chart
(to disable persistence, replication, password authentication)
- This will also require either:
- importing the chart and tweaking it to change the service name
- adding an ExternalName service pointing to the new redis component
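The ExternalName option could be sketched like this (the target DNS name depends on the release name and naming scheme of the external chart; `myredis-master` is an assumption):

```yaml
# Sketch: alias "redis" to the service created by the external chart
# (ExternalName returns a CNAME to the given DNS name)
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ExternalName
  externalName: myredis-master.default.svc.cluster.local
```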


@@ -1,9 +0,0 @@
## Exercise - Ingress
- Add an ingress controller to a Kubernetes cluster
- Create an ingress resource for a web app on that cluster
- Challenge: accessing/exposing port 80
(different methods depending on how the cluster was deployed)


@@ -1,47 +0,0 @@
# Exercise - Ingress
- We want to expose a web app through an ingress controller
- This will require:
- the web app itself (dockercoins, NGINX, whatever we want)
- an ingress controller (we suggest Traefik)
- a domain name (use `*.nip.io` or `*.localdev.me`)
- an ingress resource
---
## Goal
- We want to be able to access the web app using a URL like:
http://webapp.localdev.me
*or*
http://webapp.A.B.C.D.nip.io
(where A.B.C.D is the IP address of one of our nodes)
---
## Hints
- Traefik can be installed with Helm
(it can be found on the Artifact Hub)
- If using Kubernetes 1.22+, make sure to use Traefik 2.5+
- If our cluster supports LoadBalancer Services: easy
(nothing special to do)
- For local clusters, things can be more difficult; two options:
- map localhost:80 to e.g. a NodePort service, and use `*.localdev.me`
- use hostNetwork, or ExternalIP, and use `*.nip.io`


@@ -1,7 +0,0 @@
## Exercise - Deploy Dockercoins
- Deploy the dockercoins application to our Kubernetes cluster
- Connect components together
- Expose the web UI and open it in a web browser to check that it works


@@ -1,47 +0,0 @@
# Exercise - Deploy Dockercoins
- We want to deploy the dockercoins app
- There are 5 components in the app:
hasher, redis, rng, webui, worker
- We'll use one Deployment for each component
(see next slide for the images to use)
- We'll connect them with Services
- We'll check that we can access the web UI in a browser
---
## Images
- hasher → `dockercoins/hasher:v0.1`
- redis → `redis`
- rng → `dockercoins/rng:v0.1`
- webui → `dockercoins/webui:v0.1`
- worker → `dockercoins/worker:v0.1`
---
## Goal
- We should be able to see the web UI in our browser
(with the graph showing approximately 3-4 hashes/second)
---
## Hints
- Make sure to expose services with the right ports
(check the logs of the worker; they indicate the port numbers)
- The web UI can be exposed with a NodePort Service
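A NodePort Service for the web UI could look like this (the port numbers are assumptions; as the hint says, the worker logs indicate the real ones):

```yaml
# Sketch: expose the webui Deployment on a NodePort
apiVersion: v1
kind: Service
metadata:
  name: webui
spec:
  type: NodePort
  selector:
    app: webui        # must match the webui pods' labels
  ports:
  - port: 80          # port assumed; check the app's listening port
    targetPort: 80
```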


@@ -1,9 +0,0 @@
## Exercise - Local Cluster
- Deploy a local Kubernetes cluster if you don't already have one
- Deploy dockercoins on that cluster
- Connect to the web UI in your browser
- Scale up dockercoins


@@ -1,43 +0,0 @@
# Exercise - Local Cluster
- We want to have our own local Kubernetes cluster
(we can use Docker Desktop, KinD, minikube... anything will do!)
- Then we want to run a copy of dockercoins on that cluster
- We want to be able to connect to the web UI
(we can expose the port, or use port-forward, or whatever)
---
## Goal
- Be able to see the dockercoins web UI running on our local cluster
---
## Hints
- On a Mac or Windows machine:
the easiest solution is probably Docker Desktop
- On a Linux machine:
the easiest solution is probably KinD or k3d
- To connect to the web UI:
`kubectl port-forward` is probably the easiest solution
---
## Bonus
- If you already have a local Kubernetes cluster:
try to run another one!
- Try to use another method than `kubectl port-forward`


@@ -1,54 +1,62 @@
# Exposing HTTP services with Ingress resources
- HTTP services are typically exposed on port 80
- *Services* give us a way to access a pod or a set of pods
(and 443 for HTTPS)
- Services can be exposed to the outside world:
- `NodePort` services are great, but they are *not* on port 80
- with type `NodePort` (on a port >30000)
(by default, they use port range 30000-32767)
- with type `LoadBalancer` (allocating an external load balancer)
- How can we get *many* HTTP services on port 80? 🤔
- What about HTTP services?
- how can we expose `webui`, `rng`, `hasher`?
- the Kubernetes dashboard?
- a new version of `webui`?
---
## Various ways to expose something on port 80
## Exposing HTTP services
- Service with `type: LoadBalancer`
- If we use `NodePort` services, clients have to specify port numbers
*costs a little bit of money; not always available*
(i.e. http://xxxxx:31234 instead of just http://xxxxx)
- Service with one (or multiple) `ExternalIP`
- `LoadBalancer` services are nice, but:
*requires public nodes; limited by number of nodes*
- they are not available in all environments
- Service with `hostPort` or `hostNetwork`
- they often carry an additional cost (e.g. they provision an ELB)
*same limitations as `ExternalIP`; even harder to manage*
- they require one extra step for DNS integration
<br/>
(waiting for the `LoadBalancer` to be provisioned; then adding it to DNS)
- Ingress resources
*addresses all these limitations, yay!*
- We could build our own reverse proxy
---
## `LoadBalancer` vs `Ingress`
## Building a custom reverse proxy
- Service with `type: LoadBalancer`
- There are many options available:
- requires a particular controller (e.g. CCM, MetalLB)
- costs a bit of money for each service
- if TLS is desired, it has to be implemented by the app
- works for any TCP protocol (not just HTTP)
- doesn't interpret the HTTP protocol (no fancy routing)
Apache, HAProxy, Hipache, NGINX, Traefik, ...
- Ingress
(look at [jpetazzo/aiguillage](https://github.com/jpetazzo/aiguillage) for a minimal reverse proxy configuration using NGINX)
- requires an ingress controller
- flat cost regardless of number of ingresses
- can implement TLS transparently for the app
- only supports HTTP
- can do content-based routing (e.g. per URI)
- Most of these options require us to update/edit configuration files after each change
- Some of them can pick up virtual hosts and backends from a configuration store
- Wouldn't it be nice if this configuration could be managed with the Kubernetes API?
--
- Enter.red[¹] *Ingress* resources!
.footnote[.red[¹] Pun maybe intended.]
---
@@ -58,47 +66,17 @@
- Designed to expose HTTP services
- Requires an *ingress controller*
- Basic features:
(otherwise, resources can be created, but nothing happens)
- load balancing
- SSL termination
- name-based virtual hosting
- Some ingress controllers are based on existing load balancers
- Can also route to different services depending on:
(HAProxy, NGINX...)
- Some are standalone, and sometimes designed for Kubernetes
(Contour, Traefik...)
- Note: there is no "default" or "official" ingress controller!
---
## Ingress standard features
- Load balancing
- SSL termination
- Name-based virtual hosting
- URI routing
(e.g. `/api` → `api-service`, `/static` → `assets-service`)
---
## Ingress extended features
(Not always supported; when available, typically implemented through annotations, CRDs, etc.)
- Routing with other headers or cookies
- A/B testing
- Canary deployment
- etc.
- URI path (e.g. `/api` → `api-service`, `/static` → `assets-service`)
- Client headers, including cookies (for A/B testing, canary deployment...)
- and more!
---
@@ -106,33 +84,19 @@
- Step 1: deploy an *ingress controller*
(one-time setup)
- ingress controller = load balancer + control loop
- Step 2: create *Ingress resources*
- the control loop watches over ingress resources, and configures the LB accordingly
- maps a domain and/or path to a Kubernetes Service
- the controller watches ingress resources and sets up a LB
- Step 3: set up DNS
- Step 2: set up DNS
- associate DNS entries with the load balancer address
---
- Step 3: create *ingress resources*
class: extra-details
- the ingress controller picks up these resources and configures the LB
## Single or multiple LoadBalancer
- Most ingress controllers will create a LoadBalancer Service
- We need to point our DNS entries to the IP address of that LB
- Some rare ingress controllers will allocate one LB per ingress resource
(example: by default, the AWS ingress controller based on ALBs)
- This leads to increased costs
- Step 4: profit!
---
@@ -460,47 +424,7 @@ This is normal: we haven't provided any ingress rule yet.
---
## Creating ingress resources
- Before Kubernetes 1.19, we must use YAML manifests
(see example on next slide)
- Since Kubernetes 1.19, we can use `kubectl create ingress`
```bash
kubectl create ingress cheddar \
--rule=cheddar.`A.B.C.D`.nip.io/*=cheddar:80
```
- We can specify multiple rules per resource
```bash
kubectl create ingress cheeses \
--rule=cheddar.`A.B.C.D`.nip.io/*=cheddar:80 \
--rule=stilton.`A.B.C.D`.nip.io/*=stilton:80 \
--rule=wensleydale.`A.B.C.D`.nip.io/*=wensleydale:80
```
---
## Pay attention to the `*`!
- The `*` is important:
```
--rule=cheddar.A.B.C.D.nip.io/`*`=cheddar:80
```
- It means "all URIs below that path"
- Without the `*`, it means "only that exact path"
(and requests for e.g. images or other URIs won't work)
---
## Ingress resources in YAML
## What does an ingress resource look like?
Here is a minimal host-based ingress resource:
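(The manifest itself falls outside this hunk; a reconstruction sketch, using the `networking.k8s.io/v1beta1` API that this course's YAML example relies on, would be:)

```yaml
# Sketch: minimal host-based ingress (deprecated v1beta1 API)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cheddar
spec:
  rules:
  - host: cheddar.A.B.C.D.nip.io   # replace A.B.C.D with a node's IP
    http:
      paths:
      - path: /
        backend:
          serviceName: cheddar
          servicePort: 80
```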
@@ -525,37 +449,39 @@ spec:
---
class: extra-details
## Creating our first ingress resources
## Ingress API version
.exercise[
- The YAML on the previous slide uses `apiVersion: networking.k8s.io/v1beta1`
- Edit the file `~/container.training/k8s/ingress.yaml`
- Starting with Kubernetes 1.19, `networking.k8s.io/v1` is available
- Replace A.B.C.D with the IP address of `node1`
- However, with Kubernetes 1.19 (and later), we can use `kubectl create ingress`
- Apply the file
- We chose to keep an "old" (deprecated!) YAML example for folks still using older versions of Kubernetes
- Open http://cheddar.A.B.C.D.nip.io
- If we want to see "modern" YAML, we can use `-o yaml --dry-run=client`:
]
```bash
kubectl create ingress cheddar -o yaml --dry-run=client \
--rule=cheddar.`A.B.C.D`.nip.io/*=cheddar:80
```
(An image of a piece of cheese should show up.)
---
## Creating ingress resources
## Creating the other ingress resources
- Create the ingress resources with `kubectl create ingress`
.exercise[
(or use the YAML manifests if using Kubernetes 1.18 or older)
- Edit the file `~/container.training/k8s/ingress.yaml`
- Make sure to update the hostnames!
- Replace `cheddar` with `stilton` (in `name`, `host`, `serviceName`)
- Check that you can connect to the exposed web apps
- Apply the file
- Check that `stilton.A.B.C.D.nip.io` works correctly
- Repeat for `wensleydale`
]
---
@@ -581,31 +507,41 @@ class: extra-details
---
## Ingress in the past
## Ingress: the good
- Before the v1 spec, some features were not standardized
- The traffic flows directly from the ingress load balancer to the backends
- Example: stripping path prefixes in Traefik vs NGINX
- it doesn't need to go through the `ClusterIP`
- in fact, we don't even need a `ClusterIP` (we can use a headless service)
- The load balancer can be outside of Kubernetes
(as long as it has access to the cluster subnet)
- This allows the use of external (hardware, physical machines...) load balancers
- Annotations can encode special features
(rate-limiting, A/B testing, session stickiness, etc.)
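A headless service, mentioned above, is simply a Service with `clusterIP: None`; DNS then resolves directly to the pod IPs (the name and selector here are illustrative):

```yaml
# Sketch: headless service (no ClusterIP allocated)
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  clusterIP: None
  selector:
    app: webapp
  ports:
  - port: 80
```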
---
## Ingress: the bad
- Aforementioned "special features" are not standardized yet
- Some controllers will support them; some won't
- Even relatively common features (stripping a path prefix) can differ:
- [traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip](https://docs.traefik.io/user-guide/kubernetes/#path-based-routing)
- [ingress.kubernetes.io/rewrite-target: /](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/rewrite)
- However, the v1 spec didn't standardize everything
- The Ingress spec stabilized in Kubernetes 1.19 ...
(e.g. A/B, sticky sessions, canary...)
---
## Ingress in the future
- The [Gateway API SIG](https://gateway-api.sigs.k8s.io/) might be the future of Ingress
- It proposes new resources:
GatewayClass, Gateway, HTTPRoute, TCPRoute...
- It is still in alpha stage
... without specifying these features! 😭
---


@@ -1,86 +0,0 @@
title: |
Advanced
Kubernetes
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- #1
- k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/internal-apis.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
- #2
- k8s/multinode.md
- k8s/cni.md
- k8s/interco.md
- #3
- k8s/cni-internals.md
- k8s/apilb.md
- k8s/control-plane-auth.md
- |
# (Extra content)
- k8s/staticpods.md
- k8s/cluster-upgrade.md
- #4
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- |
# (Extra content)
- k8s/helm-create-better-chart.md
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
- #5
- k8s/extending-api.md
- k8s/operators.md
- k8s/sealed-secrets.md
- k8s/crd.md
#- k8s/exercise-sealed-secrets.md
- #6
- k8s/ingress-tls.md
- k8s/cert-manager.md
- k8s/eck.md
- #7
- k8s/admission.md
- k8s/kyverno.md
- #8
- k8s/aggregation-layer.md
- k8s/metrics-server.md
- k8s/prometheus.md
- k8s/prometheus-stack.md
- k8s/hpa-v2.md
- #9
- k8s/operators-design.md
- k8s/kubebuilder.md
- k8s/events.md
- k8s/finalizers.md
- |
# (Extra content)
- k8s/owners-and-dependents.md
- k8s/apiserver-deepdive.md
#- k8s/record.md
- shared/thankyou.md


@@ -1,156 +0,0 @@
title: |
Deploying and Scaling Microservices
with Docker and Kubernetes
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
content:
- shared/title.md
#- logistics.md
- k8s/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
-
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/batch-jobs.md
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
-
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/exercise-wordsmith.md
- k8s/yamldeploy.md
-
- k8s/setup-overview.md
- k8s/setup-devel.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/dashboard.md
- k8s/k9s.md
- k8s/tilt.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/authoring-yaml.md
#- k8s/exercise-yaml.md
-
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/record.md
-
- k8s/namespaces.md
- k8s/localkubeconfig.md
#- k8s/access-eks-cluster.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
-
- k8s/ingress.md
- k8s/ingress-tls.md
- k8s/cert-manager.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
#- k8s/exercise-helm.md
- k8s/gitlab.md
-
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/podsecuritypolicy.md
- k8s/user-cert.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/control-plane-auth.md
-
- k8s/volumes.md
#- k8s/exercise-configmap.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
-
- k8s/configuration.md
- k8s/secrets.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
- k8s/openebs.md
-
- k8s/logs-centralized.md
- k8s/prometheus.md
- k8s/prometheus-stack.md
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- k8s/hpa-v2.md
-
- k8s/extending-api.md
- k8s/apiserver-deepdive.md
- k8s/crd.md
- k8s/aggregation-layer.md
- k8s/admission.md
- k8s/operators.md
- k8s/operators-design.md
- k8s/kubebuilder.md
- k8s/sealed-secrets.md
#- k8s/exercise-sealed-secrets.md
- k8s/kyverno.md
- k8s/eck.md
- k8s/finalizers.md
- k8s/owners-and-dependents.md
- k8s/events.md
-
- k8s/dmuc.md
- k8s/multinode.md
- k8s/cni.md
- k8s/cni-internals.md
- k8s/apilb.md
- k8s/staticpods.md
-
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/gitworkflows.md
-
- k8s/lastwords.md
- k8s/links.md
- shared/thankyou.md


@@ -1,14 +1,14 @@
title: |
Kubernetes Training
Week 1
Kubernetes
(Intermediate)
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "[#ext_kube_training](https://zenimaxonline.slack.com/archives/C02E9LSKNMD)"
chat: Slack
gitrepo: github.com/jpetazzo/container.training
slides: https://2021-09-zos.container.training/
slides: https://2021-08-reblaze.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
@@ -18,10 +18,6 @@ exclude:
content:
- shared/title.md
- logistics.md
- exercises/k8sfundamentals-brief.md
- exercises/localcluster-brief.md
- exercises/healthchecks-brief.md
- exercises/ingress-brief.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
@@ -29,65 +25,66 @@ content:
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- shared/connecting.md
- exercises.md
- # DAY 1
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
- exercises/k8sfundamentals-details.md
- k8s/ourapponkube.md
- # DAY 2
- k8s/namespaces.md
- k8s/yamldeploy.md
- k8s/authoring-yaml.md
- k8s/namespaces.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/authoring-yaml.md
- k8s/rollout.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
#- k8s/record.md
#- k8s/exercise-yaml.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
#- k8s/kubectlproxy.md
- k8s/setup-overview.md
- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
#- k8s/kubectlproxy.md
- exercises/localcluster-details.md
- # DAY 3
- k8s/logs-cli.md
- k8s/rollout.md
- k8s/k9s.md
- k8s/tilt.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
#- k8s/record.md
#- k8s/kubectlscale.md
- # DAY 2
- k8s/ingress.md
- k8s/authn-authz.md
- k8s/dashboard.md
- k8s/netpol.md
#- k8s/ingress-tls.md
- k8s/volumes.md
- k8s/configuration.md
- k8s/secrets.md
- exercises/healthchecks-details.md
- # DAY 3
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
#- k8s/exercise-helm.md
#- k8s/gitlab.md
- # DAY 4
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- k8s/ingress.md
#- k8s/ingress-tls.md
- exercises/ingress-details.md
- # DAY 5
- k8s/tilt.md
- k8s/batch-jobs.md
- k8s/logs-centralized.md
- k8s/prometheus.md
- k8s/prometheus-stack.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
#- k8s/portworx.md
- k8s/openebs.md
#- k8s/extending-api.md
#- k8s/admission.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/staticpods.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
- shared/thankyou.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/podsecuritypolicy.md


@@ -0,0 +1,21 @@
## Intros
- Hello! We are:
- 👷🏻‍♀️ AJ ([@s0ulshake], [EphemeraSearch])
- 🐳 Jérôme ([@jpetazzo], Enix SAS)
- The training will run for 4 hours, with a 10-minute break every hour
(the middle break will be a bit longer)
- Feel free to ask questions at any time
- *Especially when you see full screen container pictures!*
- Live feedback, questions, help: @@CHAT@@
[EphemeraSearch]: https://ephemerasearch.com/
[@s0ulshake]: https://twitter.com/s0ulshake
[@jpetazzo]: https://twitter.com/jpetazzo


@@ -1,36 +1,17 @@
## Intros
- Hello! I'm Jérôme Petazzoni ([@jpetazzo])
- Hello! I'm Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo))
- The training will run from 10am to 2pm (Eastern time)
- The training will run from 10:00 to 16:00 (Israel time)
- There will be a coffee break around 11:15am
*Sunday, Monday, Wednesday, Thursday (not Tuesday!)*
- Lunch break will be 12:30pm-1pm
- There will be a lunch break between 13:00 and 14:00
(And coffee breaks!)
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*
- Use @@CHAT@@ to ask questions, get help, etc.
[@alexbuisine]: https://twitter.com/alexbuisine
[EphemeraSearch]: https://ephemerasearch.com/
[@jpetazzo]: https://twitter.com/jpetazzo
[@s0ulshake]: https://twitter.com/s0ulshake
---
## Exercises
- At the end of each day, there is a series of exercises
- To make the most out of the training, please try the exercises!
(it will help to practice and memorize the content of the day)
- We recommend taking at least one hour to work on the exercises
(if you understood the content of the day, it will be much faster)
- Each day will start with a quick review of the exercises of the previous day
- Live feedback, questions, help: @@CHAT@@


@@ -34,6 +34,23 @@ If anything goes wrong — ask for help!
---
## Cloning the container.training repository
- We will use many YAML files and other assets during the training
- All these files are stored in a public git repository
.exercise[
- Clone the repository:
```bash
git clone https://container.training
```
]
---
class: in-person
## `tailhist`