Compare commits

...

26 Commits

Author / SHA1 / Message / Date

Jerome Petazzoni   c596f54dfc  fix-redirects.sh: adding forced redirect  2020-04-07 16:57:42 -05:00
Bridget Kromhout   99271a09d3  Merge pull request #391 from bridgetkromhout/velocityeu2018 (updating ssid/password)  2018-10-30 23:33:24 +01:00
Bridget Kromhout   e51f110e9d  updating ssid/password  2018-10-30 22:31:58 +00:00
Bridget Kromhout   58936f098f  Merge pull request #389 from bridgetkromhout/velocityeu2018 (veleu 2018 updates)  2018-10-30 14:00:51 +01:00
Bridget Kromhout   d6f01d5302  veleu 2018 updates  2018-10-30 13:59:20 +01:00
Bridget Kromhout   d5d281b627  Merge pull request #388 from bridgetkromhout/velocityeu2018 (vel eu 2018 updates)  2018-10-30 13:45:12 +01:00
Bridget Kromhout   0633f952d4  Merge branch 'velocityeu2018' into velocityeu2018  2018-10-30 13:44:25 +01:00
Bridget Kromhout   a93c618154  Update thankyou.md  2018-10-30 13:40:33 +01:00
Bridget Kromhout   4a25c66206  Update logistics-bridget.md (EU logistics)  2018-10-30 13:36:16 +01:00
Bridget Kromhout   8530dc750f  Merge pull request #387 from jpetazzo/velny-k8s101-2018 (bring NY 2018 changes to EU 2018)  2018-10-30 12:32:24 +01:00
Bridget Kromhout   0571b1f3a5  Merge branch 'master' of github.com:jpetazzo/container.training  2018-10-30 12:26:54 +01:00
Bridget Kromhout   24e7cab2ca  Merge pull request #376 from bridgetkromhout/velny-k8s101-2018 (Adding links to thanks slide)  2018-09-30 21:55:17 -04:00
Bridget Kromhout   09a364f554  Adding links to thanks slide  2018-09-30 21:51:58 -04:00
Bridget Kromhout   c18d07b06f  Merge pull request #375 from bridgetkromhout/velny-k8s101-2018 (Custom thanks slide)  2018-09-30 21:15:09 -04:00
Bridget Kromhout   41cd6ad554  Custom thanks slide  2018-09-30 21:13:27 -04:00
Bridget Kromhout   565db253bf  Merge pull request #374 from bridgetkromhout/velny-k8s101-2018 (Velny k8s101 2018 updates)  2018-09-30 20:57:37 -04:00
Bridget Kromhout   c46baa0f74  Merge branch 'master' into velny-k8s101-2018  2018-09-30 20:56:17 -04:00
Bridget Kromhout   cb94697a55  Merge pull request #372 from bridgetkromhout/velny-k8s101-2018 (Velny k8s101 2018)  2018-09-30 20:51:50 -04:00
Bridget Kromhout   74a30db7bd  Limiting scope for this event  2018-09-30 20:49:54 -04:00
Bridget Kromhout   336cfbe4dc  Merge branch 'master' into velny-k8s101-2018  2018-09-30 20:36:04 -04:00
Bridget Kromhout   48a834e85c  Merge pull request #369 from bridgetkromhout/velny-k8s101-2018 (Edits for shorter workshop)  2018-09-30 20:11:27 -04:00
Bridget Kromhout   11ca023e45  Edits for shorter workshop  2018-09-30 19:52:47 -04:00
Bridget Kromhout   e2f020b994  Merge pull request #367 from bridgetkromhout/velny-k8s101-2018 (Readability)  2018-09-30 13:40:27 -05:00
Bridget Kromhout   062e8f124a  Readability  2018-09-30 14:37:49 -04:00
Bridget Kromhout   9f1c3db527  Merge pull request #366 from bridgetkromhout/velny-k8s101-2018 (Adding vel ny 1day)  2018-09-30 13:35:16 -05:00
Bridget Kromhout   9a66a894ba  Adding vel ny 1day  2018-09-30 14:33:41 -04:00
13 changed files with 45 additions and 586 deletions

slides/_redirects (new file)

@@ -0,0 +1 @@
/ /kube-halfday.yml.html 200!
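For context: this `_redirects` file uses the Netlify redirect syntax (`source destination status`), where a trailing `!` on the status makes the rule a forced rewrite that applies even if a file exists at the source path. A sketch of how the rule above reads:

```
# source   destination              status ("!" = forced rewrite)
/          /kube-halfday.yml.html   200!
```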


@@ -171,11 +171,7 @@ class: pic
---
## Do we need to run Docker at all?
No!
--
## Default container runtime
- By default, Kubernetes uses the Docker Engine to run containers
@@ -185,42 +181,6 @@ No!
(like CRI-O, or containerd)
---
## Do we need to run Docker at all?
Yes!
--
- In this workshop, we run our app on a single node first
- We will need to build images and ship them around
- We can do these things without Docker
<br/>
(and get diagnosed with NIH¹ syndrome)
- Docker is still the most stable container engine today
<br/>
(but other options are maturing very quickly)
.footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)]
---
## Do we need to run Docker at all?
- On our development environments, CI pipelines ... :
*Yes, almost certainly*
- On our production servers:
*Yes (today)*
*Probably not (in the future)*
.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]
---
@@ -235,11 +195,12 @@ Yes!
- node (a machine — physical or virtual — in our cluster)
- pod (group of containers running together on a node)
- IP addresses are associated with *pods*, not with individual containers
- service (stable network endpoint to connect to one or multiple containers)
- namespace (more-or-less isolated group of things)
- secret (bundle of sensitive data to be passed to a container)
-And much more!
+- And much more!
- We can see the full list by running `kubectl api-resources`
@@ -250,25 +211,3 @@ Yes!
class: pic
![Node, pod, container](images/k8s-arch3-thanks-weave.png)
---
class: pic
![One of the best Kubernetes architecture diagrams available](images/k8s-arch4-thanks-luxas.png)
---
## Credits
- The first diagram is courtesy of Weave Works
- a *pod* can have multiple containers working together
- IP addresses are associated with *pods*, not with individual containers
- The second diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)
- it's one of the best Kubernetes architecture diagrams available!
Both diagrams used with permission.


@@ -268,7 +268,7 @@ The master node has [taints](https://kubernetes.io/docs/concepts/configuration/t
- Check the logs of all the pods having a label `run=rng`:
```bash
-kubectl logs -l run=rng --tail 1
+kubectl get pods -l run=rng -o name | xargs -n 1 kubectl logs --tail 1
```
]
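The new command runs one `kubectl logs` invocation per matching pod via `xargs -n 1`, which works no matter how many pods carry the label. A minimal sketch of that `xargs -n 1` behavior, using `echo` as a stand-in for `kubectl` and invented pod names:

```shell
# Invented pod names standing in for the output of:
#   kubectl get pods -l run=rng -o name
printf 'pod/rng-1\npod/rng-2\n' |
  xargs -n 1 echo kubectl logs --tail 1
# xargs -n 1 runs the command once per input item, i.e. once per pod
```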


@@ -4,22 +4,6 @@ Our app on Kube
---
## What's on the menu?
In this part, we will:
- **build** images for our app,
- **ship** these images with a registry,
- **run** deployments using these images,
- expose these deployments so they can communicate with each other,
- expose the web UI so we can access it from outside.
---
## The plan
- Build on our control node (`node1`)
@@ -131,47 +115,6 @@ We should see:
---
## Testing our local registry
- We can retag a small image, and push it to the registry
.exercise[
- Make sure we have the busybox image, and retag it:
```bash
docker pull busybox
docker tag busybox $REGISTRY/busybox
```
- Push it:
```bash
docker push $REGISTRY/busybox
```
]
---
## Checking again what's on our local registry
- Let's use the same endpoint as before
.exercise[
- Ensure that our busybox image is now in the local registry:
```bash
curl $REGISTRY/v2/_catalog
```
]
The curl command should now output:
```json
{"repositories":["busybox"]}
```
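If you prefer to check this programmatically, the catalog response is plain JSON; a small sketch parsing it (the response string here is the example from the slide, not a live query):

```python
import json

# Example response body from GET $REGISTRY/v2/_catalog (as shown above)
response_body = '{"repositories":["busybox"]}'

repositories = json.loads(response_body)["repositories"]
print("busybox" in repositories)  # → True
```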
---
## Building and pushing our images
- We are going to use a convenient feature of Docker Compose


@@ -36,10 +36,6 @@
(At least ... not yet! Though it's [experimental in 1.12](https://kubernetes.io/docs/setup/independent/high-availability/).)
--
- "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
---
## Other deployment options


@@ -1,6 +1,6 @@
## Versions installed
-- Kubernetes 1.12.0
+- Kubernetes 1.12.1
- Docker Engine 18.06.1-ce
- Docker Compose 1.21.1


@@ -1,26 +1,3 @@
# Next steps
*Alright, how do I get started and containerize my apps?*
--
Suggested containerization checklist:
.checklist[
- write a Dockerfile for one service in one app
- write Dockerfiles for the other (buildable) services
- write a Compose file for that whole app
- make sure that devs are empowered to run the app in containers
- set up automated builds of container images from the code repo
- set up a CI pipeline using these container images
- set up a CD pipeline (for staging/QA) using these images
]
And *then* it is time to look at orchestration!
---
## Options for our first production cluster
- Get a managed cluster from a major cloud provider (AKS, EKS, GKE...)
@@ -57,26 +34,6 @@ And *then* it is time to look at orchestration!
---
## Namespaces
- Namespaces let you run multiple identical stacks side by side
- Two namespaces (e.g. `blue` and `green`) can each have their own `redis` service
- Each of the two `redis` services has its own `ClusterIP`
- CoreDNS creates two entries, mapping to these two `ClusterIP` addresses:
`redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local`
- Pods in the `blue` namespace get a *search suffix* of `blue.svc.cluster.local`
- As a result, resolving `redis` from a pod in the `blue` namespace yields the "local" `redis`
.warning[This does not provide *isolation*! That would be the job of network policies.]
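The search-suffix mechanics above can be sketched as a tiny resolution helper (pure illustration; real resolution is done by CoreDNS and the pod's `resolv.conf`):

```python
def qualify(name, pod_namespace):
    """Expand a bare service name the way a pod's DNS search list would."""
    if "." in name:
        return name  # already qualified, use as-is
    return f"{name}.{pod_namespace}.svc.cluster.local"

print(qualify("redis", "blue"))   # redis.blue.svc.cluster.local
print(qualify("redis", "green"))  # redis.green.svc.cluster.local
```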
---
## Stateful services (databases etc.)
- As a first step, it is wiser to keep stateful services *outside* of the cluster
@@ -236,16 +193,3 @@ Sorry Star Trek fans, this is not the federation you're looking for!
- Discover resources across clusters
---
## Developer experience
*We've put this last, but it's pretty important!*
- How do you on-board a new developer?
- What do they need to install to get a dev stack?
- How does a code change make it from dev to prod?
- How does someone add a component to a stack?


@@ -35,19 +35,18 @@ chapters:
- - k8s/kubectlrun.md
- k8s/kubectlexpose.md
- k8s/ourapponkube.md
-#- k8s/kubectlproxy.md
-#- k8s/localkubeconfig.md
-#- k8s/accessinternal.md
+- k8s/kubectlproxy.md
+# - k8s/localkubeconfig.md
+# - k8s/accessinternal.md
- - k8s/dashboard.md
- k8s/kubectlscale.md
- k8s/daemonset.md
- k8s/rollout.md
- - k8s/logs-cli.md
# Bridget hasn't added EFK yet
-#- k8s/logs-centralized.md
+# - k8s/logs-centralized.md
- k8s/helm.md
- k8s/namespaces.md
-#- k8s/netpol.md
+- k8s/netpol.md
- k8s/whatsnext.md
# - k8s/links.md
# Bridget-specific


@@ -2,15 +2,26 @@
- Hello! We are:
-- .emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))
+- .emoji[✨] Bridget Kromhout ([@bridgetkromhout](https://twitter.com/bridgetkromhout))
-- .emoji[🌟] Joe ([@joelaha](https://twitter.com/joelaha))
+- .emoji[🌟] Joe Laha ([@joelaha](https://twitter.com/joelaha))
-- The workshop will run from 13:30-16:45
+- The workshop will run from 9:30 - 13:00
-- There will be a break from 15:00-15:15
+- There will be a break from 11:00 - 11:30
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*
---
## Say hi!
- We encourage networking at [#velocityconf](https://twitter.com/hashtag/velocityconf?f=tweets&vertical=default&src=hash)
- Take a minute to introduce yourself to your neighbors
- Tell them where you're from (where you're based out of & what org you work at)
- Share what you're hoping to learn in this session! .emoji[✨]


@@ -1,46 +1,4 @@
# Pre-requirements
- Be comfortable with the UNIX command line
- navigating directories
- editing files
- a little bit of bash-fu (environment variables, loops)
- Some Docker knowledge
- `docker run`, `docker ps`, `docker build`
- ideally, you know how to write a Dockerfile and build it
<br/>
(even if it's a `FROM` line and a couple of `RUN` commands)
- It's totally OK if you are not a Docker expert!
---
class: title
*Tell me and I forget.*
<br/>
*Teach me and I remember.*
<br/>
*Involve me and I learn.*
Misattributed to Benjamin Franklin
[(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/)
---
## Hands-on sections
- The whole workshop is hands-on
- We are going to build, ship, and run containers!
- You are invited to reproduce all the demos
## Hands-on
- All hands-on sections are clearly identified, like the gray rectangle below
@@ -50,53 +8,10 @@ Misattributed to Benjamin Franklin
- Go to @@SLIDES@@ to view these slides
- Join the chat room: @@CHAT@@
<!-- ```open @@SLIDES@@``` -->
]
---
class: in-person
## Where are we going to run our containers?
---
class: in-person, pic
![You get a cluster](images/you-get-a-cluster.jpg)
---
class: in-person
## You get a cluster of cloud VMs
- Each person gets a private cluster of cloud VMs (not shared with anybody else)
- They'll remain up for the duration of the workshop
- You should have a little card with login+password+IP addresses
- You can automatically SSH from one VM to another
- The nodes have aliases: `node1`, `node2`, etc.
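Aliases like `node1` are typically provided by `/etc/hosts` entries or an SSH configuration on the VMs; a hypothetical sketch of the SSH config variant (all hostnames and IPs invented):

```
# ~/.ssh/config (illustrative values only)
Host node1
    HostName 10.10.0.11
Host node2
    HostName 10.10.0.12
```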
---
class: in-person
## Why don't we run containers locally?
- Installing that stuff can be hard on some machines
(32-bit CPU or OS... Laptops without administrator access... etc.)
- *"The whole team downloaded all these container images from the WiFi!
<br/>... and it went great!"* (Literally no-one ever)
- All you need is a computer (or even a phone or tablet!), with:
- an internet connection
@@ -109,203 +24,18 @@ class: in-person
class: in-person
## SSH clients
- On Linux, OS X, FreeBSD... you are probably all set
- On Windows, get one of these:
- [putty](http://www.putty.org/)
- Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH)
- [Git BASH](https://git-for-windows.github.io/)
- [MobaXterm](http://mobaxterm.mobatek.net/)
- On Android, [JuiceSSH](https://juicessh.com/)
([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh))
works pretty well
- Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets
---
class: in-person, extra-details
## What is this Mosh thing?
*You don't have to use Mosh or even know about it to follow along.
<br/>
We're just telling you about it because some of us think it's cool!*
- Mosh is "the mobile shell"
- It is essentially SSH over UDP, with roaming features
- It retransmits packets quickly, so it works great even on lossy connections
(Like hotel or conference WiFi)
- It has intelligent local echo, so it works great even on high-latency connections
(Like hotel or conference WiFi)
- It supports transparent roaming when your client IP address changes
(Like when you hop from hotel to conference WiFi)
---
class: in-person, extra-details
## Using Mosh
- To install it: `(apt|yum|brew) install mosh`
- It has been pre-installed on the VMs that we are using
- To connect to a remote machine: `mosh user@host`
(It is going to establish an SSH connection, then hand off to UDP)
- It requires UDP ports to be open
(By default, it uses a UDP port between 60000 and 61000)
---
class: in-person
## Connecting to our lab environment
.exercise[
- Log into the first VM (`node1`) with your SSH client
<!--
```bash
for N in $(awk '/\Wnode/{print $2}' /etc/hosts); do
ssh -o StrictHostKeyChecking=no $N true
done
```
```bash
if which kubectl; then
kubectl get deploy,ds -o name | xargs -rn1 kubectl delete
kubectl get all -o name | grep -v service/kubernetes | xargs -rn1 kubectl delete --ignore-not-found=true
kubectl -n kube-system get deploy,svc -o name | grep -v dns | xargs -rn1 kubectl -n kube-system delete
fi
```
-->
- Check that you can SSH (without password) to `node2`:
```bash
ssh node2
```
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->
]
If anything goes wrong — ask for help!
---
## Doing or re-doing the workshop on your own?
- Use something like
[Play-With-Docker](http://play-with-docker.com/) or
[Play-With-Kubernetes](https://training.play-with-kubernetes.com/)
Zero setup effort; but environments are short-lived and
might have limited resources
- Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
- Create a bunch of clusters for you and your friends
([instructions](https://@@GITREPO@@/tree/master/prepare-vms))
Bigger setup effort; ideal for group training
---
class: self-paced
## Get your own Docker nodes
- If you already have some Docker nodes: great!
- If not: let's get some, thanks to Play-With-Docker
.exercise[
- Go to http://www.play-with-docker.com/
- Log in
- Create your first node
<!-- ```open http://www.play-with-docker.com/``` -->
]
You will need a Docker ID to use Play-With-Docker.
(Creating a Docker ID is free.)
---
## We will (mostly) interact with node1 only
*These remarks apply only when using multiple nodes, of course.*
- Unless instructed, **all commands must be run from the first VM, `node1`**
- We will only checkout/copy the code on `node1`
- During normal operations, we do not need access to the other nodes
- If we had to troubleshoot issues, we would use a combination of:
- SSH (to access system logs, daemon status...)
- Docker API (to check running containers and container engine status)
---
## Terminals
Once in a while, the instructions will say:
<br/>"Open a new terminal."
There are multiple ways to do this:
- create a new window or tab on your machine, and SSH into the VM;
- use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
---
## Tmux cheatsheet
[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.
*You don't have to use it or even know about it to follow along.
<br/>
But some of us like to use it to switch between terminals.
<br/>
It has been preinstalled on your workshop nodes.*
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → reattach to session


@@ -77,79 +77,6 @@ and displays aggregated logs.
---
class: extra-details
## Compose file format version
*Particularly relevant if you have used Compose before...*
- Compose 1.6 introduced support for a new Compose file format (aka "v2")
- Services are no longer at the top level, but under a `services` section
- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)
- Containers are placed on a dedicated network, making links unnecessary
- There are other minor differences, but upgrade is easy and straightforward
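A minimal sketch of the v2 layout described above (the image names are invented):

```yaml
version: "2"        # must be a string, not an integer
services:           # services now live under this key
  redis:
    image: redis
  worker:
    image: example/worker
```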
---
## Service discovery in container-land
- We do not hard-code IP addresses in the code
- We do not hard-code FQDN in the code, either
- We just connect to a service name, and container-magic does the rest
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
---
## Example in `worker/worker.py`
```python
redis = Redis("`redis`")
def get_random_bytes():
r = requests.get("http://`rng`/32")
return r.content
def hash_bytes(data):
r = requests.post("http://`hasher`/",
data=data,
headers={"Content-Type": "application/octet-stream"})
```
(Full source code available [here](
https://@@GITREPO@@/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))
---
class: extra-details
## Links, naming, and service discovery
- Containers can have network aliases (resolvable through DNS)
- Compose file version 2+ makes each container reachable through its service name
- Compose file version 1 did require "links" sections
- Network aliases are automatically namespaced
- you can have multiple apps declaring and using a service named `database`
- containers in the blue app will resolve `database` to the IP of the blue database
- containers in the green app will resolve `database` to the IP of the green database
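A toy model of the namespacing described above (network names and addresses invented; real resolution is done by Docker's embedded DNS server):

```python
# One alias table per Compose project network
alias_tables = {
    "blue_default":  {"database": "172.18.0.2"},
    "green_default": {"database": "172.19.0.2"},
}

def resolve(network, name):
    """Look up a network alias in the caller's own network only."""
    return alias_tables[network][name]

print(resolve("blue_default", "database"))   # the blue app's database
print(resolve("green_default", "database"))  # the green app's database
```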
---
## What's this application?
--
@@ -240,50 +167,6 @@ Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` sectio
---
class: extra-details
## Why does the speed seem irregular?
- It *looks like* the speed is approximately 4 hashes/second
- Or more precisely: 4 hashes/second, with regular dips down to zero
- Why?
--
class: extra-details
- The app actually has a constant, steady speed: 3.33 hashes/second
<br/>
(which corresponds to 1 hash every 0.3 seconds, for *reasons*)
- Yes, and?
---
class: extra-details
## The reason why this graph is *not awesome*
- The worker doesn't update the counter after every loop, but up to once per second
- The speed is computed by the browser, checking the counter about once per second
- Between two consecutive updates, the counter will increase either by 4, or by 0
- The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.
- What can we conclude from this?
--
class: extra-details
- "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme
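The aliasing effect described above can be reproduced with a toy model (an illustrative assumption, not the app's actual code): the worker completes 10 hashes every 3 seconds, pushes its running total to the counter once per second, and the browser polls slightly faster than once per second:

```python
def perceived_increments(n_samples, poll_period=0.95):
    """Counter deltas as seen by a poller that slightly outpaces the updates."""
    increments, last_seen = [], 0
    for k in range(1, n_samples + 1):
        last_update = int(k * poll_period)  # counter pushed at whole seconds
        counter = last_update * 10 // 3     # 10 hashes every 3 seconds
        increments.append(counter - last_seen)
        last_seen = counter
    return increments

print(perceived_increments(30))
# Mostly 3s and 4s, with occasional dips to 0: the irregular graph.
```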
---
## Stopping the application
- If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app


@@ -4,8 +4,15 @@ Thank you!
---
class: title, in-person
# Thank you!
That's all, folks! <br/> Questions?
- The clusters will be shut down tonight
- If you like:
- [rate this tutorial on the Velocity website](https://conferences.oreilly.com/velocity/vl-eu/public/schedule/evaluate/71149?eval=71149)
- [tweet about what you learned](https://twitter.com/intent/tweet?url=https%3A%2F%2Fcontainer.training&text=Learning%20k8s%20with%20@bridgetkromhout%21&hashtags=VelocityConf), mentioning @bridgetkromhout and #VelocityConf
- [questions, comments, pull requests, workshop invitations, etc](https://github.com/jpetazzo/container.training/)
![end](images/end.jpg)


@@ -8,7 +8,13 @@ class: title, self-paced
class: title, in-person
-@@TITLE@@<br/></br>
+@@TITLE@@
WiFi: OReillyCon<br>
Password: oreilly18
<br>
<br>
.footnote[
**Be kind to the WiFi!**<br/>