Compare commits


65 Commits

Author SHA1 Message Date
Jerome Petazzoni
687b61dbf4 fix-redirects.sh: adding forced redirect 2020-04-07 16:57:06 -05:00
Jerome Petazzoni
22f32ee4c0 Merge branch 'master' into qconsf2018 2018-11-09 02:25:18 -06:00
Jerome Petazzoni
9e051abb32 settings for 4 nodes cluster + two-sided card template 2018-11-09 02:25:00 -06:00
Jerome Petazzoni
ee3c2c3030 Merge branch 'ignore-preflight-errors' into qconsf2018 2018-11-09 02:23:53 -06:00
Jerome Petazzoni
45f9d7bf59 bump versions 2018-11-09 02:23:38 -06:00
Jerome Petazzoni
efb72c2938 Bump all the versions
Bump:
- stern
- Ubuntu

Also, at each place where there is a 'bumpable' version, I added
a ##VERSION## marker, easily greppable.
2018-11-08 20:42:02 -06:00
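The greppable marker mentioned in this commit can be located with a one-liner. A minimal sketch (`sample.sh` is a stand-in file created here for illustration; in the real repo you would grep the checkout):

```shell
# Create a sample file carrying the marker, then find every bumpable spot.
printf '##VERSION##\nsudo curl -L -o /usr/local/bin/stern ...\n' > sample.sh
# -r recurses, -n prints line numbers, so each hit is easy to jump to.
grep -rn '##VERSION##' .
```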
Jerome Petazzoni
357d341d82 Ignore 'wrong Docker version' warning
For some reason, kubeadm doesn't want to deploy with Docker Engine 18.09.
Before, it would just issue a warning; but now apparently the warning blocks
the deployment. So... let's ignore the warning. (I've tested the content
and it works fine with Engine 18.09 as far as I can tell.)
2018-11-08 20:32:52 -06:00
Jerome Petazzoni
d4c338c62c Update prom slides for QCON preload 2018-11-07 23:08:51 -06:00
Bridget Kromhout
3ebcfd142b Merge pull request #394 from jpetazzo/halfday-fullday-twodays
Add kube-twodays.yml
2018-11-07 16:28:20 -05:00
Bridget Kromhout
6c5d049c4c Merge pull request #371 from bridgetkromhout/kubens
Clarify kubens
2018-11-07 16:27:08 -05:00
Bridget Kromhout
072ba44cba Merge pull request #395 from jpetazzo/add-links-to-whatsnext
Add links to what's next section
2018-11-07 16:25:29 -05:00
Bridget Kromhout
bc8a9dc4e7 Merge pull request #398 from jpetazzo/use-dockercoins-from-docker-hub
Add instructions to use the dockercoins/ images
2018-11-07 16:23:37 -05:00
Jerome Petazzoni
d35d186249 Merge branch 'master' into qconsf2018 2018-11-01 19:48:17 -05:00
Jerome Petazzoni
b1ba881eee Limit ElasticSearch RAM to 1 GB
Committing straight to master since this file
is not used by @bridgetkromhout, and people use
that file by cloning the repo (so it has to be
merged in master for people to see it).

HASHTAG YOLO
2018-11-01 19:48:06 -05:00
Jerome Petazzoni
6c8172d7b1 Merge branch 'work-around-kubectl-logs-bug' into qconsf2018 2018-11-01 19:45:45 -05:00
Jerome Petazzoni
d3fac47823 kubectl logs -l ... --tail ... is buggy.
(It always returns 10 lines of output instead
of the requested number.)

This works around the problem, by adding extra
explanations of the issue and providing a shell
function as a workaround.

See kubernetes/kubernetes#70554 for details.
2018-11-01 19:45:13 -05:00
Jerome Petazzoni
4f71074a06 Work around bug in kubectl logs
kubectl logs -l ... --tail ... is buggy.
(It always returns 10 lines of output instead
of the requested number.)

This works around the problem, by adding extra
explanations of the issue and providing a shell
function as a workaround.

See kubernetes/kubernetes#70554 for details.
2018-11-01 19:41:29 -05:00
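The workaround described in this commit boils down to "one `kubectl logs` call per pod". Here is a sketch of that pipeline with the kubectl calls stubbed out (printf/echo stand-ins), so the shape of the fix can be seen without a cluster:

```shell
# Stand-in for: kubectl get pods -o name -l app=rng
list_pods() { printf 'pod/rng-1\npod/rng-2\npod/rng-3\n'; }

# Stand-in for: xargs -rn1 kubectl logs --tail 1
# -n1 runs the command once per pod name; since each invocation targets a
# single pod, the 10-line cap of `kubectl logs -l` never comes into play.
list_pods | xargs -rn1 echo fetch-last-log-line-of:
```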
Jerome Petazzoni
37470fc5ed Merge branch 'use-dockercoins-from-docker-hub' into qconsf2018 2018-11-01 19:08:57 -05:00
Jerome Petazzoni
337a5d94ed Add instructions to use the dockercoins/ images
We have images on the Docker Hub for the various components
of dockercoins. Let's add one slide explaining how to use that,
for people who would be lost or would have issues with their
registry, so that they can catch up.
2018-11-01 19:08:40 -05:00
Jerome Petazzoni
98510f9f1c Setup qconsf2018 2018-11-01 16:10:03 -05:00
Jerome Petazzoni
6be0751147 Merge branch 'preinstall-helm-and-prometheus' into qconsf2018 2018-11-01 15:59:43 -05:00
Jerome Petazzoni
a40b291d54 Merge branch 'kubectl-create-deployment' into qconsf2018 2018-11-01 15:59:21 -05:00
Jerome Petazzoni
f24687e79f Merge branch 'jpetazzo-last-slide' into qconsf2018 2018-11-01 15:59:12 -05:00
Jerome Petazzoni
9f5f16dc09 Merge branch 'halfday-fullday-twodays' into qconsf2018 2018-11-01 15:59:03 -05:00
Jerome Petazzoni
9a5989d1f2 Merge branch 'enixlogo' into qconsf2018 2018-11-01 15:58:55 -05:00
Jerome Petazzoni
43acccc0af Add command to preinstall Helm and Prometheus
In some cases, I would like Prometheus to be pre-installed (so that
it shows a bunch of metrics) without relying on people doing it (and
setting up Helm correctly). This patch makes it possible to run:

./workshopctl helmprom TAG

It will set up Helm with a proper service account, then deploy
the Prometheus chart, disabling the alert manager and persistence,
and assigning the Prometheus server to NodePort 30090.

This command is idempotent.
2018-11-01 15:35:09 -05:00
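The idempotency of `helmprom` comes from the "get it, or create it" pattern used in the script (`kubectl get ... || kubectl create ...`). A stand-alone sketch of that pattern, stubbed with a marker file instead of a Kubernetes service account so it can run anywhere:

```shell
# "Get it, or create it": succeeds and changes nothing when the resource
# already exists. The real command checks a service account; the file
# here is just a stand-in.
ensure_marker() {
  [ -e "$1" ] || touch "$1"
}
ensure_marker helm-sa.marker   # first run creates it
ensure_marker helm-sa.marker   # second run is a no-op: same end state
```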
Jerome Petazzoni
4a447c7bf5 Clarify further kubens vs kns 2018-11-01 13:48:00 -05:00
Jerome Petazzoni
b9de73d0fd Address deprecation of 'kubectl run'
kubectl run is being deprecated as a multi-purpose tool.
This PR replaces 'kubectl run' with 'kubectl create deployment'
in most places (except in the very first example, to reduce the
cognitive load; and when we really want a single-shot container).

It also updates the places where we use a 'run' label, since
'kubectl create deployment' uses the 'app' label instead.

NOTE: this hasn't gone through end-to-end testing yet.
2018-11-01 01:25:26 -05:00
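The label rename this commit describes (`run` → `app`) also has to happen in any saved manifests. A hypothetical migration one-liner, shown here on a throwaway sample file (back up real manifests before running anything like this):

```shell
# Build a sample manifest fragment that still uses the old label key.
printf 'labels:\n  run: rng\n' > manifest.yaml
# Rewrite the `run:` key to `app:`, preserving indentation.
sed -i 's/^\([[:space:]]*\)run:/\1app:/' manifest.yaml
cat manifest.yaml   # now shows "app: rng"
```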
Jerome Petazzoni
6b9b83a7ae Add link to my private training intake form 2018-10-31 22:50:41 -05:00
Jerome Petazzoni
3f7675be04 Add links to what's next section
For each concept that is present in the full-length tutorial,
I added a link to the corresponding chapter in the final section,
so that people who liked the short version can get similarly
presented info from the longer version.
2018-10-30 17:24:27 -05:00
Jerome Petazzoni
b4bb9e5958 Update QCON entries (jpetazzo is delivering twice) 2018-10-30 16:47:44 -05:00
Jerome Petazzoni
9a6160ba1f Add kube-twodays.yml
kube-fullday is now suitable for one-day tutorials
kube-twodays is now suitable for two-day tutorials

I also tweaked the files (added a couple of line breaks) so that line
numbers are aligned across all kube-...yml files.
2018-10-30 16:42:43 -05:00
Bridget Kromhout
1d243b72ec adding vel eu 2018 k8s101 slides
2018-10-30 14:15:44 +01:00
Jerome Petazzoni
c5c1ccaa25 Merge branch 'BretFisher-win-containers-101' 2018-10-29 20:38:21 -05:00
Jerome Petazzoni
b68afe502b Minor formatting/typo edits 2018-10-29 20:38:01 -05:00
Jerome Petazzoni
d18cacab4c Merge branch 'win-containers-101' of git://github.com/BretFisher/container.training into BretFisher-win-containers-101 2018-10-29 19:59:53 -05:00
Bret Fisher
2faca4a507 docker101 fixing titles 2018-10-30 01:53:31 +01:00
Jerome Petazzoni
d797ec62ed Merge branch 'BretFisher-swarm-cicd' 2018-10-29 19:48:59 -05:00
Jerome Petazzoni
a475d63789 add CI/CD slides to self-paced deck as well 2018-10-29 19:48:33 -05:00
Jerome Petazzoni
dd3f2d054f Merge branch 'swarm-cicd' of git://github.com/BretFisher/container.training into BretFisher-swarm-cicd 2018-10-29 19:46:38 -05:00
Bridget Kromhout
73594fd505 Merge pull request #384 from BretFisher/patch-18
swarm workshop at goto canceled 😭
2018-10-26 11:35:53 -05:00
Bret Fisher
16a1b5c6b5 swarm workshop at goto canceled 😭 2018-10-26 07:57:50 +01:00
Bret Fisher
ff7a257844 adding cicd to swarm half day 2018-10-26 07:52:32 +01:00
Bret Fisher
77046a8ddf fixed suggestions 2018-10-26 07:51:09 +01:00
Bret Fisher
3ca696f059 size update from docker docs 2018-10-23 16:27:25 +02:00
Bret Fisher
305db76340 more sizing tweaks 2018-10-23 16:27:25 +02:00
Bret Fisher
b1672704e8 clear up swarm sizes and manager+worker setups
Lots of people will have ~5-10 servers, so let's give them more detailed info.
2018-10-23 16:27:25 +02:00
Jerome Petazzoni
c058f67a1f Add diagram for dockercoins 2018-10-23 16:25:19 +02:00
Alexandre Buisine
ab56c63901 switch to an up-to-date version with the latest cloud-init binary and multi-NIC patch 2018-10-23 16:22:56 +02:00
Bret Fisher
a5341f9403 Add common Windows/macOS hidden files to gitignore 2018-10-17 19:11:37 +02:00
Laurent Grangeau
b2bdac3384 Typo 2018-10-04 18:02:01 +02:00
Bridget Kromhout
a2531a0c63 making sure two-day events still show up
Because we rebuilt today, the two-day events disappeared from the front page. @jpetazzo this is a temporary fix to make them still show up.
2018-09-30 22:07:03 -04:00
Bridget Kromhout
84e2b90375 Update index.yaml
adding slides
2018-09-30 22:05:01 -04:00
Bridget Kromhout
9639dfb9cc Merge pull request #368 from jpetazzo/kube-ps1
kube-ps1 is cool and we should mention it
2018-09-30 20:55:00 -04:00
Bridget Kromhout
8722de6da2 Update namespaces.md 2018-09-30 20:54:31 -04:00
Bridget Kromhout
f2f87e52b0 Merge pull request #373 from bridgetkromhout/bridget-links
Updating Bridget's links
2018-09-30 20:53:26 -04:00
Bridget Kromhout
56ad2845e7 Updating Bridget's links 2018-09-30 20:52:24 -04:00
Bridget Kromhout
f23272d154 Clarify kubens 2018-09-30 20:32:10 -04:00
Jerome Petazzoni
1020a8ff86 kube-ps1 is cool and we should mention it 2018-09-30 17:43:18 -05:00
Jerome Petazzoni
f01bc2a7a9 Fix overlapping slide number and pics 2018-09-29 18:54:00 -05:00
Bret Fisher
d01ae0ff39 initial Windows Container pack 2018-09-27 07:13:03 -04:00
Jerome Petazzoni
3eaa844c55 Add ENIX logo
Warning: do not merge this branch to your content, otherwise you
will get the ENIX logo in the top right of all your decks
2018-09-08 07:49:38 -05:00
Bret Fisher
cb407e75ab make CI/CD common for all courses 2018-04-25 14:27:32 -05:00
Bret Fisher
27d4612449 a note about ci/cd with docker 2018-04-25 14:26:02 -05:00
Bret Fisher
43ab5f79b6 a note about ci/cd with docker 2018-04-25 14:23:40 -05:00
55 changed files with 847 additions and 181 deletions

.gitignore

@@ -8,3 +8,15 @@ slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db


@@ -132,6 +132,9 @@ spec:
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
env:
- name: ES_JAVA_OPTS
value: "-Xms1g -Xmx1g"
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler


@@ -5,7 +5,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress:
- from:
- podSelector:


@@ -5,6 +5,6 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress: []


@@ -16,7 +16,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: webui
app: webui
ingress:
- from: []


@@ -6,7 +6,7 @@ metadata:
creationTimestamp: null
generation: 1
labels:
run: socat
app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
@@ -14,7 +14,7 @@ spec:
replicas: 1
selector:
matchLabels:
run: socat
app: socat
strategy:
rollingUpdate:
maxSurge: 1
@@ -24,7 +24,7 @@ spec:
metadata:
creationTimestamp: null
labels:
run: socat
app: socat
spec:
containers:
- args:
@@ -49,7 +49,7 @@ kind: Service
metadata:
creationTimestamp: null
labels:
run: socat
app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
@@ -60,7 +60,7 @@ spec:
protocol: TCP
targetPort: 80
selector:
run: socat
app: socat
sessionAffinity: None
type: NodePort
status:


@@ -123,7 +123,9 @@ _cmd_kube() {
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token &&
sudo kubeadm init --token \$(cat /tmp/token)
sudo kubeadm init \
--token \$(cat /tmp/token) \
--ignore-preflight-errors=SystemVerification
fi"
# Put kubeconfig in ubuntu's and docker's accounts
@@ -147,7 +149,10 @@ _cmd_kube() {
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token) &&
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
sudo kubeadm join \
--discovery-token-unsafe-skip-ca-verification \
--ignore-preflight-errors=SystemVerification \
--token \$TOKEN node1:6443
fi"
# Install kubectx and kubens
@@ -170,7 +175,8 @@ EOF"
# Install stern
pssh "
if [ ! -x /usr/local/bin/stern ]; then
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64 &&
##VERSION##
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64 &&
sudo chmod +x /usr/local/bin/stern &&
stern --completion bash | sudo tee /etc/bash_completion.d/stern
fi"
@@ -400,6 +406,28 @@ _cmd_test() {
test_tag
}
_cmd helmprom "Install Helm and Prometheus"
_cmd_helmprom() {
TAG=$1
need_tag
pssh "
if grep -q node1 /tmp/node; then
kubectl -n kube-system get serviceaccount helm ||
kubectl -n kube-system create serviceaccount helm
helm init --service-account helm
kubectl get clusterrolebinding helm-can-do-everything ||
kubectl create clusterrolebinding helm-can-do-everything \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:helm
helm upgrade --install prometheus stable/prometheus \
--namespace kube-system \
--set server.service.type=NodePort \
--set server.service.nodePort=30090 \
--set server.persistentVolume.enabled=false \
--set alertmanager.enabled=false
fi"
}
# Sometimes, weave fails to come up on some nodes.
# Symptom: the pods on a node are unreachable (they don't even ping).
# Remedy: wipe out Weave state and delete weave pod on that node.


@@ -201,5 +201,6 @@ aws_tag_instances() {
}
aws_get_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
##VERSION##
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 18.04 -t hvm:ebs -N -q
}


@@ -0,0 +1,25 @@
# Number of VMs per cluster
clustersize: 4
# Jinja2 template to use to generate ready-to-cut cards
cards_template: jerome.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -0,0 +1,131 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://qconsf2018.container.training/" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');
body, table {
margin: 0;
padding: 0;
line-height: 1.0em;
font-size: 15px;
font-family: 'Slabo 27px';
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
height: 31%;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}
div.back {
border: 1px dotted white;
}
div.back p {
margin: 0.5em 1em 0 1em;
}
p {
margin: 0.4em 0 0.8em 0;
}
img {
height: 5em;
float: right;
margin-right: 1em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
{% for x in range(pagesize) %}
<div class="back">
<br/>
<p>You got this card at the workshop "Getting Started With Kubernetes and Container Orchestration"
during QCON San Francisco (November 2018).</p>
<p>That workshop was a 1-day version of a longer curriculum.</p>
<p>If you liked that workshop, the instructor (Jérôme Petazzoni) can deliver it
(or the longer version) to your team or organization.</p>
<p>You can reach him at:</p>
<p>jerome.petazzoni@gmail.com</p>
<p>Thank you!</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
{% endfor %}
</body>
</html>


@@ -1,7 +1,7 @@
resource "openstack_compute_instance_v2" "machine" {
count = "${var.count}"
name = "${format("%s-%04d", "${var.prefix}", count.index+1)}"
image_name = "Ubuntu 16.04 (Xenial Xerus)"
image_name = "Ubuntu 16.04.5 (Xenial Xerus)"
flavor_name = "${var.flavor}"
security_groups = ["${openstack_networking_secgroup_v2.full_access.name}"]
key_pair = "${openstack_compute_keypair_v2.ssh_deploy_key.name}"

slides/_redirects

@@ -0,0 +1 @@
/ /kube-fullday.yml.html 200!


@@ -1,3 +1,6 @@
class: title
# Advanced Dockerfiles
![construction](images/title-advanced-dockerfiles.jpg)


@@ -1,3 +1,4 @@
class: title
# Getting inside a container


@@ -1,3 +1,4 @@
class: title
# Installing Docker


@@ -1,3 +1,4 @@
class: title
# Our training environment


@@ -0,0 +1,164 @@
class: title
# Windows Containers
![Container with Windows](images/windows-containers.jpg)
---
## Objectives
At the end of this section, you will be able to:
* Understand Windows Containers vs. Linux containers.
* Know which Docker for Windows features matter when choosing a container architecture.
* Run other container architectures via QEMU emulation.
---
## Are containers *just* for Linux?
Remember that a container must run on the kernel of the OS it's on.
- This is both a benefit and a limitation.
(It makes containers lightweight, but limits them to a specific kernel.)
- At its launch in 2013, Docker only supported Linux, and only on amd64 CPUs.
- Since then, many platforms and OS have been added.
(Windows, ARM, i386, IBM mainframes ... But no macOS or iOS yet!)
--
- Docker Desktop (macOS and Windows) can run containers for other architectures
(Check the docs to see how to [run a Raspberry Pi (ARM) or PPC container](https://docs.docker.com/docker-for-mac/multi-arch/)!)
---
## History of Windows containers
- Early 2016, Windows 10 gained support for running Windows binaries in containers.
- These are known as "Windows Containers"
- Win 10 expects Docker for Windows to be installed for full features
- These must run in Hyper-V mini-VMs with a Windows Server x64 kernel
- No "scratch" containers, so use "Core" and "Nano" Server OS base layers
- Since Hyper-V is required, Windows 10 Home won't work (yet...)
--
- Late 2016, Windows Server 2016 ships with native Docker support
- Installed via PowerShell, doesn't need Docker for Windows
- Can run native (without VM), or with [Hyper-V Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
---
## LCOW (Linux Containers On Windows)
While Docker on Windows is largely playing catch up with Docker on Linux,
it's moving fast; and this is one thing that you *cannot* do on Linux!
- LCOW came with the [2017 Fall Creators Update](https://blog.docker.com/2018/02/docker-for-windows-18-02-with-windows-10-fall-creators-update/).
- It can run Linux and Windows containers side-by-side on Win 10.
- It is no longer necessary to switch the Engine to "Linux Containers".
(In fact, if you want to run both Linux and Windows containers at the same time,
make sure that your Engine is set to "Windows Containers" mode!)
--
If you are a Docker for Windows user, start your engine and try this:
```bash
docker pull microsoft/nanoserver:1803
```
(Make sure to switch to "Windows Containers mode" if necessary.)
---
## Run Both Windows and Linux containers
- Run a Windows Nano Server (minimal CLI-only server)
```bash
docker run --rm -it microsoft/nanoserver:1803 powershell
Get-Process
exit
```
- Run busybox on Linux in LCOW
```bash
docker run --rm --platform linux busybox echo hello
```
(Although you will not be able to see them, this will create hidden
Nano and LinuxKit VMs in Hyper-V!)
---
## Did We Say Things Move Fast?
- Things keep improving.
- Now `--platform` defaults to `windows`, and some images support both:
- golang, mongo, python, redis, hello-world ... and more being added
- you should still use `--platform` with multi-OS images to be certain
- Windows Containers now support `localhost`-accessible containers (July 2018)
- Microsoft (April 2018) added Hyper-V support to Windows 10 Home ...
... so stay tuned for Docker support, maybe?!?
---
## Other Windows container options
Most "official" Docker images don't run on Windows yet.
Places to Look:
- Hub Official: https://hub.docker.com/u/winamd64/
- Microsoft: https://hub.docker.com/r/microsoft/
---
## SQL Server? Choice of Linux or Windows
- Microsoft [SQL Server for Linux 2017](https://hub.docker.com/r/microsoft/mssql-server-linux/) (amd64/linux)
- Microsoft [SQL Server Express 2017](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) (amd64/windows)
---
## Windows Tools and Tips
- PowerShell [Tab Completion: DockerCompletion](https://github.com/matt9ucci/DockerCompletion)
- Best Shell GUI: [Cmder.net](http://cmder.net/)
- Good Windows Container Blogs and How-To's
- Docker's DevRel [Elton Stoneman, Microsoft MVP](https://blog.sixeyed.com/)
- Docker Captain [Nicholas Dille](https://dille.name/blog/)
- Docker Captain [Stefan Scherer](https://stefanscherer.github.io/)

Binary file not shown (new image, 85 KiB).

File diff suppressed because one or more lines are too long (new image, 14 KiB).


@@ -0,0 +1 @@
<mxfile userAgent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" version="9.3.0" editor="www.draw.io" type="device"><diagram id="cb13f823-9e55-f92e-d17e-d0d789fca2e0" name="Page-1">7Vnfb9sgEP5rLG0vlQ3+kTyuabs9bFq1Vtr2iO2LjUqMhfGS9q8fxNgxJtXSqmmmrVEeuAMOuO874LCHF6vNR0Hq8gvPgXnIzzcevvAQCkKEPP338/tOk0RxpygEzU2jneKGPoBR+kbb0hwaq6HknEla28qMVxVk0tIRIfjabrbkzB61JgU4ipuMMFf7neay7LSzyN/pPwEtyn7kwDc1KcnuCsHbyoznIbzc/rrqFeltmfZNSXK+HqnwpYcXgnPZlVabBTDt295tXb+rR2qHeQuo5EEdDC6/CGuhn/J2YvK+d8Z2OaA7+B4+X5dUwk1NMl27VvArXSlXTEmBKi4pYwvOuNj2xTmB2TJT+kYKfgejmjibQbpUNQUjTWOMZ3xFM1MeXKOFJa/kFVlRpgn1jadccj0uF/RB1ZBhdCUYNqFQyWZxICRsHvVQMPhd8Rn4CqS4V01Mhx4pw+RgZuT1jhdJrytHnMC9khguFoPpHR6qYCDZDw920FlzcQfCwUg5q9bFrE3hzyClHaKf00Ex0PZrKxmtYOTPZ7h9QgJFf5TtJUEep7HaGvqaPtbwS0FnY4cSF7sA7cHuJaALHehEVbzhdghusQ1bGL4ibJEDW0ma8i3iDow4dELo3KNMQE6bN+QOQW44rk6BXOIec5C29A25g3ZLfELk+gv7CLqPl7cOcFDlH/S1XGOnr3v6kmfdGp+MQDBz/KkxYSQFdj7gPALh6mqhfoPLIXcygInD1QJ4KzKwbmKSiALk6IR3YRm5Pdrj9V4ngBFJf9mT2AeFGeGaUzW9AfVgutPO57aJbvKm1zgDmBpKpvSZGOqW7BjaMmNY9mFkCRyyXH+9+U/YEp2SLWge2SDHk9g/lC04nBgKJoZekC3IYcvt4vr/IEt8UrIk4VniR7MZwhGahz0OBnH8XOqgWXSmzQVJHG7N20SKjkckN4v+N4mU/GVEwoF9tDybOuEkkT8mWdy8vW32pH+qF60b6N6puksp421+wAPZyV6ykokX4+g1L4puYn3SiyJsqPyhyv5ZL/3cSoGRrkFQtUjQkekfM2h7wo2jNjll1E5eX5LnBm0wif7kiFcFN4G84NkdiIUy3ugh65rRTHmFVx6KmdTJoIrpuNCld0vtrO3HBElUViia924jh6kqDqXNTTvvq7jOL60k0agIo0WlRAZLbUHHtJob+2DUkteP7CL2Q7z1Pv7YI/oRtpFJuhnM3V0kjvfQM8BP30aUuPsW0hFj98EJX/4G</diagram></mxfile>

Binary file not shown (new image, 426 KiB).


@@ -1,26 +1,28 @@
- date: 2018-11-23
city: Copenhagen
country: dk
event: GOTO
title: Build Container Orchestration with Docker Swarm
speaker: bretfisher
attend: https://gotocph.com/2018/workshops/121
- date: 2018-11-08
city: San Francisco, CA
country: us
event: QCON
title: Introduction to Docker and Containers
speaker: jpetazzo
speaker: zeroasterisk
attend: https://qconsf.com/sf2018/workshop/introduction-docker-and-containers
- date: 2018-11-08
city: San Francisco, CA
country: us
event: QCON
title: Getting Started With Kubernetes and Container Orchestration
speaker: jpetazzo
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration-thursday-section
slides: http://qconsf2018.container.training/
- date: 2018-11-09
city: San Francisco, CA
country: us
event: QCON
title: Getting Started With Kubernetes and Container Orchestration
speaker: jpetazzo
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration-friday-section
slides: http://qconsf2018.container.training/
- date: 2018-10-31
city: London, UK
@@ -28,6 +30,7 @@
event: Velocity EU
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://velocityeu2018.container.training
attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/71149
- date: 2018-10-30
@@ -54,8 +57,9 @@
title: Kubernetes 101
speaker: bridgetkromhout
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102
slides: https://velny-k8s101-2018.container.training
- date: 2018-09-30
- date: 2018-10-01
city: New York, NY
country: us
event: Velocity
@@ -64,7 +68,7 @@
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875
slides: https://k8s2d.container.training
- date: 2018-09-30
- date: 2018-10-01
city: New York, NY
country: us
event: Velocity


@@ -42,6 +42,7 @@ chapters:
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Docker_Machine.md


@@ -42,6 +42,7 @@ chapters:
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Docker_Machine.md


@@ -538,7 +538,7 @@ It's important to note a couple of details in these flags ...
- But that we can't create things:
```
./kubectl run tryme --image=nginx
./kubectl create deployment --image=nginx
```
- Exit the container with `exit` or `^D`


@@ -256,19 +256,19 @@ The master node has [taints](https://kubernetes.io/docs/concepts/configuration/t
- Let's check the logs of all these `rng` pods
- All these pods have a `run=rng` label:
- All these pods have the label `app=rng`:
- the first pod, because that's what `kubectl run` does
- the first pod, because that's what `kubectl create deployment` does
- the other ones (in the daemon set), because we
*copied the spec from the first one*
- Therefore, we can query everybody's logs using that `run=rng` selector
- Therefore, we can query everybody's logs using that `app=rng` selector
.exercise[
- Check the logs of all the pods having a label `run=rng`:
- Check the logs of all the pods having a label `app=rng`:
```bash
kubectl logs -l run=rng --tail 1
kubectl logs -l app=rng --tail 1
```
]
@@ -279,11 +279,51 @@ It appears that *all the pods* are serving requests at the moment.
---
## Working around `kubectl logs` bugs
- That last command didn't show what we needed
- We mentioned earlier that regression affecting `kubectl logs` ...
(see [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for more details)
- Let's work around the issue by executing `kubectl logs` one pod at a time
- For convenience, we'll define a little shell function
---
## Our helper function
- The function `ktail` below will:
- list the names of all pods matching a selector
- display the last line of log for each pod
.exercise[
- Define `ktail`:
```bash
ktail () {
kubectl get pods -o name -l $1 |
xargs -rn1 kubectl logs --tail 1
}
```
- Try it:
```bash
ktail app=rng
```
]
---
## The magic of selectors
- The `rng` *service* is load balancing requests to a set of pods
- This set of pods is defined as "pods having the label `run=rng`"
- This set of pods is defined as "pods having the label `app=rng`"
.exercise[
@@ -310,7 +350,7 @@ to the associated load balancer.
--
- What would happen if we removed the `run=rng` label from that pod?
- What would happen if we removed the `app=rng` label from that pod?
--
@@ -322,7 +362,7 @@ to the associated load balancer.
--
- But but but ... Don't we have more than one pod with `run=rng` now?
- But but but ... Don't we have more than one pod with `app=rng` now?
--
@@ -345,7 +385,7 @@ to the associated load balancer.
<br/>(The second command doesn't require you to get the exact name of the replica set)
```bash
kubectl describe rs rng-yyyyyyyy
kubectl describe rs -l run=rng
kubectl describe rs -l app=rng
```
]
@@ -433,11 +473,11 @@ Of course, option 2 offers more learning opportunities. Right?
<!--
```wait Please edit the object below```
```keys /run: rng```
```keys /app: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
```keys /run: rng```
```keys /app: rng```
```keys ^J```
```keys oisactive: "yes"```
```keys ^[``` ]
@@ -452,7 +492,7 @@ Of course, option 2 offers more learning opportunities. Right?
<!--
```wait Please edit the object below```
```keys /run: rng```
```keys /app: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
@@ -468,9 +508,9 @@ Of course, option 2 offers more learning opportunities. Right?
.exercise[
- Check the most recent log line of all `run=rng` pods to confirm that exactly one per node is now active:
- Check the most recent log line of all `app=rng` pods to confirm that exactly one per node is now active:
```bash
kubectl logs -l run=rng --tail 1
kubectl logs -l app=rng --tail 1
```
]
@@ -496,14 +536,14 @@ The timestamps should give us a hint about how many pods are currently receiving
.exercise[
- List the pods with `run=rng` but without `isactive=yes`:
- List the pods with `app=rng` but without `isactive=yes`:
```bash
kubectl get pods -l run=rng,isactive!=yes
kubectl get pods -l app=rng,isactive!=yes
```
- Remove these pods:
```bash
kubectl delete pods -l run=rng,isactive!=yes
kubectl delete pods -l app=rng,isactive!=yes
```
]
@@ -581,7 +621,7 @@ Ding, dong, the deployment is dead! And the daemon set lives on.
labels:
isactive: "yes"
'
kubectl get pods -l run=rng -l controller-revision-hash -o name |
kubectl get pods -l app=rng -l controller-revision-hash -o name |
xargs kubectl patch -p "$PATCH"
```


@@ -392,9 +392,9 @@ This is normal: we haven't provided any ingress rule yet.
- Run all three deployments:
```bash
kubectl run cheddar --image=errm/cheese:cheddar
kubectl run stilton --image=errm/cheese:stilton
kubectl run wensleydale --image=errm/cheese:wensleydale
kubectl create deployment cheddar --image=errm/cheese:cheddar
kubectl create deployment stilton --image=errm/cheese:stilton
kubectl create deployment wensleydale --image=errm/cheese:wensleydale
```
- Create a service for each of them:


@@ -57,31 +57,49 @@ Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables`
- Since `ping` doesn't have anything to connect to, we'll have to run something else
- We could use the `nginx` official image, but ...
... we wouldn't be able to tell the backends from each other!
- We are going to use `jpetazzo/httpenv`, a tiny HTTP server written in Go
- `jpetazzo/httpenv` listens on port 8888
- It serves its environment variables in JSON format
- The environment variables will include `HOSTNAME`, which will be the pod name
(and therefore, will be different on each backend)
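To see why `HOSTNAME` is enough to tell backends apart, here is a purely local sketch (the JSON response and the pod name are made up for illustration; in the real exercise, the response comes from `curl`-ing a pod):

```bash
# Hypothetical response from one httpenv backend: a flat JSON object
# of its environment variables (the pod name below is invented).
RESPONSE='{"HOSTNAME":"httpenv-5f4d7b9c-xyz12","KUBERNETES_PORT":"tcp://10.96.0.1:443"}'

# Extract the HOSTNAME field with plain sed (no jq required):
echo "$RESPONSE" | sed 's/.*"HOSTNAME":"\([^"]*\)".*/\1/'
# -> httpenv-5f4d7b9c-xyz12
```

Since each pod gets a unique name, repeating the request against the service will show a different `HOSTNAME` depending on which backend answered.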
---
## Creating a deployment for our HTTP server
- We *could* do `kubectl run httpenv --image=jpetazzo/httpenv` ...
- But since `kubectl run` is being deprecated, let's see how to use `kubectl create` instead
.exercise[
- Start a bunch of HTTP servers:
```bash
kubectl run httpenv --image=jpetazzo/httpenv --replicas=10
```
- Watch them being started:
- In another window, watch the pods (to see when they will be created):
```bash
kubectl get pods -w
```
<!--
```wait httpenv-```
```keys ^C```
-->
<!-- ```keys ^C``` -->
- Create a deployment for this very lightweight HTTP server:
```bash
kubectl create deployment httpenv --image=jpetazzo/httpenv
```
- Scale it to 10 replicas:
```bash
kubectl scale deployment httpenv --replicas=10
```
]
The `jpetazzo/httpenv` image runs an HTTP server on port 8888.
<br/>
It serves its environment variables in JSON format.
The `-w` option "watches" events happening on the specified resources.
---
## Exposing our deployment
@@ -92,12 +110,12 @@ The `-w` option "watches" events happening on the specified resources.
- Expose the HTTP port of our server:
```bash
kubectl expose deploy/httpenv --port 8888
kubectl expose deployment httpenv --port 8888
```
- Look up which IP address was allocated:
```bash
kubectl get svc
kubectl get service
```
]
@@ -237,7 +255,7 @@ class: extra-details
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l run=httpenv -o wide
kubectl get pods -l app=httpenv -o wide
```
---

View File

@@ -173,6 +173,11 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
kubectl scale deploy/pingpong --replicas 8
```
- Note that this command does exactly the same thing:
```bash
kubectl scale deployment pingpong --replicas 8
```
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
@@ -290,6 +295,20 @@ Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple
---
## `kubectl logs -l ... --tail N`
- With Kubernetes 1.12 (and up to at least 1.12.2), the last command shows multiple lines
- This is a regression when `--tail` is used together with `-l`/`--selector`
- It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
- See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details
---
## Aren't we flooding 1.1.1.1?
- If you're wondering this, good question!

View File

@@ -1,15 +1,13 @@
# Links and resources
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Microsoft Learn](https://docs.microsoft.com/learn/)
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Local meetups](https://www.meetup.com/)
- [devopsdays](https://www.devopsdays.org/)

View File

@@ -14,11 +14,11 @@
- Download the `kubectl` binary from one of these links:
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl)
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.12.2/bin/linux/amd64/kubectl)
|
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl)
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.12.2/bin/darwin/amd64/kubectl)
|
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/windows/amd64/kubectl.exe)
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.12.2/bin/windows/amd64/kubectl.exe)
- On Linux and macOS, make the binary executable with `chmod +x kubectl`

View File

@@ -62,10 +62,12 @@ Exactly what we need!
- The following commands will install Stern on a Linux Intel 64 bit machine:
```bash
sudo curl -L -o /usr/local/bin/stern \
https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
```
<!-- ##VERSION## -->
---
## Using Stern
@@ -130,11 +132,13 @@ Exactly what we need!
- We can use that property to view the logs of all the pods created with `kubectl run`
- Similarly, everything created with `kubectl create deployment` has a label `app`
.exercise[
- View the logs for all the things started with `kubectl run`:
- View the logs for all the things started with `kubectl create deployment`:
```bash
stern -l run
stern -l app
```
<!--

View File

@@ -214,6 +214,10 @@ Note: it might take a minute or two for the app to be up and running.
kubens -
```
- On our clusters, `kubens` is called `kns` instead
(so that it's even fewer keystrokes to switch namespaces)
---
## `kubens` and `kubectx`
@@ -227,3 +231,21 @@ Note: it might take a minute or two for the app to be up and running.
- On our clusters, they are installed as `kns` and `kctx`
(for brevity and to avoid completion clashes between `kubectx` and `kubectl`)
---
## `kube-ps1`
- It's easy to lose track of our current cluster / context / namespace
- `kube-ps1` makes it easy to track these, by showing them in our shell prompt
- It's a simple shell script available from https://github.com/jonmosco/kube-ps1
- On our clusters, `kube-ps1` is installed and included in `PS1`:
```
[123.45.67.89] `(kubernetes-admin@kubernetes:default)` docker@node1 ~
```
(The highlighted part is `context:namespace`, managed by `kube-ps1`)
- Highly recommended if you work across multiple contexts or namespaces!
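For reference, a minimal setup sketch for `kube-ps1` in `~/.bashrc` (the clone path is an assumption; adjust it to wherever you put the repository):

```
# ~/.bashrc (sketch; the path is hypothetical)
source ~/kube-ps1/kube-ps1.sh
# Prepend the (context:namespace) indicator to the existing prompt:
PS1='$(kube_ps1) '$PS1
```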

View File

@@ -117,13 +117,13 @@ This is our game plan:
- Let's use the `nginx` image:
```bash
kubectl run testweb --image=nginx
kubectl create deployment testweb --image=nginx
```
- Find out the IP address of the pod with one of these two commands:
```bash
kubectl get pods -o wide -l run=testweb
IP=$(kubectl get pods -l run=testweb -o json | jq -r .items[0].status.podIP)
kubectl get pods -o wide -l app=testweb
IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP)
```
- Check that we can connect to the server:
@@ -138,7 +138,7 @@ The `curl` command should show us the "Welcome to nginx!" page.
## Adding a very restrictive network policy
- The policy will select pods with the label `run=testweb`
- The policy will select pods with the label `app=testweb`
- It will specify an empty list of ingress rules (matching nothing)
@@ -172,7 +172,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress: []
```
@@ -207,7 +207,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress:
- from:
- podSelector:
@@ -325,7 +325,7 @@ spec:
## Allowing traffic to `webui` pods
This policy selects all pods with label `run=webui`.
This policy selects all pods with label `app=webui`.
It allows traffic from any source.
@@ -339,7 +339,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: webui
app: webui
ingress:
- from: []
```

View File

@@ -74,7 +74,7 @@ In this part, we will:
- Create the registry service:
```bash
kubectl run registry --image=registry
kubectl create deployment registry --image=registry
```
- Expose it on a NodePort:
@@ -246,6 +246,27 @@ class: extra-details
---
## Catching up
- If you have problems deploying the registry ...
- Or building or pushing the images ...
- Don't worry: we provide pre-built images hosted on the Docker Hub!
- The images are named `dockercoins/worker:v0.1`, `dockercoins/rng:v0.1`, etc.
- To use them, just set the `REGISTRY` environment variable to `dockercoins`:
```bash
export REGISTRY=dockercoins
```
- Make sure to set the `TAG` to `v0.1`
(our repositories on the Docker Hub do not provide a `latest` tag)
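To sanity-check these variables before deploying, we can preview the image references that the deploy loop will use:

```bash
# Preview the image names resolved by the deploy loop
# when using the pre-built images:
REGISTRY=dockercoins
TAG=v0.1
for SERVICE in hasher rng webui worker; do
  echo "$REGISTRY/$SERVICE:$TAG"
done
# -> dockercoins/hasher:v0.1
#    dockercoins/rng:v0.1
#    dockercoins/webui:v0.1
#    dockercoins/worker:v0.1
```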
---
## Deploying all the things
- We can now deploy our code (as well as a redis instance)
@@ -254,13 +275,13 @@ class: extra-details
- Deploy `redis`:
```bash
kubectl run redis --image=redis
kubectl create deployment redis --image=redis
```
- Deploy everything else:
```bash
for SERVICE in hasher rng webui worker; do
kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG
kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
```

View File

@@ -22,14 +22,19 @@
.exercise[
- Let's start a replicated `nginx` deployment:
- Let's create a deployment running `nginx`:
```bash
kubectl run yanginx --image=nginx --replicas=3
kubectl create deployment yanginx --image=nginx
```
- Scale it to a few replicas:
```bash
kubectl scale deployment yanginx --replicas=3
```
- Once it's up, check the corresponding pods:
```bash
kubectl get pods -l run=yanginx -o yaml | head -n 25
kubectl get pods -l app=yanginx -o yaml | head -n 25
```
]
@@ -99,12 +104,12 @@ so the lines should not be indented (otherwise the indentation will insert space
- Delete the Deployment:
```bash
kubectl delete deployment -l run=yanginx --cascade=false
kubectl delete deployment -l app=yanginx --cascade=false
```
- Delete the Replica Set:
```bash
kubectl delete replicaset -l run=yanginx --cascade=false
kubectl delete replicaset -l app=yanginx --cascade=false
```
- Check that the pods are still here:
@@ -126,7 +131,7 @@ class: extra-details
- If we change the labels on a dependent, so that it's not selected anymore
(e.g. change the `run: yanginx` in the pods of the previous example)
(e.g. change the `app: yanginx` in the pods of the previous example)
- If a deployment tool that we're using does these things for us
@@ -174,4 +179,4 @@ class: extra-details
]
As always, the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) has useful extra information and pointers.

View File

@@ -139,7 +139,7 @@
- To install Portworx, we need to go to https://install.portworx.com/
- This website will ask us a bunch of questoins about our cluster
- This website will ask us a bunch of questions about our cluster
- Then, it will generate a YAML file that we should apply to our cluster

View File

@@ -151,7 +151,7 @@ scrape_configs:
## Running Prometheus on our cluster
We need to:
We would need to:
- Run the Prometheus server in a pod
@@ -171,19 +171,21 @@ We need to:
## Helm Charts to the rescue
- To make our lives easier, we are going to use a Helm Chart
- To make our lives easier, we could use a Helm Chart
- The Helm Chart will take care of all the steps explained above
- The Helm Chart would take care of all the steps explained above
(including some extra features that we don't need, but won't hurt)
- In fact, Prometheus has been pre-installed on our clusters with Helm
(it was pre-installed so that it would be populated with metrics by now)
---
## Step 1: install Helm
## Step 1: if we had to install Helm
- If we already installed Helm earlier, these commands won't break anything
.exercise[
- Note that if Helm is already installed, these commands won't break anything
- Install Tiller (Helm's server-side component) on our cluster:
```bash
@@ -196,27 +198,17 @@ We need to:
--clusterrole=cluster-admin --serviceaccount=kube-system:default
```
]
---
## Step 2: install Prometheus
## Step 2: if we had to install Prometheus
- Skip this if we already installed Prometheus earlier
(in doubt, check with `helm list`)
.exercise[
- Install Prometheus on our cluster:
- This is how we would use Helm to deploy Prometheus on the cluster:
```bash
helm install stable/prometheus \
--set server.service.type=NodePort \
--set server.persistentVolume.enabled=false
```
]
The provided flags:
- expose the server web UI (and API) on a NodePort
@@ -235,11 +227,13 @@ The provided flags:
- Figure out the NodePort that was allocated to the Prometheus server:
```bash
kubectl get svc | grep prometheus-server
kubectl get svc -n kube-system | grep prometheus-server
```
- With your browser, connect to that port
(spoiler alert: it should be 30090)
]
---

View File

@@ -4,7 +4,9 @@
--
- We used `kubeadm` on freshly installed VM instances running Ubuntu 16.04 LTS
<!-- ##VERSION## -->
- We used `kubeadm` on freshly installed VM instances running Ubuntu 18.04 LTS
1. Install Docker

View File

@@ -1,9 +1,10 @@
## Versions installed
- Kubernetes 1.12.0
- Docker Engine 18.06.1-ce
- Kubernetes 1.12.2
- Docker Engine 18.09.0
- Docker Compose 1.21.1
<!-- ##VERSION## -->
.exercise[

View File

@@ -77,6 +77,18 @@ And *then* it is time to look at orchestration!
---
## Relevant sections
- [Namespaces](kube-selfpaced.yml.html#toc-namespaces)
- [Network Policies](kube-selfpaced.yml.html#toc-network-policies)
- [Role-Based Access Control](kube-selfpaced.yml.html#toc-authentication-and-authorization)
(covers permissions model, user and service accounts management ...)
---
## Stateful services (databases etc.)
- As a first step, it is wiser to keep stateful services *outside* of the cluster
@@ -113,6 +125,13 @@ And *then* it is time to look at orchestration!
- what do we gain by deploying this stateful service on Kubernetes?
- Relevant sections:
[Volumes](kube-selfpaced.yml.html#toc-volumes)
|
[Stateful Sets](kube-selfpaced.yml.html#toc-stateful-sets)
|
[Persistent Volumes](kube-selfpaced.yml.html#toc-highly-available-persistent-volumes)
---
## HTTP traffic handling
@@ -130,7 +149,7 @@ And *then* it is time to look at orchestration!
- URI mapping
- and much more!
- Check out e.g. [Træfik](https://docs.traefik.io/user-guide/kubernetes/)
- [This section](kube-selfpaced.yml.html#toc-exposing-http-services-with-ingress-resources) shows how to expose multiple HTTP apps using [Træfik](https://docs.traefik.io/user-guide/kubernetes/)
---
@@ -146,6 +165,8 @@ And *then* it is time to look at orchestration!
(e.g. with an agent bind-mounting the log directory)
- [This section](kube-selfpaced.yml.html#toc-centralized-logging) shows how to do that with [Fluentd](https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd) and the EFK stack
---
## Metrics
@@ -180,6 +201,8 @@ And *then* it is time to look at orchestration!
(It's the container equivalent of the password on a post-it note on your screen)
- [This section](kube-selfpaced.yml.html#toc-managing-configuration) shows how to manage app config with config maps (among others)
---
## Managing stack deployments

View File

@@ -1,14 +1,15 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
Getting Started With
Kubernetes and
Container Orchestration
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
chat: "Gitter ([Thursday](https://gitter.im/jpetazzo/workshop-20181108-sanfrancisco)|[Friday](https://gitter.im/jpetazzo/workshop-20181109-sanfrancisco))"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
slides: http://qconsf2018.container.training/
exclude:
- self-paced
@@ -33,30 +34,30 @@ chapters:
- k8s/kubectlrun.md
- k8s/kubectlexpose.md
- - k8s/ourapponkube.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
# - k8s/kubectlproxy.md
# - k8s/localkubeconfig.md
# - k8s/accessinternal.md
- k8s/dashboard.md
- k8s/kubectlscale.md
- - k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/daemonset.md
- - k8s/rollout.md
# - k8s/healthchecks.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- - k8s/helm.md
- k8s/namespaces.md
- k8s/netpol.md
- k8s/authn-authz.md
- - k8s/ingress.md
- k8s/gitworkflows.md
#- - k8s/helm.md
# - k8s/namespaces.md
# - k8s/netpol.md
# - k8s/authn-authz.md
#- - k8s/ingress.md
# - k8s/gitworkflows.md
- k8s/prometheus.md
- - k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
- k8s/configuration.md
- - k8s/owners-and-dependents.md
- k8s/statefulsets.md
- k8s/portworx.md
#- - k8s/volumes.md
# - k8s/build-with-docker.md
# - k8s/build-with-kaniko.md
# - k8s/configuration.md
#- - k8s/owners-and-dependents.md
# - k8s/statefulsets.md
# - k8s/portworx.md
- - k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

View File

@@ -1,6 +1,7 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"

View File

@@ -5,6 +5,7 @@ title: |
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/

62
slides/kube-twodays.yml Normal file
View File

@@ -0,0 +1,62 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composescale.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- - k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- k8s/kubectlrun.md
- k8s/kubectlexpose.md
- - k8s/ourapponkube.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/dashboard.md
- k8s/kubectlscale.md
- - k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- - k8s/helm.md
- k8s/namespaces.md
- k8s/netpol.md
- k8s/authn-authz.md
- - k8s/ingress.md
- k8s/gitworkflows.md
- k8s/prometheus.md
- - k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
- k8s/configuration.md
- - k8s/owners-and-dependents.md
- k8s/statefulsets.md
- k8s/portworx.md
- - k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

View File

@@ -1,26 +1,11 @@
## Intros
- This slide should be customized by the tutorial instructor(s).
- Hello! I'm
Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- Hello! We are:
- The workshop will run from 9am to 4pm
- .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)
- .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)
<!-- .dummy[
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[⛵] Jérémy ([@jeremygarrouste](twitter.com/jeremygarrouste), Inpiwee)
] -->
- The workshop will run from ...
- There will be a lunch break at ...
- There will be a lunch break from noon to 1pm
(And coffee breaks!)

17
slides/override.css Normal file
View File

@@ -0,0 +1,17 @@
.remark-slide-content:not(.pic) {
background-repeat: no-repeat;
background-position: 99% 1%;
background-size: 8%;
background-image: url(https://enix.io/static/img/logos/logo-domain-cropped.png);
}
div.extra-details:not(.pic) {
background-image: url("images/extra-details.png"), url(https://enix.io/static/img/logos/logo-domain-cropped.png);
background-position: 0.5% 1%, 99% 1%;
background-size: 4%, 8%;
}
.remark-slide-content:not(.pic) div.remark-slide-number {
top: 16px;
right: 112px
}

View File

@@ -176,6 +176,12 @@ class: extra-details
---
class: pic
![Diagram showing the 5 containers of the applications](images/dockercoins-diagram.svg)
---
## Our application at work
- On the left-hand side, the "rainbow strip" shows the container names

View File

@@ -9,3 +9,20 @@ class: title, in-person
That's all, folks! <br/> Questions?
![end](images/end.jpg)
---
## Final words
- You can find more content on http://container.training/
(More slides, videos, dates of upcoming workshops and tutorials...)
- If you want me to train your team:
[contact me!](https://docs.google.com/forms/d/e/1FAIpQLScm2evHMvRU8C5ZK59l8FGsLY_Kkup9P_GHgjfByUMyMpMmDA/viewform)
(This workshop is also available as longer training sessions, covering advanced topics)
- The organizers of this conference would like you to rate this workshop!
.footnote[*Thank you!*]

View File

@@ -41,6 +41,7 @@ chapters:
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/cicd.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
- swarm/healthchecks.md

View File

@@ -41,6 +41,7 @@ chapters:
#- swarm/btp-manual.md
#- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/cicd.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
#- swarm/healthchecks.md

View File

@@ -42,6 +42,7 @@ chapters:
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/cicd.md
- |
name: part-2

37
slides/swarm/cicd.md Normal file
View File

@@ -0,0 +1,37 @@
name: cicd
# CI/CD for Docker and orchestration
A quick note about continuous integration and deployment
- This lab won't have you building out CI/CD pipelines
- We're cheating a bit by building images on the server hosts rather than in a CI tool
- Docker and orchestration work with all the major CI and deployment tools
---
## CI/CD general process
- Have your CI tool build your images, run tests *in them*, then push them to a registry
- If you do security scanning, scan the images after tests pass but before pushing
- Optionally, have CI trigger continuous deployment when build/test/push succeeds
- The CD tool can SSH into nodes, or use the Docker CLI against a remote Engine
- If supported, it can use the Docker Engine TCP API (the Swarm API is built in)
- Docker KBase [Continuous Integration with Docker Hub](https://success.docker.com/article/continuous-integration-with-docker-hub)
- Docker KBase [Building a Docker Secure Supply Chain](https://success.docker.com/article/secure-supply-chain)
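As a sketch of that process, a minimal pipeline could look like this (hypothetical GitLab CI syntax; the test command is a placeholder, and none of this is part of the lab setup):

```yaml
# .gitlab-ci.yml (illustrative sketch, not used in this workshop)
stages: [build, test, push]
build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
test:
  stage: test
  script:
    # Run the test suite *inside* the image we just built
    - docker run --rm $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA make test
push:
  stage: push
  script:
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
```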
---
class: pic
![CI-CD with Docker](images/ci-cd-with-docker.png)

View File

@@ -131,42 +131,50 @@ class: self-paced
- 5 managers = 2 failures (or 1 failure during 1 maintenance)
- 7 managers and more = now you might be overdoing it a little bit
- 7 managers and more = now you might be overdoing it for most designs
.footnote[
see [Docker's admin guide](https://docs.docker.com/engine/swarm/admin_guide/#add-manager-nodes-for-fault-tolerance)
on node failure and datacenter redundancy
]
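The failure-tolerance numbers above (3 managers = 1 failure, 5 = 2) come straight from Raft's majority rule, which we can compute:

```bash
# Raft quorum math: a cluster of N managers needs a majority
# (N/2 + 1, with integer division) to stay writable.
for N in 1 3 5 7; do
  QUORUM=$(( N / 2 + 1 ))
  echo "$N managers: quorum=$QUORUM, tolerates $(( N - QUORUM )) failure(s)"
done
# -> 1 managers: quorum=1, tolerates 0 failure(s)
#    3 managers: quorum=2, tolerates 1 failure(s)
#    5 managers: quorum=3, tolerates 2 failure(s)
#    7 managers: quorum=4, tolerates 3 failure(s)
```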
---
## Why not have *all* nodes be managers?
- Intuitively, it's harder to reach consensus in larger groups
- With Raft, writes have to be sent to all nodes (and acknowledged by a majority)
- More nodes = more network traffic
- Thus, it's harder to reach consensus in larger groups
- Bigger network = more latency
- Only one manager is Leader (writable), so more managers ≠ more capacity
- Managers should be &#60; 10ms latency from each other
- These design parameters lead us to recommended designs
---
## What would McGyver do?
- If some of your machines are more than 10ms away from each other,
<br/>
try to break them down in multiple clusters
(keeping internal latency low)
- Keep managers in one region (multi-zone/datacenter/rack)
- Groups of up to 9 nodes: all of them are managers
- Groups of 3 or 5 nodes: all are managers. Beyond 5, separate out managers and workers
- Groups of 10 nodes and up: pick 5 "stable" nodes to be managers
<br/>
(Cloud pro-tip: use separate auto-scaling groups for managers and workers)
- Groups of 10-100 nodes: pick 5 "stable" nodes to be managers
- Groups of more than 100 nodes: watch your managers' CPU and RAM
- Groups of more than 1000 nodes:
- 16GB memory or more, 4 CPU's or more, SSD's for Raft I/O
- otherwise, break down your nodes in multiple smaller clusters
- if you can afford to have fast, stable managers, add more of them
- otherwise, break down your nodes in multiple clusters
.footnote[
Cloud pro-tip: use separate auto-scaling groups for managers and workers
See docker's "[Running Docker at scale](http://success.docker.com/article/running-docker-ee-at-scale)" document
]
---
## What's the upper limit?
@@ -181,11 +189,11 @@ class: self-paced
- Testing by the community: [4700 heterogeneous nodes all over the 'net](https://sematext.com/blog/2016/11/14/docker-swarm-lessons-from-swarm3k/)
- it just works
- it just works, assuming they have the resources
- more nodes require more CPU; more containers require more RAM
- more nodes require manager CPU and networking; more containers require RAM
- scheduling of large jobs (70000 containers) is slow, though (working on it!)
- scheduling of large jobs (70,000 containers) is slow, though ([getting better](https://github.com/moby/moby/pull/37372)!)
---

View File

@@ -4,6 +4,7 @@
<title>@@TITLE@@</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<link rel="stylesheet" href="workshop.css">
<link rel="stylesheet" href="override.css">
</head>
<body>
<!--