Compare commits

...

34 Commits

Author SHA1 Message Date
Jerome Petazzoni  a72148d51a  fix-redirects.sh: adding forced redirect  2020-04-07 16:58:03 -05:00
Jerome Petazzoni  2d2246db4e  Fix FIXME :)  2019-05-28 09:49:17 -05:00
Jerome Petazzoni  5872100101  Merge branch 'master' into wwrk-2019-05  2019-05-28 05:43:22 -05:00
Jerome Petazzoni  8b98058f22  Add note about Helm first deploy fail  2019-05-27 15:51:57 -05:00
Jerome Petazzoni  33f5a6b2ed  merge  2019-05-26 14:17:03 -05:00
Jerome Petazzoni  56f2083a2b  Fix Ingress section  2019-05-26 14:16:25 -05:00
Jerome Petazzoni  48bd2a98bd  merge  2019-05-25 21:45:06 -05:00
Jerome Petazzoni  918fa2091d  Merge branch 'master' into wwrk-2019-05  2019-05-25 21:43:27 -05:00
Jerome Petazzoni  e56ab48070  Fixup title  2019-05-25 21:18:09 -05:00
Jerome Petazzoni  50ad11a697  Merge branch 'master' into wwrk-2019-05  2019-05-25 21:14:21 -05:00
Jerome Petazzoni  5447b187ac  Update final slides  2019-05-25 20:51:53 -05:00
Jerome Petazzoni  d2a91c27c1  Break down each day into 4 parts  2019-05-25 20:44:50 -05:00
Jerome Petazzoni  a8605a9316  Adapt ingress section to wek8s  2019-05-25 20:31:06 -05:00
Jerome Petazzoni  25e2f8eca8  merge  2019-05-25 19:44:59 -05:00
Jerome Petazzoni  a6bd6a94e8  Add extended chapter on Helm in wek8s context  2019-05-25 17:21:38 -05:00
Jerome Petazzoni  8650209381  Merge branch 'master' into wwrk-2019-05  2019-05-25 13:56:32 -05:00
Jerome Petazzoni  b0aeac555d  Add a short blurb about wek8s and security  2019-05-24 22:13:52 -05:00
Jerome Petazzoni  f3b9340528  Add note about Slack channel  2019-05-24 22:02:11 -05:00
Jerome Petazzoni  927484bcbc  Merge branch 'master' into wwrk-2019-05  2019-05-24 21:40:32 -05:00
Jerome Petazzoni  8d0c568f5a  Merge branch 'master' into wwrk-2019-05  2019-05-24 20:28:46 -05:00
Jerome Petazzoni  53c466e6ed  Fix AWS role name  2019-05-24 20:21:03 -05:00
Jerome Petazzoni  9b130861ea  Add #connecting-to-wek8s anchor  2019-05-24 19:49:39 -05:00
Jerome Petazzoni  b28ed0bbfc  Merge branch 'master' into wwrk-2019-05  2019-05-24 19:43:26 -05:00
Jerome Petazzoni  8672a11c3b  Add wek8s basic info + show how to connect  2019-05-24 18:12:45 -05:00
Jerome Petazzoni  65647d5882  Merge branch 'master' into wwrk-2019-05  2019-05-24 16:21:17 -05:00
Jerome Petazzoni  1bc7415c54  Improve transition between Docker and Kubernetes section  2019-05-24 16:12:26 -05:00
Jerome Petazzoni  2fdede72f1  Merge branch 'master' into wwrk-2019-05  2019-05-24 15:44:05 -05:00
Jerome Petazzoni  70c91b121c  Add Slack URL  2019-05-24 15:37:41 -05:00
Jerome Petazzoni  a7833a75b4  Setup redirect  2019-05-24 12:49:44 -05:00
Jerome Petazzoni  31e23477d6  Prepare cards and scripts  2019-05-24 12:12:54 -05:00
Jerome Petazzoni  4ba9d5e82e  Merge branch 'master' into wwrk-2019-05  2019-05-23 23:15:44 -05:00
Jerome Petazzoni  7ddda3456c  Remove Kustomize (we'll put more emphasis on Helm)  2019-05-23 22:39:04 -05:00
Jerome Petazzoni  747f7a07d4  Merge branch 'master' into wwrk-2019-05  2019-05-23 22:35:35 -05:00
Jerome Petazzoni  faf7e1af42  WWRK NYC  2019-05-23 17:36:19 -05:00
35 changed files with 1562 additions and 1134 deletions

k8s/malicious-pod.yaml (new file, 21 lines)

@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: malicious
spec:
volumes:
- name: slash
hostPath:
path: /
containers:
- image: alpine
name: alpine
securityContext:
privileged: true
command:
- sleep
- "1000000000"
volumeMounts:
- name: slash
mountPath: /hostfs
restartPolicy: Never
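The manifest above shows why privileged pods and hostPath mounts of `/` are dangerous. A hedged sketch of how such a pod could be used to take over a node (assuming a cluster where you are allowed to create privileged pods, which is exactly what admission control should prevent):

```shell
# Create the malicious pod (path assumed to match the file above).
kubectl apply -f k8s/malicious-pod.yaml

# The node's root filesystem is mounted at /hostfs inside the container;
# chroot-ing into it yields a root shell on the host itself.
kubectl exec -ti malicious -- chroot /hostfs sh
```

This is the kind of escalation that PodSecurityPolicies (at the time of these commits) were meant to block.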



@@ -1,28 +0,0 @@
# Number of VMs per cluster
clustersize: 1
# The hostname of each node will be clusterprefix + a number
clusterprefix: dmuc
# Jinja2 template to use to generate ready-to-cut cards
cards_template: admin.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -1,28 +0,0 @@
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: kubenet
# Jinja2 template to use to generate ready-to-cut cards
cards_template: admin.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -1,28 +0,0 @@
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: kuberouter
# Jinja2 template to use to generate ready-to-cut cards
cards_template: admin.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -1,28 +0,0 @@
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: test
# Jinja2 template to use to generate ready-to-cut cards
cards_template: admin.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -7,7 +7,7 @@ clustersize: 1
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
cards_template: jerome.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter


@@ -1,29 +0,0 @@
# Number of VMs per cluster
clustersize: 1
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: enix.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -1,30 +0,0 @@
# customize your cluster size, your cards template, and the versions
# Number of VMs per cluster
clustersize: 5
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -1,31 +0,0 @@
# 3 nodes for k8s 101 workshops
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: kube101.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -1,30 +0,0 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.22.0
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -1,124 +0,0 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://FIXME.container.training" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine virtuelle" -%}
{%- set this_or_each = "cette" -%}
{%- set plural = "" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "chaque" -%}
{%- set plural = "s" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 15px;
font-family: 'Slabo 27px';
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.3em;
}
img.enix {
height: 4.0em;
margin-top: 0.4em;
}
img.kube {
height: 4.2em;
margin-top: 1.7em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Voici les informations permettant de se connecter à un
des environnements utilisés pour cette formation.
Vous pouvez vous connecter à {{ this_or_each }} machine
virtuelle avec n'importe quel client SSH.
</p>
<p>
<img class="enix" src="https://enix.io/static/img/logos/logo-domain-cropped.png" />
<table>
<tr><td>cluster:</td></tr>
<tr><td class="logpass">{{ clusterprefix }}</td></tr>
<tr><td>identifiant:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>mot de passe:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Adresse{{ plural }} IP :
<!--<img class="kube" src="{{ image_src }}" />-->
<table>
{% for node in cluster %}
<tr><td>{{ clusterprefix }}{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>Le support de formation est à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>
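Card templates like the one above are rendered with variables such as `clusters`, `clusterprefix`, and `docker_user_password` (supplied, per the config comments, by scripts/ips-txt-to-html.py). A minimal sketch of that rendering step, using the Jinja2 library with made-up data and a stripped-down inline template that mirrors the loop structure above:

```python
# Sketch of rendering a cards-style Jinja2 template.
# Variable names mirror the templates in this diff; the data is made up.
from jinja2 import Template

template = Template(
    "{% for cluster in clusters %}"
    "{% for node in cluster %}"
    "{{ clusterprefix }}{{ loop.index }}: {{ node }}\n"
    "{% endfor %}"
    "password: {{ docker_user_password }}\n"
    "{% endfor %}"
)

print(template.render(
    clusters=[["10.0.0.1", "10.0.0.2"]],  # one cluster of two nodes
    clusterprefix="node",
    docker_user_password="training",
))
```

The real pipeline feeds the rendered HTML to pdfkit (or a browser) to produce ready-to-cut cards, per the comments in the config files above.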


@@ -1,106 +0,0 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "orchestration workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_swarm -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>


@@ -1,121 +0,0 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://FIXME.container.training" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine virtuelle" -%}
{%- set this_or_each = "cette" -%}
{%- set plural = "" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "chaque" -%}
{%- set plural = "s" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 15px;
font-family: 'Slabo 27px';
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.3em;
}
img.enix {
height: 4.0em;
margin-top: 0.4em;
}
img.kube {
height: 4.2em;
margin-top: 1.7em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Voici les informations permettant de se connecter à votre
{{ cluster_or_machine }} pour cette formation.
Vous pouvez vous connecter à {{ this_or_each }} machine virtuelle
avec n'importe quel client SSH.
</p>
<p>
<img class="enix" src="https://enix.io/static/img/logos/logo-domain-cropped.png" />
<table>
<tr><td>identifiant:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>mot de passe:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Adresse{{ plural }} IP :
<!--<img class="kube" src="{{ image_src }}" />-->
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>Le support de formation est à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>


@@ -1,15 +1,14 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://qconuk2019.container.training/" -%}
{%- set url = "http://wwrk-2019-05.container.training/" -%}
{%- set pagesize = 9 -%}
{%- set workshop_name = "training session" -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set cluster_or_machine = "Docker machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set cluster_or_machine = "Kubernetes cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
@@ -109,26 +108,6 @@ img {
<center>{{ url }}</center>
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
{% for x in range(pagesize) %}
<div class="back">
<br/>
<p>You got this at the workshop
"Getting Started With Kubernetes and Container Orchestration"
during QCON London (March 2019).</p>
<p>If you liked that workshop,
I can train your team or organization
on Docker, container, and Kubernetes,
with curriculums of 1 to 5 days.
</p>
<p>Interested? Contact me at:</p>
<p>jerome.petazzoni@gmail.com</p>
<p>Thank you!</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
{% endfor %}
</body>
</html>


@@ -1,106 +0,0 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>


@@ -2,3 +2,4 @@
#/ /kube-halfday.yml.html 200
#/ /kube-fullday.yml.html 200
#/ /kube-twodays.yml.html 200
/ /wwrk.yml.html 200!


@@ -1,4 +1,4 @@
# Healthchecks
# Healthchecks (extra material)
- Kubernetes provides two kinds of healthchecks: liveness and readiness
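The two kinds of healthchecks mentioned above can be sketched as probes in a pod spec (a hedged illustration; the container name, image, and paths are made up, not taken from this diff):

```yaml
# Illustrative liveness and readiness probes on a container.
containers:
- name: web
  image: nginx
  livenessProbe:          # container is restarted if this fails
    httpGet:
      path: /healthz
      port: 80
  readinessProbe:         # pod is removed from Service endpoints if this fails
    httpGet:
      path: /ready
      port: 80
```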


@@ -1,44 +0,0 @@
title: |
Kubernetes
for Admins and Ops
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- static-pods-exercise
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/dmuc.md
- - k8s/multinode.md
- k8s/cni.md
- k8s/apilb.md
#FIXME: check le talk de Laurent Corbes pour voir s'il y a d'autres choses utiles à mentionner
#BONUS: intégration CoreDNS pour résoudre les noms des clusters des voisins
- - k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/staticpods.md
- k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/bootstrap.md
- - k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- - k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md


@@ -1,66 +0,0 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
#- logistics.md
# Bridget-specific; others use logistics.md
- logistics-bridget.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- - k8s/kubectlrun.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/kubectlproxy.md
#- k8s/localkubeconfig.md
#- k8s/accessinternal.md
- - k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
- - k8s/logs-cli.md
# Bridget hasn't added EFK yet
#- k8s/logs-centralized.md
- k8s/namespaces.md
- k8s/helm.md
- k8s/create-chart.md
#- k8s/kustomize.md
#- k8s/netpol.md
- k8s/whatsnext.md
# - k8s/links.md
# Bridget-specific
- k8s/links-bridget.md
- shared/thankyou.md


@@ -1,35 +1,35 @@
## Intros
- This slide should be customized by the tutorial instructor(s).
- Hello! We are:
- .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)
- .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)
<!-- .dummy[
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
- .emoji[🚁] Alexandre ([@alexbuisine](https://twitter.com/alexbuisine), Enix SAS)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[⛵] Jérémy ([@jeremygarrouste](twitter.com/jeremygarrouste), Inpiwee)
- .emoji[🎧] Romain ([@rdegez](https://twitter.com/rdegez), Enix SAS)
] -->
- The workshop will run from ...
- There will be a lunch break at ...
(And coffee breaks!)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Tiny Shell Script LLC)
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*
- Live feedback, questions, help: @@CHAT@@
(Let's make sure right now that we all are on that channel!)
---
## Logistics
Training schedule for the 3 days:
|||
|-------------------|--------------------|
| 9:00am | Start of training
| 10:30am → 11:00am | Break
| 12:30pm → 1:30pm | Lunch
| 3:00pm → 3:30pm | Break
| 5:00pm | End of training
- Lunch will be catered
- During the breaks, the instructors will be available for Q&A
- Make sure to hydrate / caffeinate / stretch out your limbs :)


@@ -14,7 +14,7 @@ done
```
```bash
# FIXME find a way to reset the cluster, maybe?
## FIXME find a way to reset the cluster, maybe?
```
-->


@@ -8,14 +8,8 @@ class: title, self-paced
class: title, in-person
@@TITLE@@<br/></br>
@@TITLE@@
.footnote[
**Be kind to the WiFi!**<br/>
<!-- *Use the 5G network.* -->
*Don't use your hotspot.*<br/>
*Don't stream videos or download big files during the workshop[.](https://www.youtube.com/watch?v=h16zyxiwDLY)*<br/>
*Thank you!*
**Slides: @@SLIDES@@**
]
**Slides
[](https://www.youtube.com/watch?v=h16zyxiwDLY)
@@SLIDES@@**


@@ -1,65 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- snap
- btp-auto
- benchmarking
- elk-manual
- prom-manual
chapters:
- shared/title.md
- logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- swarm/versions.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
- swarm/healthchecks.md
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/ipsec.md
- swarm/swarmtools.md
- swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- - swarm/logging.md
- swarm/metrics.md
- swarm/gui.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md


@@ -1,64 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- snap
- btp-manual
- benchmarking
- elk-manual
- prom-manual
chapters:
- shared/title.md
- logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- swarm/versions.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
#- swarm/hostingregistry.md
#- swarm/testingregistry.md
#- swarm/btp-manual.md
#- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
#- swarm/healthchecks.md
- - swarm/operatingswarm.md
#- swarm/netshoot.md
#- swarm/ipsec.md
#- swarm/swarmtools.md
- swarm/security.md
#- swarm/secrets.md
#- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- swarm/logging.md
- swarm/metrics.md
#- swarm/stateful.md
#- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md


@@ -1,73 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
- btp-auto
chapters:
- shared/title.md
#- shared/logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- swarm/versions.md
- |
name: part-1
class: title, self-paced
Part 1
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- |
name: part-2
class: title, self-paced
Part 2
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/swarmnbt.md
- swarm/ipsec.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
- swarm/healthchecks.md
- swarm/nodeinfo.md
- swarm/swarmtools.md
- - swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- swarm/logging.md
- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md


@@ -1,72 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
- btp-auto
chapters:
- shared/title.md
#- shared/logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- swarm/versions.md
- |
name: part-1
class: title, self-paced
Part 1
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- |
name: part-2
class: title, self-paced
Part 2
- - swarm/operatingswarm.md
#- swarm/netshoot.md
#- swarm/swarmnbt.md
- swarm/ipsec.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
#- swarm/healthchecks.md
- swarm/nodeinfo.md
- swarm/swarmtools.md
- - swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
#- swarm/logging.md
#- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md

slides/wek8s/connecting.md Normal file

@@ -0,0 +1,234 @@
name: connecting-to-wek8s
## Connecting to wek8s
- Let's see what connecting to one of our wek8s clusters entails
- We need an account on https://we.okta.com/
(with access to "Dev AWS" environment)
- We need an account on https://quay.io/
(with access to images
[wework/okta-aws](https://quay.io/repository/wework/okta-aws)
and
[wework/wek8s-tools](https://quay.io/repository/wework/wek8s-tools))
- We will obtain AWS credentials through Okta
- Then, we will use these AWS credentials to obtain Kubernetes credentials
(because the wek8s cluster we will connect to is using AWS EKS under the hood)
.warning[These instructions are up-to-date as of May 2019, but may change in the future.]
---
## Pulling okta-aws and wek8s-tools images
- If we are already logged into quay.io, we can skip that step
<br/>
(the images will be pulled automatically when we need them)
- ... But this makes it easier to troubleshoot registry issues
<br/>
(if we get an error *now*, we know where it's coming from)
.exercise[
- Log into quay.io:
```bash
docker login quay.io
```
- Pull both images:
```bash
docker pull quay.io/wework/okta-aws
docker pull quay.io/wework/wek8s-tools:0.3.2
```
]
---
## Obtaining AWS credentials
- We will use okta-aws to obtain our AWS credentials
- For convenience, we will use a pre-built okta-aws container
.warning[If we already have credentials in `~/.aws`, this may overwrite them!]
.exercise[
- Run okta-aws to obtain AWS credentials and store them in `~/.aws`:
```bash
docker run -it --rm -v ~/.aws:/package/.aws quay.io/wework/okta-aws
```
- Select `Dev` environment at the first prompt
- Enter Okta email, password, and MFA code
]
---
## Verifying account and role
The last lines of output of okta-aws will confirm which account we logged into.
For the `Dev` account, this should look like this:
```
Account: 681484253316
Role: AWS-Tech-User
Profile: saml
```
... And a few files have been updated in `~/.aws`, including `~/.aws/credentials`.
Q: How did the container update `~/.aws` on our machine?
A: Because we mounted that directory into the container with `-v`.
---
## Running wek8s-tools
- Two more steps are necessary to obtain Kubernetes cluster credentials
- For simplicity, we are going to use a "Swiss Army Knife" image, wek8s-tools
- This image contains tools to obtain the Kubernetes credentials + many others
(including kubectl, helm, ...)
.exercise[
- Start a container using the wek8s-tools image:
```bash
docker run --rm -v ~/.aws:/root/.aws -it quay.io/wework/wek8s-tools:0.3.2 sh
```
]
*We are using the `-v` option again, to mount our fresh AWS credentials into this container.*
---
## Generating kubeconfig
- The next step is to generate a kubeconfig file with:
- the address of the wek8s cluster we want to use
- instructions to use the AWS IAM authenticator plugin
- This is done with the `deploy_helper` binary
.exercise[
- Generate the kubeconfig file:
```bash
deploy_helper fetch_reqs --env wek8s-phoenix --namespace k8s-training
```
]
We now have a `~/.kube/config` file (in the container).
---
## Using the cluster
- Let's get a shell on this cluster!
.exercise[
- Run a one-time Pod with an Alpine container:
```bash
kubectl -n k8s-training run --restart=Never --rm -it test-$RANDOM --image=alpine
```
- Find out the node's IP address:
```bash
apk add curl
curl https://canihazip.com/s
```
- Exit when done
]
---
## Using local tools
.warning[Do not run the commands in this slide! This is not an exercise ☺]
- What if we wanted to use our local tools, instead of the wek8s-tools image?
- First, we would need to install the AWS IAM authenticator plugin
(see [AWS EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) for instructions)
- Then, we would need to get the kubeconfig file:
```bash
docker run --rm -v ~/.aws:/root/.aws -v ~/.kube-wek8s:/root/.kube \
quay.io/wework/wek8s-tools:0.3.2 \
deploy_helper fetch_reqs --env wek8s-phoenix --namespace k8s-training
```
- This would generate the file `~/.kube-wek8s/config`
---
## Permission issues
.warning[Do not run the commands in this slide! This is not an exercise ☺]
- If you use Docker Desktop (on Windows or macOS), you should be set
- Otherwise (on Linux or Docker Toolbox) you will need to fix permissions:
```bash
sudo chown -R $USER ~/.kube-wek8s
```
---
## Connecting to wek8s with local tools
.warning[Do not run the commands in this slide! This is not an exercise ☺]
- We would need to tell kubectl (and other tools) to use the file we generated:
```bash
export KUBECONFIG=~/.kube-wek8s/config
```
- Then we could run a few simple commands to test the connection:
```bash
kubectl version
kubectl get svc -n default kubernetes
```
---
## Deploying DockerCoins on wek8s
.warning[Do not run the commands in this slide! This is not an exercise ☺]
- We could deploy DockerCoins like this:
```bash
git clone https://github.com/jpetazzo/kubercoins
kubectl -n k8s-training apply -f kubercoins
```
- To access the web UI, we would need an Ingress
(more on that later)
- Rather than applying YAML directly, we would use Helm Charts
(more on that later)

slides/wek8s/helm.md Normal file

@@ -0,0 +1,667 @@
# Managing stacks with Helm
- We created our first resources with `kubectl run`, `kubectl expose` ...
- We have also created resources by loading YAML files with `kubectl apply -f`
- For larger stacks, managing thousands of lines of YAML is unreasonable
- These YAML bundles need to be customized with variable parameters
(E.g.: number of replicas, image version to use ...)
- It would be nice to have an organized, versioned collection of bundles
- It would be nice to be able to upgrade/rollback these bundles carefully
- [Helm](https://helm.sh/) is an open source project offering all these things!
---
## Helm concepts
- `helm` is a CLI tool
- `tiller` is its companion server-side component
- A "chart" is an archive containing templatized YAML bundles
- Charts are versioned
- Charts can be stored on private or public repositories
---
## Helm 2 / Helm 3
- Helm 3.0.0-alpha.1 was released May 15th, 2019
- Helm 2 is still the stable version (and will be for a while)
- Helm 3 removes Tiller (which simplifies permission management)
- There are many other smaller changes
(see [Helm release changelog](https://github.com/helm/helm/releases/tag/v3.0.0-alpha.1) for the full list!)
---
## Installing Helm
- If the `helm` CLI is not installed in your environment, install it
.exercise[
- Check if `helm` is installed:
```bash
helm
```
- If it's not installed, run the following command:
```bash
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
```
]
---
## Installing Tiller
- Tiller is composed of a *service* and a *deployment* in the `kube-system` namespace
- They can be managed (installed, upgraded...) with the `helm` CLI
.exercise[
- Deploy Tiller:
```bash
helm init
```
]
If Tiller was already installed, don't worry: this won't break it.
At the end of the install process, you will see:
```
Happy Helming!
```
---
## Fix account permissions
- Helm permission model requires us to tweak permissions
- In a more realistic deployment, you might create per-user or per-team
service accounts, roles, and role bindings
.exercise[
- Grant `cluster-admin` role to `kube-system:default` service account:
```bash
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin --serviceaccount=kube-system:default
```
]
(Defining the exact roles and permissions on your cluster requires
a deeper knowledge of Kubernetes' RBAC model. The command above is
fine for personal and development clusters.)
---
## Repositories
- A repository is a remote server hosting a number of charts
(any HTTP server can be a chart repository)
- Repositories are identified by a local name
- We can add as many repositories as we need
.exercise[
- List the repositories currently available:
```bash
helm repo list
```
]
When we install Helm, it automatically configures a repository called `stable`.
<br/>
(Think of it like "Debian stable", for instance.)
---
## View available charts
- We can view available charts with `helm search` (and an optional keyword)
.exercise[
- View all available charts:
```bash
helm search
```
- View charts related to `prometheus`:
```bash
helm search prometheus
```
]
---
## Viewing installed charts
- Helm keeps track of what we've installed
.exercise[
- List installed Helm charts:
```bash
helm list
```
]
---
## Adding the WeWork repository
- The generic syntax is `helm repo add <nickname> <url>`
- We have a number of charts in Artifactory
- Since Artifactory is password-protected, we need to add `--username`
.exercise[
- Add the WeWork repository:
```bash
helm repo add wework https://wework.jfrog.io/wework/helm/ --username=`jdoe`
```
- When prompted, provide your password
]
---
## Looking at the WeWork repository
- Let's have a look at the charts in this repository
.exercise[
- Search the repository name:
```bash
helm search wework
```
]
---
## What's next?
- We could *install an existing application that has already been packaged*:
`helm install wework/moonbase-climate-control`
(sorry folks, that one doesn't exist [yet](https://disruption.medium.com/))
- We could *create a chart from scratch*:
`helm create my-wonderful-new-app`
(this creates a directory named `my-wonderful-new-app` with a barebones chart)
- We could do something in between: *install an app using a generic chart*
(let's do that!)
---
## The wek8s generic service chart
- There is ~~an app~~ a chart for that!
.exercise[
- Look for `generic service`:
```bash
helm search generic service
```
]
- The one that we want is `wework/wek8s-generic-service`
---
## Inspecting a chart
- Before installing a chart, we can check its description, README, etc.
.exercise[
- Look at all the available information:
```bash
helm inspect wework/wek8s-generic-service
```
(that's way too much information!)
- Look at the chart's description:
```bash
helm inspect chart wework/wek8s-generic-service
```
]
---
## Using the wek8s generic chart
- We are going to download the chart's `values.yaml`
(a file showing all the possible parameters for that chart)
- We are going to set the parameters we need, and discard the ones we don't
- Then we will install DockerCoins using that chart
---
## Dumping the chart's values
- Let's download the chart's `values.yaml`
- Then we will edit it to suit our needs
.exercise[
- Dump the chart's values to a YAML file:
```bash
helm inspect values wework/wek8s-generic-service > values-rng.yaml
```
]
---
## Editing the chart's values
Edit `values-rng.yaml` and keep only this:
```yaml
appName: rng
replicaCount: 1
image:
repository: dockercoins/rng
tag: v0.1
service:
enabled: true
ports:
- port: 80
containerPort: 80
```
---
## Deploying the chart
- We can now install a *release* of the generic service chart using these values
- We will do that in a separate namespace (to avoid colliding with other resources)
.exercise[
- Switch to the `happyhelming` namespace:
```bash
kubectl config set-context --current --namespace=happyhelming
```
- Install the `rng` release:
```bash
helm install wework/wek8s-generic-service --name=rng --values=values-rng.yaml
```
]
Note: Helm will automatically create the namespace if it doesn't exist.
---
## Testing what we did
- If we were directly on the cluster, we could curl the service's ClusterIP
- But we're *not* on the cluster, so we will use `kubectl port-forward`
.exercise[
- Create a port forwarding to access port 80 of Deployment `rng`:
```bash
kubectl port-forward deploy/rng 1234:80 &
```
- Confirm that RNG is running correctly:
```bash
curl localhost:1234
```
- Terminate the port forwarder:
```bash
kill %1
```
]
---
## Deploying the other services
- We need to create the values files for the other services:
- `values-hasher.yaml` → almost identical (just change name and image)
- `values-webui.yaml` → same
- `values-redis.yaml` → same, but adjust port number
- `values-worker.yaml` → same, but we can even remove the `service` part
- Then create all these services, using these YAML files
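For instance, a `values-redis.yaml` could look like this (a sketch following the structure of `values-rng.yaml`; the Redis image name and tag below are assumptions, use whatever image the app actually expects):
```yaml
appName: redis
replicaCount: 1
image:
  repository: redis    # assumption: the official Redis image
  tag: "5"             # assumption: pin a specific major version
service:
  enabled: true
  ports:
  - port: 6379         # Redis listens on 6379 instead of 80
    containerPort: 6379
```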
---
# Exercise — deploying an app with the wek8s Generic chart
.exercise[
- Create the 4 YAML files mentioned previously
- Install 4 Helm releases (one for each YAML file)
- What do we see in the logs of the worker?
]
---
## Troubleshooting
- We should see errors like this:
```
Error -2 connecting to redis:6379. Name does not resolve.
```
- Why?
--
- Hint: `kubectl get services`
--
- Our services are named `redis-service`, `rng-service`, etc.
- Our code connects to `redis`, `rng`, etc.
- We need to drop the extra `-service`
---
## Editing a chart
- To edit a chart, we can push a new version to the repository
- But there is a much simpler and faster way
- We can use Helm to download the chart locally, make changes, apply them
- This also works when creating / developing a chart
(we don't need to push it to the repository to try it out)
---
## Before diving in ...
.warning[Before editing or forking a generic chart like this one ...]
- Have a conversation with the authors of the chart
- Perhaps they can suggest other options, or adapt the chart
- We will edit the chart here, as a learning experience
- It may or may not be the right course of action in the general case!
---
## Download the chart
.exercise[
- Fetch the generic service chart to have a local, editable copy:
```bash
helm fetch wework/wek8s-generic-service --untar
```
- This creates the directory `wek8s-generic-service`
- Have a look!
]
---
## Chart structure
Here is the structure of the directory containing our chart:
```
$ tree wek8s-generic-service/
wek8s-generic-service/
├── Chart.yaml
├── migrations
│ └── ...
├── README.md
├── templates
│ ├── _appContainer.yaml
│ ├── configmaps.yaml
│ ├── ... more YAML ...
│ ├── ... also, some .tpl files ...
│ ├── NOTES.txt
│ ├── ... even more YAML ...
│ └── ... and even more .tpl files
└── values.yaml
```
---
## Explanations
- `Chart.yaml` → chart short description and metadata
- `README.md` → longer description
- `values.yaml` → the file we downloaded earlier
- `templates/` → files in this directory will be *rendered* when the chart is installed
- after rendering, each file is treated as a Kubernetes resource YAML file
- ... except the ones starting with underscore (these will contain templates)
- ... and except `NOTES.txt`, which is shown at the end of the deployment
Note: file extension doesn't really matter; the leading underscore does.
---
## Templates details
- Helm uses an extension of the Go template package
- This means that the files in `templates/` will be peppered with `{{ ... }}`
- For instance, this is an excerpt of `wek8s-generic-service/templates/service.yaml`:
```yaml
metadata:
name: {{ .Values.appName }}-service
labels:
app: {{ .Values.appName }}
```
- `{{ .Values.appName }}` will be replaced by the `appName` field from the values YAML
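For instance, with `appName: rng` in the values file, the excerpt above renders to:
```yaml
metadata:
  name: rng-service
  labels:
    app: rng
```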
- For more details about the templating system, see the [Helm docs](https://helm.sh/docs/chart_template_guide/)
---
## Editing the templates
- Let's remove the trailing `-service` in the service definition
- Then, we will roll out that change
.exercise[
- Edit the file `wek8s-generic-service/templates/service.yaml`
- Remove the `-service` suffix
- Roll out the change to the `redis` release:
```bash
helm upgrade redis wek8s-generic-service
```
]
- We used `upgrade` instead of `install` this time
- We didn't need to pass again the YAML file with the values
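If we prefer a non-interactive edit, the same change can be made with `sed`. Here is a sketch, demonstrated on a scratch copy of the line (in the real chart, point the `sed` command at `wek8s-generic-service/templates/service.yaml` instead):

```shell
# Demonstrate the edit on a scratch file; in the real chart, run the
# same sed command against wek8s-generic-service/templates/service.yaml
mkdir -p /tmp/chart-demo
echo '  name: {{ .Values.appName }}-service' > /tmp/chart-demo/service.yaml
# Strip the "-service" suffix from the templated Service name
sed -i 's/{{ \.Values\.appName }}-service/{{ .Values.appName }}/' /tmp/chart-demo/service.yaml
cat /tmp/chart-demo/service.yaml
# →   name: {{ .Values.appName }}
```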
---
## Viewing our changes
- Normally, we "fixed" the `redis` service
- The `worker` should now be able to contact `redis`
.exercise[
- Check the logs of the `worker`:
```bash
kubectl logs deploy/worker --tail 10 --follow
```
]
- Alright, now we need to fix `rng`, `hasher`, and `webui` the same way
---
## Fixing the other services
- We don't need to download the chart or edit it again
- We can use the same chart for the other services
.exercise[
- Upgrade `rng`, `hasher`, and `webui` with the updated chart
- Confirm that the `worker` works correctly
(it should say, "X units of work done ...")
]
---
## Extra steps
(If time permits ...)
.exercise[
- Setup a `port-forward` to view the web UI
- Scale the `worker` by updating the `replicaCount`
]
---
## Exposing a web application
- How do we expose the web UI with a proper URL?
- We will need to use an *Ingress*
- More on that later!
---
## If we wanted to submit our changes
- The source of the wek8s-generic-service chart is in the following GitHub repository:
https://github.com/WeConnect/WeK8s-charts
(along with many other charts)
---
class: extra-details
## Good to know ...
- If we don't specify `--name` when running `helm install`, a name is generated
(like `wishful-elephant` or `nihilist-alligator`)
- If we want to install-or-upgrade, we can use `helm upgrade --install`:
`helm upgrade <name> <chart> --install --values=...`
- If we only want to set a few values, we can use `--set`, for instance:
`helm upgrade redis wek8s-generic-service --values=... --set=replicaCount=5`
(we can use `--set` multiple times if needed)
.warning[If we specify `--set` without `--values`, it erases all the other values!]
---
class: extra-details
## If the first deployment fails
- If the first deployment of a release fails, it will be in an inconsistent state
- Further attempts to `helm install` or `helm upgrade` will fail
- To fix the problem, two solutions:
- `helm delete --purge` that release
- `helm upgrade --force` that release
- This only applies to the first deployment
(i.e., Helm knows how to recover if a subsequent deployment fails)

slides/wek8s/ingress.md Normal file

@@ -0,0 +1,342 @@
# Exposing HTTP services with Ingress resources
- *Services* give us a way to access a pod or a set of pods
- Services can be exposed to the outside world:
- with type `NodePort` (on a port >30000)
- with type `LoadBalancer` (allocating an external load balancer)
- What about HTTP services?
- how can we expose `webui`, `rng`, `hasher`?
- the Kubernetes dashboard?
- a new version of `webui`?
---
## Exposing HTTP services
- If we use `NodePort` services, clients have to specify port numbers
(i.e. http://xxxxx:31234 instead of just http://xxxxx)
- `LoadBalancer` services are nice, but:
- they are not available in all environments
- they often carry an additional cost (e.g. they provision an ELB)
- they require one extra step for DNS integration
<br/>
(waiting for the `LoadBalancer` to be provisioned; then adding it to DNS)
---
## Ingress resources
- Kubernetes API resource (`kubectl get ingress`/`ingresses`/`ing`)
- Designed to expose HTTP services
- Basic features:
- load balancing
- SSL termination
- name-based virtual hosting
- Can also route to different services depending on:
- URI path (e.g. `/api` → `api-service`, `/static` → `assets-service`)
- Client headers, including cookies (for A/B testing, canary deployment...)
- and more!
---
## Principle of operation
- Step 1: deploy an *ingress controller*
- ingress controller = load balancer + control loop
- the control loop watches over ingress resources, and configures the LB accordingly
- Step 2: setup DNS
- associate DNS entries with the load balancer address
- Step 3: create *ingress resources*
- the ingress controller picks up these resources and configures the LB
- Step 4: profit!
---
## Ingress in action
- We will deploy the Traefik ingress controller
- this is an arbitrary choice
- maybe motivated by the fact that Traefik releases are named after cheeses
- For DNS, we will use [nip.io](http://nip.io/)
- `*.1.2.3.4.nip.io` resolves to `1.2.3.4`
- We will create ingress resources for various HTTP services
---
## Running Traefik on our cluster
- We provide a YAML file (`k8s/traefik.yaml`) which is essentially the sum of:
- [Traefik's Daemon Set resources](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-ds.yaml) (patched with `hostNetwork` and tolerations)
- [Traefik's RBAC rules](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml) allowing it to watch necessary API objects
.exercise[
- Apply the YAML:
```bash
kubectl apply -f ~/container.training/k8s/traefik.yaml
```
]
---
## Checking that Traefik runs correctly
- If Traefik started correctly, we now have a web server listening on each node
.exercise[
- Check that Traefik is serving 80/tcp:
```bash
curl localhost
```
]
We should get a `404 page not found` error.
This is normal: we haven't provided any ingress rule yet.
---
## Setting up DNS
- To make our lives easier, we will use [nip.io](http://nip.io)
- Check out `http://webui.A.B.C.D.nip.io`
(replacing A.B.C.D with the IP address of `node1`)
- We should get the same `404 page not found` error
(meaning that our DNS is "set up properly", so to speak!)
---
## Traefik web UI
- Traefik provides a web dashboard
- With the current install method, it's listening on port 8080
.exercise[
- Go to `http://node1:8080` (replacing `node1` with its IP address)
<!-- ```open http://node1:8080``` -->
]
---
## Ingress with the wek8s generic chart
- The wek8s generic chart that we used earlier can generate an Ingress for us
- All we have to do is add a few lines to the YAML file (`values-webui.yaml`)
- ... And update the Helm release `webui` to use these new values
.exercise[
- Add the following snippet to the `values-webui.yaml` file:
```yaml
ingress:
enabled: true
hosts:
- webui.`A.B.C.D`.nip.io
```
(Where `A.B.C.D` is the IP address of node1, that we used earlier)
]
---
## Update the Helm release
- Now, we need to use these new values
- We will use `helm upgrade` to run the templates and apply them to the cluster
.exercise[
- Update the Helm release:
```bash
helm upgrade webui wek8s-generic-service --values=values-webui.yaml
```
]
We should see an Ingress resource appear in the output.
---
## Something's wrong ...
- In the Traefik web UI, at this point, we may see an error
(the backend is highlighted in red)
- What's happening?
- Let's try and find out!
---
## Inspecting the Ingress
- Let's look at the Ingress generated by the generic service chart
.exercise[
- Dump the YAML for the Ingress:
```bash
kubectl get ingress webui-ingress -o yaml
```
]
- Can you see the problem?
--
- It still refers to `webui-service` instead of `webui`!
---
## Fixing the Ingress
- We need to edit the chart (again)
.exercise[
- Find the file defining the Ingress resource
- Make the necessary changes
- Upgrade the `webui` release with the new chart
]
---
## Access the service
- Go back to the browser tab where we were loading webui.A.B.C.D.nip.io
- Hit reload ...
- ... And we should see the web UI for DockerCoins!
---
## Creating an Ingress by hand
If we need to, here is a minimal host-based ingress resource:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: webui
spec:
rules:
- host: webui.`A.B.C.D`.nip.io
http:
paths:
- path: /
backend:
serviceName: webui
servicePort: 80
```
---
## Using multiple ingress controllers
- You can have multiple ingress controllers active simultaneously
(e.g. Traefik and NGINX)
- You can even have multiple instances of the same controller
(e.g. one for internal, another for external traffic)
- The `kubernetes.io/ingress.class` annotation can be used to tell which one to use
- It's OK if multiple ingress controllers configure the same resource
(it just means that the service will be accessible through multiple paths)
---
## Using Ingress on wek8s
- Each wek8s cluster has a wildcard domain mapped to it
- For instance, phoenix has `*.phoenix.dev.wwrk.co`
- To be reachable from outside, we must use the ingress class `nginx-external`
- This is done through an annotation, like this:
```yaml
ingress:
enabled: true
hosts:
- webui.phoenix.dev.wwrk.co
annotations:
kubernetes.io/ingress.class: nginx-external
```
---
## Extra goodies
- The wek8s generic service chart can also provide:
- TLS
- HTTP Basic Auth
- GRPC with HTTP/2
- For more details, we can look at:
- the `values.yaml` file
- the chart's README
- the file `templates/ingress.yaml` in the chart

slides/wek8s/security.md Normal file

@@ -0,0 +1,19 @@
## Security in the context of wek8s
- The wek8s dev clusters have permissive policies
(so that we can easily experiment and try things)
- This means that we need to be particularly careful about unknown sources
- Check the provenance of images, YAML bundles, Helm Charts, etc.:
- does it come from the website / documentation / repository of a trusted vendor?
- is it maintained; how often does it get updates?
- For images:
- is the source (Dockerfile or otherwise) available?
- are they checked by an automated vulnerability scanner?


@@ -0,0 +1,3 @@
## Get back to node1
- From now on, it is recommended to log back into `node1`


@@ -0,0 +1,15 @@
## Differences with wek8s
- We have `cluster-admin` (=`root`) privileges on our clusters
(we can read/write everything)
- This is typical when working on "personal" clusters
(used by a single person, or a very small team)
- But *not* when working on production clusters
- On wek8s clusters, we can't access most *global* resources
(resources that don't belong to namespaces; e.g. nodes)

slides/wek8s/whatsnext.md Normal file

@@ -0,0 +1,110 @@
# Next steps
*Alright, how do I get started and containerize my apps?*
--
Suggested containerization checklist:
.checklist[
- write a Dockerfile for one service in one app
- write Dockerfiles for the other (buildable) services
- write a Compose file for that whole app
- make sure that devs are empowered to run the app in containers
- set up automated builds of container images from the code repo
- set up a CI pipeline using these container images
- set up a CD pipeline (for staging/QA) using these images
]
And *then* it is time to look at orchestration!
---
## Local workflow
- Make sure that you have a local Kubernetes cluster
(Docker Desktop, Minikube, microk8s ...)
- Use that cluster early and often
- Regularly try to deploy on a "real" cluster
---
## Isolation
- We did *not* talk about Role-Based Access Control (RBAC)
- We did *not* talk about Network Policies
- We did *not* talk about Pod Security Policies
- We did *not* talk about resource limits, Limit Ranges, Resource Quotas
- You don't need these features when getting started
(your friendly s19e team is here for that)
---
## Stateful services (databases etc.)
- As a first step, it is wiser to keep stateful services *outside* of the cluster
- Exposing them to pods can be done with multiple solutions:
- `ExternalName` services
<br/>
(`redis.blue.svc.cluster.local` will be a `CNAME` record)
- `ClusterIP` services with explicit `Endpoints`
<br/>
(instead of letting Kubernetes generate the endpoints from a selector)
- Ambassador services
<br/>
(application-level proxies that can provide credentials injection and more)
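As an illustration of the first option, here is a sketch of an `ExternalName` service (the external hostname is a made-up example):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: blue
spec:
  type: ExternalName
  # redis.blue.svc.cluster.local becomes a CNAME for this hostname
  externalName: redis.prod.example.com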
---
## Stateful services (second take)
- If we want to host stateful services on Kubernetes, we can use:
- a storage provider
- persistent volumes, persistent volume claims
- stateful sets
- Good questions to ask:
- what's the *operational cost* of running this service ourselves?
- what do we gain by deploying this stateful service on Kubernetes?
- Relevant sections:
[Volumes](kube-selfpaced.yml.html#toc-volumes)
|
[Stateful Sets](kube-selfpaced.yml.html#toc-stateful-sets)
|
[Persistent Volumes](kube-selfpaced.yml.html#toc-highly-available-persistent-volumes)
- Excellent [blog post](http://www.databasesoup.com/2018/07/should-i-run-postgres-on-kubernetes.html) tackling the question: “Should I run Postgres on Kubernetes?”
---
## Developer experience
*We've put this last, but it's pretty important!*
- How do you on-board a new developer?
- What do they need to install to get a dev stack?
- How does a code change make it from dev to prod?
- How does someone add a component to a stack?
*Mind the gap!*

slides/wwrk.yml Normal file

@@ -0,0 +1,116 @@
title: |
Containers,
Docker,
Kubernetes,
WeK8S
chat: "Slack (channel [#k8s-training-may2019](https://wework.slack.com/messages/GJN9CBZLH/))"
gitrepo: github.com/jpetazzo/container.training
slides: http://wwrk-2019-05.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - containers/Training_Environment.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
- - containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- - containers/Multi_Stage_Builds.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
- containers/Naming_And_Inspecting.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- - containers/Application_Configuration.md
- containers/Orchestration_Overview.md
- |
# From Docker to Kubernetes
- We are now going to run a demo app made of multiple containers
- We will start by running it on one node, with Compose
- Then we will deploy that application on a Kubernetes cluster
- We will identify performance bottlenecks and scale out that app
(and learn Kubernetes in the process)
---
## Our new environment
- Since a 1-node cluster isn't fun, we will switch to a new environment!
- This environment is a 4-node Kubernetes cluster
- Also, from now on, demos and labs are identified with these gray boxes
.exercise[
- You should run this command:
```bash
echo Hello world
```
]
- shared/connecting.md
- k8s/versions-k8s.md
- - shared/sampleapp.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- - wek8s/visibility.md
- k8s/kubectlrun.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
- - k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/namespaces.md
- - k8s/rollout.md
- k8s/setup-k8s.md
- wek8s/connecting.md
- k8s/logs-cli.md
- k8s/dashboard.md
- wek8s/security.md
- - k8s/localkubeconfig.md
- k8s/accessinternal.md
- wek8s/helm.md
- wek8s/ingress.md
- wek8s/switchback.md
- - k8s/prometheus.md
- k8s/volumes.md
- k8s/configuration.md
- k8s/healthchecks.md
- - wek8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md