Compare commits

..

4 Commits

Author | SHA1 | Message | Date
Jérôme Petazzoni | 58e76b81f2 | Fixing missing stack name | 2017-06-12 17:41:55 +02:00
Jérôme Petazzoni | a1adc2292f | Merge pull request #81 from soulshake/devopscon-compose-version (Fix docker-compose version) | 2017-06-11 23:01:33 +02:00
AJ Bowen | 8d6421ddea | Fix docker-compose version | 2017-06-11 23:00:22 +02:00
Jérôme Petazzoni | 63b6db4843 | First round of updates for devopscon | 2017-06-10 21:40:05 +02:00
347 changed files with 10044 additions and 37122 deletions

5
.gitignore vendored

@@ -6,8 +6,3 @@ prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules


@@ -1,24 +0,0 @@
Checklist to use when delivering a workshop
Authored by Jérôme; additions by Bridget
- [ ] Create event-named branch (such as `conferenceYYYY`) in the [main repo](https://github.com/jpetazzo/container.training/)
- [ ] Create file `slides/_redirects` containing a link to the desired tutorial: `/ /kube-halfday.yml.html 200`
- [ ] Push local branch to GitHub and merge into main repo
- [ ] [Netlify setup](https://app.netlify.com/sites/container-training/settings/domain): create subdomain for event-named branch
- [ ] Add link to event-named branch to [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] Update the slides that say which versions we are using for [kube](https://github.com/jpetazzo/container.training/blob/master/slides/kube/versions-k8s.md) or [swarm](https://github.com/jpetazzo/container.training/blob/master/slides/swarm/versions.md) workshops
- [ ] Update the version of Compose and Machine in [settings](https://github.com/jpetazzo/container.training/tree/master/prepare-vms/settings)
- [ ] (optional) Create chatroom
- [ ] (optional) Set chatroom in YML ([kube half-day example](https://github.com/jpetazzo/container.training/blob/master/slides/kube-halfday.yml#L6-L8)) and deploy
- [ ] (optional) Put chat link on [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] How many VMs do we need? Check with event organizers ahead of time
- [ ] Provision VMs (slightly more than we think we'll need)
- [ ] Change password on presenter's VMs (to forestall any hijinx)
- [ ] Onsite: walk the room to count seats, check power supplies, lectern, A/V setup
- [ ] Print cards
- [ ] Cut cards
- [ ] Last-minute merge from master
- [ ] Check that all looks good
- [ ] DELIVER!
- [ ] Shut down VMs
- [ ] Update index.html to remove chat link and move session to past things

19
LICENSE

@@ -1,12 +1,13 @@
The code in this repository is licensed under the Apache License
Version 2.0. You may obtain a copy of this license at:
Copyright 2015 Jérôme Petazzoni
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
The instructions and slides in this repository (e.g. the files
with extension .md and .yml in the "slides" subdirectory) are
under the Creative Commons Attribution 4.0 International Public
License. You may obtain a copy of this license at:
https://creativecommons.org/licenses/by/4.0/legalcode
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

295
README.md

@@ -1,134 +1,137 @@
# Container Training
# Docker Orchestration Workshop
This repository (formerly known as `orchestration-workshop`)
contains materials (slides, scripts, demo app, and other
code samples) used for various workshops, tutorials, and
training sessions around the themes of Docker, containers,
and orchestration.
For the moment, it includes:
- Introduction to Docker and Containers,
- Container Orchestration with Docker Swarm,
- Container Orchestration with Kubernetes.
These materials have been designed around the following
principles:
- they assume very little prior knowledge of Docker,
containers, or a particular programming language;
- they can be used in a classroom setup (with an
instructor), or self-paced at home;
- they are hands-on, meaning that they contain lots
of examples and exercises that you can easily
reproduce;
- they progressively introduce concepts in chapters
that build on top of each other.
If you're looking for the materials, you can stop reading
right now, and hop to http://container.training/, which
hosts all the available slide decks.
The rest of this document explains how this repository
is structured, and how to use it to deliver (or create)
your own tutorials.
This is the material (slides, scripts, demo app, and other
code samples) for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and lots of others)
non-stop since June 2015.
## Why a single repository?
## Content
All these materials have been gathered in a single repository
because they have a few things in common:
- some [common slides](slides/common/) that are re-used
(and updated) identically between different decks;
- a [build system](slides/) generating HTML slides from
Markdown source files;
- a [semi-automated test harness](slides/autopilot/) to check
that the exercises and examples provided work properly;
- a [PhantomJS script](slides/slidechecker.js) to check
that the slides look good and don't have formatting issues;
- [deployment scripts](prepare-vms/) to start training
VMs in bulk;
- a fancy pipeline powered by
[Netlify](https://www.netlify.com/) and continuously
deploying `master` to http://container.training/.
- Chapter 1: Getting Started: running apps with docker-compose
- Chapter 2: Scaling out with Swarm Mode
- Chapter 3: Operating the Swarm (networks, updates, logging, metrics)
- Chapter 4: Deeper in Swarm (stateful services, scripting, DABs)
## What are the different courses available?
## Quick start (or, "I want to try it!")
**Introduction to Docker** is derived from the first
"Docker Fundamentals" training materials. For more information,
see [jpetazzo/intro-to-docker](https://github.com/jpetazzo/intro-to-docker).
The version in this repository has been adapted to the Markdown
publishing pipeline. It is still maintained, but only receives
minor updates once in a while.
This workshop is designed to be *hands on*, i.e. to give you a step-by-step
guide where you will build your own Docker cluster, and use it to deploy
a sample application.
**Container Orchestration with Docker Swarm** (formerly
known as "Orchestration Workshop") is a workshop created by Jérôme
Petazzoni in June 2015. Since then, it has been continuously updated
and improved, and received contributions from many other authors.
It is actively maintained.
The easiest way to follow the workshop is to attend it when it is delivered
by an instructor. In that case, the instructor will generally give you
credentials (IP addresses, login, password) to connect to your own cluster
of virtual machines; and the [slides](http://jpetazzo.github.io/orchestration-workshop)
assume that you have your own cluster.
**Container Orchestration with Kubernetes** was created by
Jérôme Petazzoni in October 2017, with help and feedback from
a few other contributors. It is actively maintained.
If you want to follow the workshop on your own, and want to have your
own cluster, we have multiple solutions for you!
## Repository structure
### Using [play-with-docker](http://play-with-docker.com/)
- [bin](bin/)
- A few helper scripts that you can safely ignore for now.
- [dockercoins](dockercoins/)
- The demo app used throughout the orchestration workshops.
- [efk](efk/), [elk](elk/), [prom](prom/), [snap](snap/):
- Logging and metrics stacks used in the later parts of
the orchestration workshops.
- [prepare-local](prepare-local/), [prepare-machine](prepare-machine/):
- Contributed scripts to automate the creation of local environments.
These could use some help to test/check that they work.
- [prepare-vms](prepare-vms/):
- Scripts to automate the creation of AWS instances for students.
These are routinely used and actively maintained.
- [slides](slides/):
- All the slides! They are assembled from Markdown files with
a custom Python script, and then rendered using [gnab/remark](
https://github.com/gnab/remark). Check this directory for more details.
- [stacks](stacks/):
- A handful of Compose files (version 3) that make it easy to
deploy complex application stacks.
This method is the easiest way to get started (you don't need any extra account
or resources!) but it will require a bit of adaptation from the workshop slides.
To get started, go to [play-with-docker](http://play-with-docker.com/), and
click on _ADD NEW INSTANCE_ five times. You will get five "docker-in-docker"
containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to "SSH on node X", just go to
the tab corresponding to that node.
The nodes are not directly reachable from outside; so when the slides tell
you to "connect to the IP address of your node on port XYZ" you will have
to use a different method.
We suggest using "supergrok", a container offering an NGINX+ngrok combo to
expose your services. To use it, just start the `jpetazzo/supergrok` image
(on any of your nodes). The image will output further instructions:
```
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
```
The logs of the container will give you a tunnel address and explain
how to connect to exposed services. That's all you need to do!
We are also working on a native proxy, embedded into Play-With-Docker.
Stay tuned!
<!--
- You can use a proxy provided by Play-With-Docker. When the slides
instruct you to connect to nodeX on port ABC, instead, you will connect
to http://play-with-docker.com/XXX.XXX.XXX.XXX:ABC, where XXX.XXX.XXX.XXX
is the IP address of nodeX.
-->
Note that the instances provided by Play-With-Docker have a short lifespan
(a few hours only), so if you want to do the workshop over multiple sessions,
you will have to start over each time, or create your own cluster with
one of the methods described below.
## Course structure
### Using Docker Machine to create your own cluster
(This applies only for the orchestration workshops.)
This method requires a bit more work to get started, but you get a permanent
cluster, with fewer limitations.
The workshop introduces a demo app, "DockerCoins," built
around a micro-services architecture. First, we run it
on a single node, using Docker Compose. Then, we pretend
that we need to scale it, and we use an orchestrator
(SwarmKit or Kubernetes) to deploy and scale the app on
a cluster.
You will need Docker Machine (if you have Docker Mac, Docker Windows, or
the Docker Toolbox, you're all set already). You will also need:
We explain the concepts of the orchestrator. For SwarmKit,
we set up the cluster with `docker swarm init` and `docker swarm join`.
For Kubernetes, we use pre-configured clusters.
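For SwarmKit, the corresponding commands look roughly like this (a minimal sketch; the advertised interface and join parameters depend on your environment):

```
# On the first node: create the swarm.
docker swarm init --advertise-addr eth0
# On every other node: run the join command printed by "swarm init", e.g.:
docker swarm join --token <worker-token> <manager-ip>:2377
```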
- credentials for a cloud provider (e.g. API keys or tokens),
- or a local install of VirtualBox or VMware (or anything supported
by Docker Machine).
Then, we cover more advanced concepts: scaling, load balancing,
updates, global services or daemon sets.
There are a number of advanced optional chapters about
logging, metrics, secrets, network encryption, etc.
The content is very modular: it is broken down into a large
number of Markdown files that are put together according
to a YAML manifest. This makes it very easy to re-use content
between different workshops.
Full instructions are in the [prepare-machine](prepare-machine) subdirectory.
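As a teaser, a five-node local cluster with the VirtualBox driver could be created like this (a sketch only; the linked instructions remain authoritative):

```
# Create five local VMs, each running the Docker Engine.
for N in 1 2 3 4 5; do
  docker-machine create --driver virtualbox node$N
done
# Point the Docker CLI at the first one.
eval $(docker-machine env node1)
```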
### DockerCoins
### Using our scripts to mass-create a bunch of clusters
The sample app is in the `dockercoins` directory.
It's used during all chapters for explaining different concepts of orchestration.
Since we often deliver the workshop during conferences or similar events,
we have scripts to automate the creation of a bunch of clusters using
AWS EC2. If you want to create multiple clusters and have EC2 credits,
check the [prepare-vms](prepare-vms) directory for more information.
## How This Repo is Organized
- **dockercoins**
- Sample App: compose files and source code for the dockercoins sample apps
used throughout the workshop
- **docs**
- Slide Deck: presentation slide deck, works out of the box with GitHub Pages,
uses https://remarkjs.com
- **prepare-local**
- untested scripts for automating the creation of local VirtualBox VMs
(could use your help validating)
- **prepare-machine**
- instructions explaining how to use Docker Machine to create VMs
- **prepare-vms**
- scripts for automating the creation of AWS instances for students
## Slide Deck
- The slides are in the `docs` directory.
- To view them locally, open `docs/index.html` in your browser. It works
offline too.
- To view them online, open https://jpetazzo.github.io/orchestration-workshop/
in your browser.
- When you fork this repo, be sure GitHub Pages is enabled in repo Settings
for "master branch /docs folder" and you'll have your own website for them.
- They use https://remarkjs.com to allow simple markdown in an HTML file that
remark will transform into a presentation in the browser.
## Sample App: Dockercoins!
The sample app is in the `dockercoins` directory. It's used during all chapters
for explaining different concepts of orchestration.
To see it in action:
@@ -138,18 +141,13 @@ To see it in action:
- the web UI will be available on port 8000
*If you just want to run the workshop for yourself, you can stop reading
here. If you want to deliver the workshop for others (i.e. if you
want to become an instructor), keep reading!*
## Running the Workshop
If you want to deliver one of these workshops yourself,
this section is for you!
> *This section has been mostly contributed by
> [Bret Fisher](https://twitter.com/bretfisher), who was
> one of the first people brave enough to deliver
> this workshop without me. Thanks Bret! 🍻
>
> Jérôme.*
### General timeline of planning a workshop
@@ -157,7 +155,7 @@ this section is for you!
understand the different `dockercoins` repos and the steps we go through to
get to a full Swarm Mode cluster of many containers. You'll update the first
few slides and last slide at a minimum, with your info.
- ~~Your docs directory can use GitHub Pages.~~
- Your docs directory can use GitHub Pages.
- This workshop expects 5 servers per student. You can get away with as few
as 2 servers per student, but you'll need to change the slide deck to
accommodate. More servers = more fun.
@@ -183,14 +181,13 @@ this section is for you!
they need for class.
- Typically you create the servers the day before or the morning of the workshop, and
leave them up the rest of the day after the workshop. If creating hundreds of servers,
you'll likely want to run all these `workshopctl` commands from a dedicated
you'll likely want to run all these `trainer` commands from a dedicated
instance you have in the same region as the instances you want to create. Much faster
this way if you're on poor internet. Also, create 2 sets of servers for
yourself: use one during the workshop, and keep the 2nd as a backup.
- Remember you'll need to print the "cards" for students, so you'll need to
create instances while you have a way to print them.
### Things That Could Go Wrong
- Creating AWS instances ahead of time, and you hit its limits in region and
@@ -204,15 +201,13 @@ this section is for you!
- Forget to print "cards" and cut them up for handing out IPs.
- Forget to have fun and focus on your students!
### Creating the VMs
`prepare-vms/workshopctl` is the script that gets you most of what you need for
`prepare-vms/trainer` is the script that gets you most of what you need for
setting up instances. See
[prepare-vms/README.md](prepare-vms)
for all the info on tools and scripts.
### Content for Different Workshop Durations
With all the slides, this workshop is a full day long. If you need to deliver
@@ -221,7 +216,6 @@ can replace `---` with `???` which will hide slides. Or leave them there and
add something like `(EXTRA CREDIT)` to the title so students can still view the
content, but you also know to skip it during the presentation.
#### 3 Hour Version
- Limit time on debug tools, maybe skip a few. *"Chapter 1:
@@ -233,7 +227,6 @@ content but you also know to skip during presentation.
- Mention what DABs are, but make this part optional in case you run out
of time
#### 2 Hour Version
- Skip all the above, and:
@@ -247,17 +240,6 @@ content but you also know to skip during presentation.
- Last 15-30 minutes is for stateful services, DAB files, and questions.
### Pre-built images
There are pre-built images for the 4 components of the DockerCoins demo app: `dockercoins/hasher:v0.1`, `dockercoins/rng:v0.1`, `dockercoins/webui:v0.1`, and `dockercoins/worker:v0.1`. They correspond to the code in this repository.
There are also three variants, for demo purposes:
- `dockercoins/rng:v0.2` is broken (the server won't even start),
- `dockercoins/webui:v0.2` has bigger font on the Y axis and a green graph (instead of blue),
- `dockercoins/worker:v0.2` is 11x slower than `v0.1`.
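These variants are handy for demos; for instance, assuming the app runs as a Swarm stack named `dockercoins` (the stack name here is an assumption), a rolling update to the slow worker could look like:

```
# Roll out the 11x slower worker and watch the hashing speed drop...
docker service update --image dockercoins/worker:v0.2 dockercoins_worker
# ...then roll back once the point is made.
docker service update --rollback dockercoins_worker
```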
## Past events
Since its inception, this workshop has been delivered dozens of times,
@@ -289,34 +271,13 @@ If there is a bug and you can't fix it, but you can
reproduce it: submit an issue explaining how to reproduce.
If there is a bug and you can't even reproduce it:
sorry. It is probably a Heisenbug. We can't act on it
until it's reproducible, alas.
sorry. It is probably a Heisenbug. I can't act on it
until it's reproducible.
If you have attended this workshop and have feedback,
or if you want us to deliver that workshop at your
conference or for your company: contact me (jerome
at docker dot com).
# “Please teach us!”
If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.
You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.
Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:
- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com
(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)
**Thank you!**
Thank you!

191
autotest/autotest.py Executable file

@@ -0,0 +1,191 @@
#!/usr/bin/env python
import os
import re
import signal
import subprocess
import time


def print_snippet(snippet):
    print(78*'-')
    print(snippet)
    print(78*'-')


class Snippet(object):

    def __init__(self, slide, content):
        self.slide = slide
        self.content = content
        self.actions = []

    def __str__(self):
        return self.content


class Slide(object):

    current_slide = 0

    def __init__(self, content):
        Slide.current_slide += 1
        self.number = Slide.current_slide
        # Remove commented-out slides
        # (remark.js considers ??? to be the separator for speaker notes)
        content = re.split(r"\n\?\?\?\n", content)[0]
        self.content = content
        self.snippets = []
        exercises = re.findall(r"\.exercise\[(.*)\]", content, re.DOTALL)
        for exercise in exercises:
            if "```" in exercise and "<br/>`" in exercise:
                print("! Exercise on slide {} has both ``` and <br/>` delimiters, skipping."
                      .format(self.number))
                print_snippet(exercise)
            elif "```" in exercise:
                for snippet in exercise.split("```")[1::2]:
                    self.snippets.append(Snippet(self, snippet))
            elif "<br/>`" in exercise:
                for snippet in re.findall("<br/>`(.*)`", exercise):
                    self.snippets.append(Snippet(self, snippet))
            else:
                print("! Exercise on slide {} has neither ``` nor <br/>` delimiters, skipping."
                      .format(self.number))

    def __str__(self):
        text = self.content
        for snippet in self.snippets:
            text = text.replace(snippet.content, ansi("7")(snippet.content))
        return text


def ansi(code):
    return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)


slides = []
with open("index.html") as f:
    content = f.read()
for slide in re.split("\n---?\n", content):
    slides.append(Slide(slide))

is_editing_file = False
placeholders = {}
for slide in slides:
    for snippet in slide.snippets:
        content = snippet.content
        # Multi-line snippets should be ```highlightsyntax...
        # Single-line snippets will be interpreted as shell commands
        if '\n' in content:
            highlight, content = content.split('\n', 1)
        else:
            highlight = "bash"
        content = content.strip()
        # If the previous snippet was a file fragment, and the current
        # snippet is not YAML or EDIT, complain.
        if is_editing_file and highlight not in ["yaml", "edit"]:
            print("! On slide {}, previous snippet was YAML; not sure what to do with this one."
                  .format(slide.number))
            print_snippet(content)
        is_editing_file = False
        if highlight == "yaml":
            is_editing_file = True
        elif highlight == "placeholder":
            for line in content.split('\n'):
                variable, value = line.split(' ', 1)
                placeholders[variable] = value
        elif highlight == "bash":
            # Iterate over a copy, since we may delete entries while iterating.
            for variable, value in list(placeholders.items()):
                quoted = "`{}`".format(variable)
                if quoted in content:
                    content = content.replace(quoted, value)
                    del placeholders[variable]
            if '`' in content:
                print("! The following snippet on slide {} contains a backtick:"
                      .format(slide.number))
                print_snippet(content)
                continue
            print("_ "+content)
            snippet.actions.append((highlight, content))
        elif highlight == "edit":
            print(". "+content)
            snippet.actions.append((highlight, content))
        elif highlight == "meta":
            print("^ "+content)
            snippet.actions.append((highlight, content))
        else:
            print("! Unknown highlight {!r} on slide {}.".format(highlight, slide.number))

if placeholders:
    print("! Remaining placeholder values: {}".format(placeholders))

actions = sum([snippet.actions for snippet in sum([slide.snippets for slide in slides], [])], [])


# Strip ^{ ... ^} for now
def strip_curly_braces(actions, in_braces=False):
    if actions == []:
        return []
    elif actions[0] == ("meta", "^{"):
        return strip_curly_braces(actions[1:], True)
    elif actions[0] == ("meta", "^}"):
        return strip_curly_braces(actions[1:], False)
    elif in_braces:
        return strip_curly_braces(actions[1:], True)
    else:
        return [actions[0]] + strip_curly_braces(actions[1:], False)


actions = strip_curly_braces(actions)

background = []
cwd = os.path.expanduser("~")
env = {}
for current_action, next_action in zip(actions, actions[1:]+[("bash", "true")]):
    if current_action[0] == "meta":
        continue
    print(ansi(7)(">>> {}".format(current_action[1])))
    time.sleep(1)
    popen_options = dict(shell=True, cwd=cwd, stdin=subprocess.PIPE, preexec_fn=os.setpgrp)
    # The following hack captures the environment variables set by `docker-machine env`
    # FIXME: this doesn't handle `unset` for now
    if any([
        "eval $(docker-machine env" in current_action[1],
        "DOCKER_HOST" in current_action[1],
        "COMPOSE_FILE" in current_action[1],
    ]):
        popen_options["stdout"] = subprocess.PIPE
        # Actions are tuples, so rebind instead of mutating in place.
        current_action = (current_action[0], current_action[1] + "\nenv")
    proc = subprocess.Popen(current_action[1], **popen_options)
    proc.cmd = current_action[1]
    if next_action[0] == "meta":
        print(">>> {}".format(next_action[1]))
        time.sleep(3)
        if next_action[1] == "^C":
            os.killpg(proc.pid, signal.SIGINT)
            proc.wait()
        elif next_action[1] == "^Z":
            # Let the process run
            background.append(proc)
        elif next_action[1] == "^D":
            proc.communicate()
            proc.wait()
        else:
            print("! Unknown meta action {} after snippet:".format(next_action[1]))
            print_snippet(next_action[1])
        print(ansi(7)("<<< {}".format(current_action[1])))
    else:
        proc.wait()
        if "stdout" in popen_options:
            stdout, stderr = proc.communicate()
            for line in stdout.split('\n'):
                if line.startswith("DOCKER_"):
                    variable, value = line.split('=', 1)
                    env[variable] = value
                    print("=== {}={}".format(variable, value))
        print(ansi(7)("<<< {} >>> {}".format(proc.returncode, current_action[1])))
        if proc.returncode != 0:
            print("Got non-zero status code; aborting.")
            break
    if current_action[1].startswith("cd "):
        cwd = os.path.expanduser(current_action[1][3:])

for proc in background:
    print("Terminating background process:")
    print_snippet(proc.cmd)
    proc.terminate()
    proc.wait()

1
autotest/index.html Symbolic link

@@ -0,0 +1 @@
../www/htdocs/index.html
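Given that symlink, a plausible way to run the test harness (assuming the slides have been generated at that location):

```
cd autotest
./autotest.py
```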

42
bin/add-load-balancer-v1.py Executable file

@@ -0,0 +1,42 @@
#!/usr/bin/env python
import os
import sys
import yaml

# arg 1 = service name
# arg 2 = number of instances

service_name = sys.argv[1]
desired_instances = int(sys.argv[2])

compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file

config = yaml.load(open(input_file))

# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
port = str(ports[service_name])

command_line = port
depends_on = []
for n in range(1, 1+desired_instances):
    config["services"]["{}{}".format(service_name, n)] = config["services"][service_name]
    command_line += " {}{}:{}".format(service_name, n, port)
    depends_on.append("{}{}".format(service_name, n))

config["services"][service_name] = {
    "image": "jpetazzo/hamba",
    "command": command_line,
    "depends_on": depends_on,
}

if "networks" in config["services"]["{}1".format(service_name)]:
    config["services"][service_name]["networks"] = config["services"]["{}1".format(service_name)]["networks"]

yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)
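A usage sketch (the `ports.yml` values below are illustrative; paths assume you run from the directory holding the Compose file):

```
# Declare the ports that the load balancers need to know about.
cat > ports.yml <<EOF
rng: 80
hasher: 80
EOF
# Put 5 instances of rng behind a jpetazzo/hamba load balancer.
export COMPOSE_FILE=docker-compose.yml
../bin/add-load-balancer-v1.py rng 5
docker-compose up -d
```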

87
bin/add-load-balancer-v2.py Executable file

@@ -0,0 +1,87 @@
#!/usr/bin/env python
import os
import sys
import yaml


def error(msg):
    print("ERROR: {}".format(msg))
    exit(1)

# arg 1 = service name
service_name = sys.argv[1]

compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file

config = yaml.load(open(input_file))
version = config.get("version")
if version != "2":
    error("Unsupported $COMPOSE_FILE version: {!r}".format(version))

# The load balancers need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
port = str(ports[service_name])

if service_name not in config["services"]:
    error("service {} not found in $COMPOSE_FILE"
          .format(service_name))

lb_name = "{}-lb".format(service_name)
be_name = "{}-be".format(service_name)
wd_name = "{}-wd".format(service_name)

if lb_name in config["services"]:
    error("load balancer {} already exists in $COMPOSE_FILE"
          .format(lb_name))
if wd_name in config["services"]:
    error("dns watcher {} already exists in $COMPOSE_FILE"
          .format(wd_name))

service = config["services"][service_name]
if "networks" in service:
    error("service {} has custom networks"
          .format(service_name))

# Put the service on its own network.
service["networks"] = {service_name: {"aliases": [be_name]}}

# Put a label indicating which load balancer is responsible for this service.
if "labels" not in service:
    service["labels"] = {}
service["labels"]["loadbalancer"] = lb_name

# Add the load balancer.
config["services"][lb_name] = {
    "image": "jpetazzo/hamba",
    "command": "{} {} {}".format(port, be_name, port),
    "depends_on": [service_name],
    "networks": {
        "default": {
            "aliases": [service_name],
        },
        service_name: None,
    },
}

# Add the DNS watcher.
config["services"][wd_name] = {
    "image": "jpetazzo/watchdns",
    "command": "{} {} {}".format(port, be_name, port),
    "volumes_from": [lb_name],
    "networks": {
        service_name: None,
    },
}

if "networks" not in config:
    config["networks"] = {}
if service_name not in config["networks"]:
    config["networks"][service_name] = None

yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)
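Usage is similar to v1, minus the instance count, since scaling is then handled by Compose itself (a sketch, assuming a version "2" Compose file and a `ports.yml` like the one above):

```
export COMPOSE_FILE=docker-compose.yml
../bin/add-load-balancer-v2.py rng   # adds the rng-lb and rng-wd services
docker-compose up -d
docker-compose scale rng=5           # rng-wd (the DNS watcher) picks up the new backends
```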

63
bin/build-tag-push.py Executable file

@@ -0,0 +1,63 @@
#!/usr/bin/env python
from common import ComposeFile
import os
import subprocess
import time

registry = os.environ.get("DOCKER_REGISTRY")
if not registry:
    print("Please set the DOCKER_REGISTRY variable, e.g.:")
    print("export DOCKER_REGISTRY=jpetazzo # use the Docker Hub")
    print("export DOCKER_REGISTRY=localhost:5000 # use a local registry")
    exit(1)

# Get the name of the current directory.
project_name = os.path.basename(os.path.realpath("."))

# Version used to tag the generated Docker image, using the UNIX timestamp or the given version.
if "VERSION" not in os.environ:
    version = str(int(time.time()))
else:
    version = os.environ["VERSION"]

# Execute "docker-compose build" and abort if it fails.
subprocess.check_call(["docker-compose", "-f", "docker-compose.yml", "build"])

# Load the services from the input docker-compose.yml file.
# TODO: run parallel builds.
compose_file = ComposeFile("docker-compose.yml")

# Iterate over all services that have a "build" definition.
# Tag them, and initiate a push in the background.
push_operations = dict()
for service_name, service in compose_file.services.items():
    if "build" in service:
        compose_image = "{}_{}".format(project_name, service_name)
        registry_image = "{}/{}:{}".format(registry, compose_image, version)
        # Re-tag the image so that it can be uploaded to the registry.
        subprocess.check_call(["docker", "tag", compose_image, registry_image])
        # Spawn "docker push" to upload the image.
        push_operations[service_name] = subprocess.Popen(["docker", "push", registry_image])
        # Replace the "build" definition by an "image" definition,
        # using the name of the image on the registry.
        del service["build"]
        service["image"] = registry_image

# Wait for push operations to complete.
for service_name, popen_object in push_operations.items():
    print("Waiting for {} push to complete...".format(service_name))
    popen_object.wait()
    print("Done.")

# Write the new docker-compose.yml file.
if "COMPOSE_FILE" not in os.environ:
    os.environ["COMPOSE_FILE"] = "docker-compose.yml-{}".format(version)
    print("Writing to new Compose file:")
else:
    print("Writing to provided Compose file:")
print("COMPOSE_FILE={}".format(os.environ["COMPOSE_FILE"]))
compose_file.save()
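The `setup-all-the-things.sh` script later in this diff drives it like this; the last line of output is a `COMPOSE_FILE=...` assignment that can be re-exported:

```
cd dockercoins
export DOCKER_REGISTRY=localhost:5000
../bin/build-tag-push.py | tee /tmp/btp.log
# The last output line is "COMPOSE_FILE=<generated file>".
export $(tail -n 1 /tmp/btp.log)
```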

76
bin/common.py Executable file

@@ -0,0 +1,76 @@
import os
import subprocess
import sys
import time
import yaml


def COMPOSE_FILE():
    if "COMPOSE_FILE" not in os.environ:
        print("The $COMPOSE_FILE environment variable is not set. Aborting.")
        exit(1)
    return os.environ["COMPOSE_FILE"]


class ComposeFile(object):

    def __init__(self, filename=None):
        if filename is None:
            filename = COMPOSE_FILE()
        if not os.path.isfile(filename):
            print("File {!r} does not exist. Aborting.".format(filename))
            exit(1)
        self.data = yaml.load(open(filename))

    @property
    def services(self):
        if self.data.get("version") == "2":
            return self.data["services"]
        else:
            return self.data

    def save(self, filename=None):
        if filename is None:
            filename = COMPOSE_FILE()
        with open(filename, "w") as f:
            yaml.safe_dump(self.data, f, default_flow_style=False)


# Executes a bunch of commands in parallel, but no more than N at a time.
# This makes it possible to execute a large number of tasks concurrently,
# without turning into a fork bomb.
# `parallelism` is the number of tasks to execute simultaneously.
# `commands` is a list of tasks to execute.
# Each task is itself a list, where the first element is a descriptive
# string, and the following elements are the arguments to pass to Popen.
def parallel_run(commands, parallelism):
    running = []
    # While stuff is running, or we have stuff to run...
    while commands or running:
        # While there is stuff to run, and room in the pipe...
        while commands and len(running) < parallelism:
            command = commands.pop(0)
            print("START {}".format(command[0]))
            popen = subprocess.Popen(
                command[1:], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
            popen._desc = command[0]
            running.append(popen)
        must_sleep = True
        # Iterate over a copy, since we remove elements while iterating.
        for popen in running[:]:
            status = popen.poll()
            if status is not None:
                must_sleep = False
                running.remove(popen)
                if status == 0:
                    print("OK {}".format(popen._desc))
                else:
                    print("ERROR {} [Exit status: {}]"
                          .format(popen._desc, status))
                    output = "\n" + popen.communicate()[0].strip()
                    output = output.replace("\n", "\n| ")
                    print(output)
            else:
                print("WAIT ({} running, {} more to run)"
                      .format(len(running), len(commands)))
        if must_sleep:
            time.sleep(1)

69
bin/configure-ambassadors.py Executable file

@@ -0,0 +1,69 @@
#!/usr/bin/env python
from common import parallel_run
import os
import subprocess

project_name = os.path.basename(os.path.realpath("."))

# Get all services and backends in our compose application.
containers_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=com.docker.compose.project={}".format(project_name),
    "--format", '{{ .ID }} '
                '{{ .Label "com.docker.compose.service" }} '
                '{{ .Ports }}',
])

# Build list of backends.
frontend_ports = dict()
backends = dict()
for container in containers_data.split('\n'):
    if not container:
        continue
    # TODO: support services with multiple ports!
    container_id, service_name, port = container.split(' ')
    if not port:
        continue
    backend, frontend = port.split("->")
    backend_addr, backend_port = backend.split(':')
    frontend_port, frontend_proto = frontend.split('/')
    # TODO: deal with udp (mostly skip it?)
    assert frontend_proto == "tcp"
    # TODO: check inconsistencies between port mappings
    frontend_ports[service_name] = frontend_port
    if service_name not in backends:
        backends[service_name] = []
    backends[service_name].append((backend_addr, backend_port))

# Get all existing ambassadors for this application.
ambassadors_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=ambassador.project={}".format(project_name),
    "--format", '{{ .ID }} '
                '{{ .Label "ambassador.service" }} '
                '{{ .Label "ambassador.bindaddr" }}',
])

# Update ambassadors.
operations = []
for ambassador in ambassadors_data.split('\n'):
    if not ambassador:
        continue
    ambassador_id, service_name, bind_address = ambassador.split()
    print("Updating configuration for {}/{} -> {}:{} -> {}"
          .format(service_name, ambassador_id,
                  bind_address, frontend_ports[service_name],
                  backends[service_name]))
    command = [
        ambassador_id,
        "docker", "run", "--rm", "--volumes-from", ambassador_id,
        "jpetazzo/hamba", "reconfigure",
        "{}:{}".format(bind_address, frontend_ports[service_name]),
    ]
    for backend_addr, backend_port in backends[service_name]:
        command.extend([backend_addr, backend_port])
    operations.append(command)

# Execute all commands in parallel.
parallel_run(operations, 10)

71
bin/create-ambassadors.py Executable file

@@ -0,0 +1,71 @@
#!/usr/bin/env python
from common import ComposeFile, parallel_run
import os
import subprocess

config = ComposeFile()
project_name = os.path.basename(os.path.realpath("."))

# Get all services in our compose application.
containers_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=com.docker.compose.project={}".format(project_name),
    "--format", '{{ .ID }} {{ .Label "com.docker.compose.service" }}',
])

# Get all existing ambassadors for this application.
ambassadors_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=ambassador.project={}".format(project_name),
    "--format", '{{ .ID }} '
                '{{ .Label "ambassador.container" }} '
                '{{ .Label "ambassador.service" }}',
])

# Build a set of existing ambassadors.
ambassadors = dict()
for ambassador in ambassadors_data.split('\n'):
    if not ambassador:
        continue
    ambassador_id, container_id, linked_service = ambassador.split()
    ambassadors[container_id, linked_service] = ambassador_id

operations = []

# Start the missing ambassadors.
for container in containers_data.split('\n'):
    if not container:
        continue
    container_id, service_name = container.split()
    extra_hosts = config.services[service_name].get("extra_hosts", {})
    for linked_service, bind_address in extra_hosts.items():
        description = "Ambassador {}/{}/{}".format(
            service_name, container_id, linked_service)
        ambassador_id = ambassadors.pop((container_id, linked_service), None)
        if ambassador_id:
            print("{} already exists: {}".format(description, ambassador_id))
        else:
            print("{} not found, creating it.".format(description))
            operations.append([
                description,
                "docker", "run", "-d",
                "--net", "container:{}".format(container_id),
                "--label", "ambassador.project={}".format(project_name),
                "--label", "ambassador.container={}".format(container_id),
                "--label", "ambassador.service={}".format(linked_service),
                "--label", "ambassador.bindaddr={}".format(bind_address),
                "jpetazzo/hamba", "run",
            ])

# Destroy extraneous ambassadors.
for ambassador_id in ambassadors.values():
    print("{} is not useful anymore, destroying it.".format(ambassador_id))
    operations.append([
        "rm -f {}".format(ambassador_id),
        "docker", "rm", "-f", ambassador_id,
    ])

# Execute all commands in parallel.
parallel_run(operations, 10)

3
bin/delete-ambassadors.sh Executable file

@@ -0,0 +1,3 @@
#!/bin/sh
docker ps -q --filter label=ambassador.project=dockercoins |
xargs docker rm -f

16
bin/fixup-yaml.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/sh
# Some tools will choke on the YAML files generated by PyYAML;
# in particular on a section like this one:
#
# service:
#   ports:
#   - 8000:5000
#
# This script adds two spaces in front of the dash in those files.
# Warning: it is a hack, and probably won't work on some YAML files.
[ -f "$COMPOSE_FILE" ] || {
    echo "Cannot find COMPOSE_FILE"
    exit 1
}
sed -i 's/^\( *\)-/\1  -/' "$COMPOSE_FILE"

38
bin/link-to-ambassadors.py Executable file

@@ -0,0 +1,38 @@
#!/usr/bin/env python
from common import ComposeFile
import yaml

config = ComposeFile()

# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))


def generate_local_addr():
    last_byte = 2
    while last_byte < 255:
        yield "127.127.0.{}".format(last_byte)
        last_byte += 1


for service_name, service in config.services.items():
    if "links" in service:
        for link, local_addr in zip(service["links"], generate_local_addr()):
            if link not in ports:
                print("Skipping link {} in service {} "
                      "(no port mapping defined). "
                      "Your code will probably break."
                      .format(link, service_name))
                continue
            if "extra_hosts" not in service:
                service["extra_hosts"] = {}
            service["extra_hosts"][link] = local_addr
        del service["links"]
    if "ports" in service:
        del service["ports"]
    if "volumes" in service:
        del service["volumes"]
    if service_name in ports:
        service["ports"] = [ports[service_name]]

config.save()
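Putting the ambassador scripts together, a plausible end-to-end flow (the ordering is inferred from what each script reads and writes):

```
cd dockercoins
export COMPOSE_FILE=docker-compose.yml
../bin/link-to-ambassadors.py      # rewrite links: into extra_hosts entries
docker-compose up -d               # start the application containers
../bin/create-ambassadors.py       # one hamba ambassador per (container, link)
../bin/configure-ambassadors.py    # point each ambassador at current backends
```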


@@ -0,0 +1,46 @@
#!/usr/bin/env python
# FIXME: hardcoded
PORT = "80"

import os
import subprocess

project_name = os.path.basename(os.path.realpath("."))

# Get all existing services for this application.
containers_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=com.docker.compose.project={}".format(project_name),
    "--format", '{{ .Label "com.docker.compose.service" }} '
                '{{ .Label "com.docker.compose.container-number" }} '
                '{{ .Label "loadbalancer" }}',
])

load_balancers = dict()
for line in containers_data.split('\n'):
    if not line:
        continue
    service_name, container_number, load_balancer = line.split(' ')
    if load_balancer:
        if load_balancer not in load_balancers:
            load_balancers[load_balancer] = []
        load_balancers[load_balancer].append((service_name, int(container_number)))

for load_balancer, backends in load_balancers.items():
    # FIXME: iterate on all load balancers
    container_name = "{}_{}_1".format(project_name, load_balancer)
    command = [
        "docker", "run", "--rm",
        "--volumes-from", container_name,
        "--net", "container:{}".format(container_name),
        "jpetazzo/hamba", "reconfigure", PORT,
    ]
    command.extend(
        "{}_{}_{}:{}".format(project_name, backend_name, backend_number, PORT)
        for (backend_name, backend_number) in sorted(backends)
    )
    print("Updating configuration for {} with {} backend(s)..."
          .format(container_name, len(backends)))
    subprocess.check_output(command)

201
bin/setup-all-the-things.sh Executable file

@@ -0,0 +1,201 @@
#!/bin/sh
unset DOCKER_REGISTRY
unset DOCKER_HOST
unset COMPOSE_FILE

SWARM_IMAGE=${SWARM_IMAGE:-swarm}

prepare_1_check_ssh_keys () {
    for N in $(seq 1 5); do
        ssh node$N true
    done
}

prepare_2_compile_swarm () {
    cd ~
    git clone git://github.com/docker/swarm
    cd swarm
    [ -z "$1" ] && {
        echo "Specify which revision to build."
        return
    }
    git checkout "$1" || return
    mkdir -p image
    docker build -t docker/swarm:$1 .
    docker run -i --entrypoint sh docker/swarm:$1 \
        -c 'cat $(which swarm)' > image/swarm
    chmod +x image/swarm
    cat >image/Dockerfile <<EOF
FROM scratch
COPY ./swarm /swarm
ENTRYPOINT ["/swarm", "-debug", "-experimental"]
EOF
    docker build -t jpetazzo/swarm:$1 image
    docker login
    docker push jpetazzo/swarm:$1
    docker logout
    SWARM_IMAGE=jpetazzo/swarm:$1
}

clean_1_containers () {
    for N in $(seq 1 5); do
        ssh node$N "docker ps -aq | xargs -r -n1 -P10 docker rm -f"
    done
}

clean_2_volumes () {
    for N in $(seq 1 5); do
        ssh node$N "docker volume ls -q | xargs -r docker volume rm"
    done
}

clean_3_images () {
    for N in $(seq 1 5); do
        ssh node$N "docker images | awk '/dockercoins|jpetazzo/ {print \$1\":\"\$2}' | xargs -r docker rmi -f"
    done
}

clean_4_machines () {
    rm -rf ~/.docker/machine/
}

clean_all () {
    clean_1_containers
    clean_2_volumes
    clean_3_images
    clean_4_machines
}

dm_swarm () {
    eval $(docker-machine env node1 --swarm)
}

dm_node1 () {
    eval $(docker-machine env node1)
}

setup_1_swarm () {
    grep node[12345] /etc/hosts | grep -v ^127 |
    while read IPADDR NODENAME; do
        docker-machine create --driver generic \
            --engine-opt cluster-store=consul://localhost:8500 \
            --engine-opt cluster-advertise=eth0:2376 \
            --swarm --swarm-master --swarm-image $SWARM_IMAGE \
            --swarm-discovery consul://localhost:8500 \
            --swarm-opt replication --swarm-opt advertise=$IPADDR:3376 \
            --generic-ssh-user docker --generic-ip-address $IPADDR $NODENAME
    done
}

setup_2_consul () {
    IPADDR=$(ssh node1 ip a ls dev eth0 |
             sed -n 's,.*inet \(.*\)/.*,\1,p')
    for N in 1 2 3 4 5; do
        ssh node$N -- docker run -d --restart=always --name consul_node$N \
            -e CONSUL_BIND_INTERFACE=eth0 --net host consul \
            agent -server -retry-join $IPADDR -bootstrap-expect 5 \
            -ui -client 0.0.0.0
    done
}

setup_3_wait () {
    # Wait for a Swarm master
    dm_swarm
    while ! docker ps; do sleep 1; done
    # Wait for all nodes to be there
    while ! [ "$(docker info | grep "^Nodes:")" = "Nodes: 5" ]; do sleep 1; done
}

setup_4_registry () {
    cd ~/orchestration-workshop/registry
    dm_swarm
    docker-compose up -d
    for N in $(seq 2 5); do
        docker-compose scale frontend=$N
    done
}

setup_5_btp_dockercoins () {
    cd ~/orchestration-workshop/dockercoins
    dm_node1
    export DOCKER_REGISTRY=localhost:5000
    cp docker-compose.yml-v2 docker-compose.yml
    ~/orchestration-workshop/bin/build-tag-push.py | tee /tmp/btp.log
    export $(tail -n 1 /tmp/btp.log)
}

setup_6_add_lbs () {
    cd ~/orchestration-workshop/dockercoins
    ~/orchestration-workshop/bin/add-load-balancer-v2.py rng
    ~/orchestration-workshop/bin/add-load-balancer-v2.py hasher
}

setup_7_consulfs () {
    dm_swarm
    docker pull jpetazzo/consulfs
    for N in $(seq 1 5); do
        ssh node$N "docker run --rm -v /usr/local/bin:/target jpetazzo/consulfs"
        ssh node$N mkdir -p ~/consul
        ssh -f node$N "mountpoint ~/consul || consulfs localhost:8500 ~/consul"
    done
}

setup_8_syncmachine () {
    while ! mountpoint ~/consul; do
        sleep 1
    done
    cp -r ~/.docker/machine ~/consul/
    for N in $(seq 2 5); do
        ssh node$N mkdir -p ~/.docker
        ssh node$N "[ -L ~/.docker/machine ] || ln -s ~/consul/machine ~/.docker"
    done
}

setup_9_elk () {
    dm_swarm
    cd ~/orchestration-workshop/elk
    docker-compose up -d
    for N in $(seq 1 5); do
        docker-compose scale logstash=$N
    done
}

setup_all () {
    setup_1_swarm
    setup_2_consul
    setup_3_wait
    setup_4_registry
    setup_5_btp_dockercoins
    setup_6_add_lbs
    setup_7_consulfs
    setup_8_syncmachine
    dm_swarm
}

force_remove_network () {
    dm_swarm
    NET="$1"
    for CNAME in $(docker network inspect $NET | grep Name | grep -v \"$NET\" | cut -d\" -f4); do
        echo $CNAME
        docker network disconnect -f $NET $CNAME
    done
    docker network rm $NET
}

demo_1_compose_up () {
    dm_swarm
    cd ~/orchestration-workshop/dockercoins
    docker-compose up -d
}

grep -qs -- MAGICMARKER "$0" && { # Don't display this line in the function list
    echo "You should source this file, then invoke the following functions:"
    grep -- '^[a-z].*{$' "$0" | cut -d" " -f1
}

show_swarm_primary () {
    dm_swarm
    docker info 2>/dev/null | grep -e ^Role -e ^Primary
}
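As the MAGICMARKER guard above suggests, this file is meant to be sourced, after which the individual functions (or `setup_all`) can be invoked:

```
. bin/setup-all-the-things.sh   # prints the list of available functions
setup_all
```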


@@ -0,0 +1,12 @@
version: "2"

services:
  cadvisor:
    image: google/cadvisor
    ports:
      - "8080:8080"
    volumes:
      - "/:/rootfs:ro"
      - "/var/run:/var/run:rw"
      - "/sys:/sys:ro"
      - "/var/lib/docker/:/var/lib/docker:ro"

19
ceph/README.md Normal file

@@ -0,0 +1,19 @@
# CEPH on Docker
Note: this doesn't quite work yet.
The OSD containers need to be started twice (the first time, they fail
to initialize; the second time is a charm).
Also, it looks like you need at least two OSD containers (or the OSD
container should have two disks/directories, whatever).
RadosGw is listening on port 8080.
The `admin` container will create a `docker` user using `radosgw-admin`.
If you run it multiple times, that's OK: further invocations are idempotent.
Last but not least: it looks like the AWS CLI uses a new signature format
that doesn't work with RadosGW. After almost two hours trying to figure
out what was wrong, I tried the S3 credentials directly with boto and
it worked immediately (I was able to create a bucket).

53
ceph/docker-compose.yml Normal file

@@ -0,0 +1,53 @@
version: "2"

services:
  mon:
    image: ceph/daemon
    command: mon
    environment:
      CEPH_PUBLIC_NETWORK: 10.33.0.0/16
      MON_IP: 10.33.0.2
  osd:
    image: ceph/daemon
    command: osd_directory
    depends_on:
      - mon
    volumes_from:
      - mon
    volumes:
      - /var/lib/ceph/osd
  mds:
    image: ceph/daemon
    command: mds
    environment:
      CEPHFS_CREATE: 1
    depends_on:
      - mon
    volumes_from:
      - mon
  rgw:
    image: ceph/daemon
    command: rgw
    depends_on:
      - mon
    volumes_from:
      - mon
    environment:
      CEPH_OPTS: --verbose
  admin:
    image: ceph/daemon
    entrypoint: radosgw-admin
    depends_on:
      - mon
    volumes_from:
      - mon
    command: user create --uid=docker --display-name=docker

networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 10.33.0.0/16
          gateway: 10.33.0.1

12
consul/docker-compose.yml Normal file

@@ -0,0 +1,12 @@
version: "2"

services:
  bootstrap:
    image: jpetazzo/consul
    command: agent -server -bootstrap
    container_name: bootstrap
  server:
    image: jpetazzo/consul
    command: agent -server -join bootstrap -join server
  client:
    image: jpetazzo/consul
    command: members -rpc-addr server:8400


@@ -1,10 +1,7 @@
FROM ruby:alpine
RUN apk add --update build-base curl
RUN apk add --update build-base
RUN gem install sinatra
RUN gem install thin
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
EXPOSE 80
HEALTHCHECK \
--interval=1s --timeout=2s --retries=3 --start-period=1s \
CMD curl http://localhost/ || exit 1


@@ -28,5 +28,5 @@ def rng(how_many_bytes):
if __name__ == "__main__":
app.run(host="0.0.0.0", port=80, threaded=False)
app.run(host="0.0.0.0", port=80)


@@ -50,7 +50,7 @@ function refresh () {
points.push({ x: s2.now, y: speed });
}
$("#speed").text("~" + speed.toFixed(1) + " hashes/second");
var msg = ("I'm attending a @docker orchestration workshop, "
var msg = ("I'm attending the @docker workshop at #LinuxCon, "
+ "and my #DockerCoins mining rig is crunching "
+ speed.toFixed(1) + " hashes/second! W00T!");
$("#tweet").attr(

9
docs/chat/index.html Normal file

@@ -0,0 +1,9 @@
<html>
<!-- Generated with index.html.sh -->
<head>
<meta http-equiv="refresh" content="0; URL='https://dockercommunity.slack.com/messages/docker-mentor'" />
</head>
<body>
<a href="https://dockercommunity.slack.com/messages/docker-mentor">https://dockercommunity.slack.com/messages/docker-mentor</a>
</body>
</html>

16
docs/chat/index.html.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/sh
#LINK=https://gitter.im/jpetazzo/workshop-20170322-sanjose
LINK=https://dockercommunity.slack.com/messages/docker-mentor
#LINK=https://usenix-lisa.slack.com/messages/docker
sed "s,@@LINK@@,$LINK,g" >index.html <<EOF
<html>
<!-- Generated with index.html.sh -->
<head>
<meta http-equiv="refresh" content="0; URL='$LINK'" />
</head>
<body>
<a href="$LINK">$LINK</a>
</body>
</html>
EOF

BIN
docs/extra-details.png Normal file (binary image added, 11 KiB; not shown)

19
docs/extract-section-titles.py Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env python
"""
Extract and print level 1 and 2 titles from workshop slides.
"""
separators = [
    "---",
    "--",
]

slide_count = 1
for line in open("index.html"):
    line = line.strip()
    if line in separators:
        slide_count += 1
    if line.startswith('# '):
        print(slide_count, '#', line)
    elif line.startswith('## '):
        print(slide_count, line)
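It reads `index.html` from the current directory, so run it from wherever the slides were generated:

```
cd docs
./extract-section-titles.py
```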

7557
docs/index.html Normal file

File diff suppressed because it is too large.


@@ -1,62 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: consul
spec:
  ports:
  - port: 8500
    name: http
  selector:
    app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 3
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - consul
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: "consul:1.2.2"
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - "agent"
        - "-bootstrap-expect=3"
        - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
        - "-client=0.0.0.0"
        - "-data-dir=/consul/data"
        - "-server"
        - "-ui"
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - consul leave


@@ -1,28 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: build-image
spec:
  restartPolicy: OnFailure
  containers:
  - name: docker-build
    image: docker
    env:
    - name: REGISTRY_PORT
      value: #"30000"
    command: ["sh", "-c"]
    args:
    - |
      apk add --no-cache git &&
      mkdir /workspace &&
      git clone https://github.com/jpetazzo/container.training /workspace &&
      docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
      docker push localhost:$REGISTRY_PORT/worker
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock


@@ -1,222 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: default
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        # X-Pack Authentication
        # =====================
        - name: FLUENT_ELASTICSEARCH_USER
          value: "elastic"
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: "changeme"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: elasticsearch
  name: elasticsearch
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/elasticsearch
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: elasticsearch
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: elasticsearch
    spec:
      containers:
      - image: elasticsearch:5.6.8
        imagePullPolicy: IfNotPresent
        name: elasticsearch
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: elasticsearch
  name: elasticsearch
  selfLink: /api/v1/namespaces/default/services/elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    run: elasticsearch
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: kibana
  name: kibana
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/kibana
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: kibana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200/
        image: kibana:5.6.8
        imagePullPolicy: Always
        name: kibana
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: kibana
  name: kibana
  selfLink: /api/v1/namespaces/default/services/kibana
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    run: kibana
  sessionAffinity: None
  type: NodePort

@@ -1,14 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system


@@ -1,18 +0,0 @@
global
daemon
maxconn 256
defaults
mode tcp
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend the-frontend
bind *:80
default_backend the-backend
backend the-backend
server google.com-80 google.com:80 maxconn 32 check
server bing.com-80 bing.com:80 maxconn 32 check


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: haproxy
spec:
volumes:
- name: config
configMap:
name: haproxy
containers:
- name: haproxy
image: haproxy
volumeMounts:
- name: config
mountPath: /usr/local/etc/haproxy/


@@ -1,14 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cheddar
spec:
rules:
- host: cheddar.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: cheddar
servicePort: 80


@@ -1,29 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: kaniko-build
spec:
initContainers:
- name: git-clone
image: alpine
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
git clone git://github.com/jpetazzo/container.training /workspace
volumeMounts:
- name: workspace
mountPath: /workspace
containers:
- name: build-image
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=/workspace/dockercoins/rng"
- "--skip-tls-verify"
- "--destination=registry:5000/rng-kaniko:latest"
volumeMounts:
- name: workspace
mountPath: /workspace
volumes:
- name: workspace


@@ -1,167 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard


@@ -1,14 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-testcurl-for-testweb
spec:
podSelector:
matchLabels:
run: testweb
ingress:
- from:
- podSelector:
matchLabels:
run: testcurl


@@ -1,10 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all-for-testweb
spec:
podSelector:
matchLabels:
run: testweb
ingress: []


@@ -1,22 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-webui
spec:
podSelector:
matchLabels:
run: webui
ingress:
- from: []


@@ -1,21 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-volume
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/
restartPolicy: OnFailure


@@ -1,580 +0,0 @@
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop0&c=px-workshop&stork=true&lh=true
apiVersion: v1
kind: ConfigMap
metadata:
name: stork-config
namespace: kube-system
data:
policy.cfg: |-
{
"kind": "Policy",
"apiVersion": "v1",
"extenders": [
{
"urlPrefix": "http://stork-service.kube-system.svc:8099",
"apiVersion": "v1beta1",
"filterVerb": "filter",
"prioritizeVerb": "prioritize",
"weight": 5,
"enableHttps": false,
"nodeCacheCapable": false
}
]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "list", "watch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
resources: ["deployments", "deployments/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
- apiGroups: ["*"]
resources: ["statefulsets", "statefulsets/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-role-binding
subjects:
- kind: ServiceAccount
name: stork-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-role
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: stork-service
namespace: kube-system
spec:
selector:
name: stork
ports:
- protocol: TCP
port: 8099
targetPort: 8099
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
tier: control-plane
name: stork
namespace: kube-system
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
replicas: 3
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: stork
tier: control-plane
spec:
containers:
- command:
- /stork
- --driver=pxd
- --verbose
- --leader-elect=true
- --health-monitor-interval=120
imagePullPolicy: Always
image: openstorage/stork:1.1.3
resources:
requests:
cpu: '0.1'
name: stork
hostPID: false
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork
topologyKey: "kubernetes.io/hostname"
serviceAccountName: stork-account
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: stork-snapshot-sc
provisioner: stork-snapshot
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-scheduler-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-scheduler-role
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
- apiGroups: [""]
resourceNames: ["kube-scheduler"]
resources: ["endpoints"]
verbs: ["delete", "get", "patch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list", "watch"]
- apiGroups: [""]
resources: ["bindings", "pods/binding"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["patch", "update"]
- apiGroups: [""]
resources: ["replicationcontrollers", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["app", "extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-scheduler-role-binding
subjects:
- kind: ServiceAccount
name: stork-scheduler-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-scheduler-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
name: stork-scheduler
namespace: kube-system
spec:
replicas: 3
template:
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
spec:
containers:
- command:
- /usr/local/bin/kube-scheduler
- --address=0.0.0.0
- --leader-elect=true
- --scheduler-name=stork
- --policy-configmap=stork-config
- --policy-configmap-namespace=kube-system
- --lock-object-name=stork-scheduler
image: gcr.io/google_containers/kube-scheduler-amd64:v1.11.2
livenessProbe:
httpGet:
path: /healthz
port: 10251
initialDelaySeconds: 15
name: stork-scheduler
readinessProbe:
httpGet:
path: /healthz
port: 10251
resources:
requests:
cpu: '0.1'
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork-scheduler
topologyKey: "kubernetes.io/hostname"
hostPID: false
serviceAccountName: stork-scheduler-account
---
kind: Service
apiVersion: v1
metadata:
name: portworx-service
namespace: kube-system
labels:
name: portworx
spec:
selector:
name: portworx
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-get-put-list-role
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["privileged"]
verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-role-binding
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: node-get-put-list-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
name: portworx
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role
namespace: portworx
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role-binding
namespace: portworx
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: Role
name: px-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: portworx
namespace: kube-system
annotations:
portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop0&c=px-workshop&stork=true&lh=true"
spec:
minReadySeconds: 0
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: portworx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: px/enabled
operator: NotIn
values:
- "false"
- key: node-role.kubernetes.io/master
operator: DoesNotExist
hostNetwork: true
hostPID: false
containers:
- name: portworx
image: portworx/oci-monitor:1.4.2.2
imagePullPolicy: Always
args:
["-c", "px-workshop", "-s", "/dev/loop0", "-b",
"-x", "kubernetes"]
env:
- name: "PX_TEMPLATE_VERSION"
value: "v4"
livenessProbe:
periodSeconds: 30
initialDelaySeconds: 840 # allow image pull in slow networks
httpGet:
host: 127.0.0.1
path: /status
port: 9001
readinessProbe:
periodSeconds: 10
httpGet:
host: 127.0.0.1
path: /health
port: 9015
terminationMessagePath: "/tmp/px-termination-log"
securityContext:
privileged: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
- name: etcpwx
mountPath: /etc/pwx
- name: optpwx
mountPath: /opt/pwx
- name: proc1nsmount
mountPath: /host_proc/1/ns
- name: sysdmount
mountPath: /etc/systemd/system
- name: diagsdump
mountPath: /var/cores
- name: journalmount1
mountPath: /var/run/log
readOnly: true
- name: journalmount2
mountPath: /var/log
readOnly: true
- name: dbusmount
mountPath: /var/run/dbus
restartPolicy: Always
serviceAccountName: px-account
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: etcpwx
hostPath:
path: /etc/pwx
- name: optpwx
hostPath:
path: /opt/pwx
- name: proc1nsmount
hostPath:
path: /proc/1/ns
- name: sysdmount
hostPath:
path: /etc/systemd/system
- name: diagsdump
hostPath:
path: /var/cores
- name: journalmount1
hostPath:
path: /var/run/log
- name: journalmount2
hostPath:
path: /var/log
- name: dbusmount
hostPath:
path: /var/run/dbus
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-lh-account
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role-binding
namespace: kube-system
subjects:
- kind: ServiceAccount
name: px-lh-account
namespace: kube-system
roleRef:
kind: Role
name: px-lh-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
type: NodePort
ports:
- name: http
port: 80
nodePort: 32678
- name: https
port: 443
nodePort: 32679
selector:
tier: px-web-console
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
tier: px-web-console
replicas: 1
template:
metadata:
labels:
tier: px-web-console
spec:
initContainers:
- name: config-init
image: portworx/lh-config-sync:0.2
imagePullPolicy: Always
args:
- "init"
volumeMounts:
- name: config
mountPath: /config/lh
containers:
- name: px-lighthouse
image: portworx/px-lighthouse:1.5.0
imagePullPolicy: Always
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: config
mountPath: /config/lh
- name: config-sync
image: portworx/lh-config-sync:0.2
imagePullPolicy: Always
args:
- "sync"
volumeMounts:
- name: config
mountPath: /config/lh
serviceAccountName: px-lh-account
volumes:
- name: config
emptyDir: {}


@@ -1,30 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
serviceName: postgres
template:
metadata:
labels:
app: postgres
spec:
schedulerName: stork
containers:
- name: postgres
image: postgres:10.5
volumeMounts:
- mountPath: /var/lib/postgresql
name: postgres
volumeClaimTemplates:
- metadata:
name: postgres
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: registry
spec:
containers:
- name: registry
image: registry
env:
- name: REGISTRY_HTTP_ADDR
valueFrom:
configMapKeyRef:
name: registry
key: http.addr


@@ -1,67 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: null
generation: 1
labels:
run: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
replicas: 1
selector:
matchLabels:
run: socat
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: socat
spec:
containers:
- args:
- sh
- -c
- apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard:443,verify=0
image: alpine
imagePullPolicy: Always
name: socat
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: socat
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}


@@ -1,11 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-replicated
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "2"
priority_io: "high"


@@ -1,100 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system


@@ -1,5 +0,0 @@
[build]
base = "slides"
publish = "slides"
command = "./build.sh once"


@@ -71,7 +71,7 @@ to your node, then run the following command:
```bash
sudo curl -L \
https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` \
https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```


@@ -2,6 +2,7 @@ FROM debian:jessie
MAINTAINER AJ Bowen <aj@soulshake.net>
RUN apt-get update && apt-get install -y \
wkhtmltopdf \
bsdmainutils \
ca-certificates \
curl \
@@ -11,20 +12,19 @@ RUN apt-get update && apt-get install -y \
man \
pssh \
python \
python-docutils \
python-pip \
python-docutils \
ssh \
wkhtmltopdf \
xvfb \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip install \
awscli \
jinja2 \
pdfkit \
PyYAML \
termcolor
RUN mv $(which wkhtmltopdf) $(which wkhtmltopdf).real
COPY lib/wkhtmltopdf /usr/local/bin/wkhtmltopdf
WORKDIR $$HOME
RUN echo "alias ll='ls -lahF'" >> /root/.bashrc
ENTRYPOINT ["/root/prepare-vms/scripts/trainer-cli"]


@@ -1,40 +1,33 @@
# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
# Trainer tools to create and prepare VMs for Docker workshops on AWS
## Prerequisites
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
- [jinja2](https://pypi.python.org/pypi/Jinja2) (on a Mac: `brew install jinja2`)
## General Workflow
- fork/clone repo
- set required environment variables
- set required environment variables for AWS
- create your own settings file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
- run `./workshopctl cards` command to generate a PDF for printing handouts with each user's host IPs and login info
- run `./trainer` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
- run `./trainer cards` command to generate a PDF for printing handouts with each user's host IPs and login info
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).
The Docker Compose file here is used to build an image with all the dependencies to run the `./trainer` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](trainer#L5).
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ docker-compose build
## Preparing to Run `./workshopctl`
## Preparing to Run `./trainer`
### Required AWS Permissions/Info
- The initial assumption is that you're using a root account. If you'd like to use an IAM user, it will need `AmazonEC2FullAccess` and `IAMReadOnlyAccess`.
- Using a non-default VPC or Security Group isn't supported out of the box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg`, which opens up all ports.
- Using a non-default VPC or Security Group isn't supported out of the box yet, but until then you can [customize the `trainer-cli` script](scripts/trainer-cli#L396-L401).
- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./trainer opensg`, which opens up all ports.
### Required Environment Variables
@@ -42,132 +35,61 @@ The Docker Compose file here is used to build a image with all the dependencies
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
If you're not using AWS, set these to placeholder values:
```
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS, you can ignore this.
### Update/copy `settings/example.yaml`
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `trainer deploy`, `trainer cards`, etc.
./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml
./trainer cards 2016-09-28-00-33-bret settings/orchestration.yaml
## `./workshopctl` Usage
## `./trainer` Usage
```
workshopctl - the orchestration workshop swiss army knife
Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a batch of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
list List available batches in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a batch of VMs
start Start a batch of VMs
status List instance status for a given batch
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
wrap Run this program in a container
./trainer <command> [n-instances|tag] [settings/file.yaml]
Core commands:
start n Start n instances
list [TAG] If a tag is provided, list its VMs. Otherwise, list tags.
deploy TAG Deploy all instances with a given tag
pull-images TAG Pre-pull docker images. Run only after deploying.
stop TAG Stop and delete instances tagged TAG
Extras:
ips TAG List all IPs of instances with a given tag (updates ips.txt)
ids TAG/TOKEN List all instance IDs with a given tag
shell Get a shell in the trainer container
status TAG Print information about this tag and its VMs
tags List all tags (per-region)
retag TAG/TOKEN TAG Retag instances with a new tag
Beta:
ami Look up Amazon Machine Images
cards FILE Generate cards
opensg Modify AWS security groups
```
### Summary of What `./workshopctl` Does For You
### Summary of What `./trainer` Does For You
- Used to manage bulk AWS instances for you without needing to use the AWS CLI or GUI.
- Can manage multiple "tags" or groups of instances, which are tracked in `prepare-vms/tags/`
- Can also create PDF/HTML for printing student info cards with instance IPs and logins.
- The `./workshopctl` script can be executed directly.
- The `./trainer` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms). A minimal sketch of this fallback logic follows this list.
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. This can be configured with the `docker_user_password` property in the settings file.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard coded.
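Here is that fallback as a minimal, hypothetical sketch (the real check lives at the top of the wrapper script; the dependency list and `run_locally` are placeholders standing in for its actual dispatch logic):

```bash
#!/bin/sh
# Sketch only: check for the tools the wrapper needs; if any is missing,
# re-run the requested command inside the image built by `docker-compose build`.
for dep in aws pssh wkhtmltopdf; do
    if ! command -v "$dep" >/dev/null 2>&1; then
        exec docker-compose run --rm workshopctl "$@"
    fi
done
run_locally "$@"   # placeholder: dispatch to the requested subcommand locally
```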
### Example Steps to Launch a Batch of AWS Instances for a Workshop
### Example Steps to Launch a Batch of Instances for a Workshop
- Run `./workshopctl start N` to create `N` EC2 instances
- Export the environment variables needed by the AWS CLI (see **Required Environment Variables** above)
- Run `./trainer start N` to create `N` EC2 instances
- Your local SSH key will be synced to instances under the `ubuntu` user
- AWS instances will be created and tagged based on date, and IPs stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- Run `./trainer deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
- If it errors or times out, you should be able to rerun it
- Requires a good connection to run all the parallel SSH connections, up to 100 in parallel (ProTip: create a dedicated management instance in the same AWS region that you run all these utilities from)
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
- Run `./trainer pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./trainer cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances (a condensed end-to-end sketch follows this list).
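Putting those steps together, a typical session might look like the sketch below (hypothetical; the tag and settings file are examples, and `start` prints the real, date-based tag):

```bash
# Hypothetical end-to-end session; substitute your own values.
export AWS_ACCESS_KEY_ID=...          # your credentials
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=eu-west-1

./workshopctl start 50                               # create 50 EC2 instances
TAG=2016-09-28-00-33-bret                            # use the tag printed by `start`
./workshopctl deploy $TAG settings/somefile.yaml     # install Docker, set up users
./workshopctl pull_images $TAG                       # pre-pull workshop images
./workshopctl cards $TAG settings/somefile.yaml      # generate printable cards
# ...deliver the workshop...
./workshopctl stop $TAG                              # terminate the instances
```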
### Example Steps to Launch Azure Instances
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
- Optional:
- Choose a name for the workshop (default is "workshop")
- Choose the number of instances (default is 3)
- Customize the desired instance size (default is Standard_D1_v2)
- Launch instances with your chosen resource group name and your preferred region; the examples are "workshop" and "eastus":
```
az group create --name workshop --location eastus
az group deployment create --resource-group workshop --template-file azuredeploy.json --parameters @azuredeploy.parameters.json
```
The `az group deployment create` command can take several minutes and will only say `- Running ..` until it completes, unless you increase the verbosity with `--verbose` or `--debug`.
To display the IPs of the instances you've launched:
```
az vm list-ip-addresses --resource-group workshop --output table
```
If you want to put the IPs into `prepare-vms/tags/<tag>/ips.txt` for a tag of "myworkshop":
1) If you haven't yet installed `jq` and/or created your event's tags directory in `prepare-vms`:
```
brew install jq
mkdir -p tags/myworkshop
```
2) And then generate the IP list:
```
az vm list-ip-addresses --resource-group workshop --output json | jq -r '.[].virtualMachine.network.publicIpAddresses[].ipAddress' > tags/myworkshop/ips.txt
```
After the workshop is over, remove the instances:
```
az group delete --resource-group workshop
```
### Example Steps to Configure Instances from a non-AWS Source
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to ssh into them.
- Set placeholder values for [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already generated a file with the IPs to be configured:
- The file should be named `prepare-vms/tags/<tag>/ips.txt`
- Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html` (a condensed sketch of this flow follows below)
- Run `./trainer stop TAG` to terminate instances.
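Condensed, the non-AWS flow from the list above looks roughly like this (a sketch; the IPs and the `myworkshop` tag are examples):

```bash
# Placeholder AWS variables (required even when not using AWS).
export AWS_ACCESS_KEY_ID="foo" AWS_SECRET_ACCESS_KEY="foo" AWS_DEFAULT_REGION="foo"

mkdir -p tags/myworkshop
printf '%s\n' 10.0.0.11 10.0.0.12 10.0.0.13 > tags/myworkshop/ips.txt  # one IP per line

./workshopctl deploy myworkshop settings/kube101.yaml   # configure the instances
./workshopctl kube myworkshop                           # optional: set up Kubernetes
./workshopctl kubetest myworkshop                       # optional: check nodes are Ready
./workshopctl cards myworkshop settings/kube101.yaml    # generate handout cards
```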
## Other Tools
@@ -178,6 +100,13 @@ az group delete --resource-group workshop
- Run `pcopykey`.
### Installing extra packages
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
## Even More Details
#### Sync of SSH keys
@@ -204,35 +133,31 @@ If you create new VMs, the symlinked file will be overwritten.
Instances can be deployed manually using the `deploy` command:
$ ./workshopctl deploy TAG settings/somefile.yaml
$ ./trainer deploy TAG settings/somefile.yaml
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.
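That copy-and-execute pattern boils down to two parallel-ssh invocations, shown here as a simplified sketch mirroring the `deploy` command that appears later in this diff:

```bash
# Simplified sketch of what `deploy` does on every VM listed in ips.txt.
pssh -I tee /tmp/postprep.py < lib/postprep.py     # copy the script to all VMs
pssh --timeout 900 --send-input \
     "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < ips.txt
```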
#### Pre-pull images
$ ./workshopctl pull_images TAG
$ ./trainer pull-images TAG
#### Generate cards
$ ./workshopctl cards TAG settings/somefile.yaml
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.
If you don't have `wkhtmltopdf` installed, you will get a warning that it is a missing dependency. If you plan to just print the HTML cards, you can ignore this.
$ ./trainer cards TAG settings/somefile.yaml
#### List tags
$ ./workshopctl list
$ ./trainer list
#### List VMs
$ ./workshopctl list TAG
$ ./trainer list TAG
This will print a human-friendly list containing some information about each instance.
#### Stop and destroy VMs
$ ./workshopctl stop TAG
$ ./trainer stop TAG
## ToDo


@@ -1,250 +0,0 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workshopName": {
"type": "string",
"defaultValue": "workshop",
"metadata": {
"description": "Workshop name."
}
},
"vmPrefix": {
"type": "string",
"defaultValue": "node",
"metadata": {
"description": "Prefix for VM names."
}
},
"numberOfInstances": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "Number of VMs to create."
}
},
"adminUsername": {
"type": "string",
"defaultValue": "ubuntu",
"metadata": {
"description": "Admin username for VMs."
}
},
"sshKeyData": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "SSH rsa public key file as a string."
}
},
"imagePublisher": {
"type": "string",
"defaultValue": "Canonical",
"metadata": {
"description": "OS image publisher; default Canonical."
}
},
"imageOffer": {
"type": "string",
"defaultValue": "UbuntuServer",
"metadata": {
"description": "The name of the image offer. The default is Ubuntu"
}
},
"imageSKU": {
"type": "string",
"defaultValue": "16.04-LTS",
"metadata": {
"description": "Version of the image. The default is 16.04-LTS"
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D1_v2",
"metadata": {
"description": "VM Size."
}
}
},
"variables": {
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
"subnet1Ref": "[concat(variables('vnetID'),'/subnets/',variables('subnet1Name'))]",
"vmName": "[parameters('vmPrefix')]",
"sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
"publicIPAddressName": "PublicIP",
"publicIPAddressType": "Dynamic",
"virtualNetworkName": "MyVNET",
"netSecurityGroup": "MyNSG",
"addressPrefix": "10.0.0.0/16",
"subnet1Name": "subnet-1",
"subnet1Prefix": "10.0.0.0/24",
"nicName": "myVMNic"
},
"resources": [
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/publicIPAddresses",
"name": "[concat(variables('publicIPAddressName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "publicIPLoop",
"count": "[parameters('numberOfInstances')]"
},
"properties": {
"publicIPAllocationMethod": "[variables('publicIPAddressType')]"
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('netSecurityGroup'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnet1Name')]",
"properties": {
"addressPrefix": "[variables('subnet1Prefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('netSecurityGroup'))]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[concat(variables('nicName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "nicLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'),copyIndex(1))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'), copyIndex(1)))]"
},
"subnet": {
"id": "[variables('subnet1Ref')]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-12-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(variables('vmName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "vmLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex(1))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[concat(variables('vmName'),copyIndex(1))]",
"adminUsername": "[parameters('adminUsername')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[variables('sshKeyPath')]",
"keyData": "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage"
},
"imageReference": {
"publisher": "[parameters('imagePublisher')]",
"offer": "[parameters('imageOffer')]",
"sku": "[parameters('imageSKU')]",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('nicName'),copyIndex(1)))]"
}
]
}
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "[variables('netSecurityGroup')]",
"location": "[resourceGroup().location]",
"tags": {
"workshop": "[parameters('workshopName')]"
},
"properties": {
"securityRules": [
{
"name": "default-open-ports",
"properties": {
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1000,
"direction": "Inbound"
}
}
]
}
}
],
"outputs": {
"resourceID": {
"type": "string",
"value": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'),'1'))]"
}
}
}


@@ -1,18 +0,0 @@
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"sshKeyData": {
"value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXTIl/M9oeSlcsC5Rfe+nZr4Jc4sl200pSw2lpdxlZ3xzeP15NgSSMJnigUrKUXHfqRQ+2wiPxEf0Odz2GdvmXvR0xodayoOQsO24AoERjeSBXCwqITsfp1bGKzMb30/3ojRBo6LBR6r1+lzJYnNCGkT+IQwLzRIpm0LCNz1j08PUI2aZ04+mcDANvHuN/hwi/THbLLp6SNWN43m9r02RcC6xlCNEhJi4wk4VzMzVbSv9RlLGST2ocbUHwmQ2k9OUmpzoOx73aQi9XNnEaFh2w/eIdXM75VtkT3mRryyykg9y0/hH8/MVmIuRIdzxHQqlm++DLXVH5Ctw6a4kS+ki7 workshop"
},
"workshopName": {
"value": "workshop"
},
"numberOfInstances": {
"value": 3
},
"vmSize": {
"value": "Standard_D1_v2"
}
}
}


@@ -1,106 +0,0 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "orchestration workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_swarm -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>


@@ -1,5 +0,0 @@
Put your initials in the first column to "claim" a cluster.
Initials{% for node in clusters[0] %} node{{ loop.index }}{% endfor %}
{% for cluster in clusters -%}
{%- for node in cluster %} {{ node|trim }}{% endfor %}
{% endfor %}


@@ -1,21 +0,0 @@
#!/bin/sh
if [ $(whoami) != ubuntu ]; then
echo "This script should be executed on a freshly deployed node,"
echo "with the 'ubuntu' user. Aborting."
exit 1
fi
if id docker; then
sudo userdel -r docker
fi
pip install --user awscli jinja2 pdfkit
sudo apt-get install -y wkhtmltopdf xvfb
tmux new-session \; send-keys "
[ -f ~/.ssh/id_rsa ] || ssh-keygen
eval \$(ssh-agent)
ssh-add
Xvfb :0 &
export DISPLAY=:0
mkdir -p ~/www
sudo docker run -d -p 80:80 -v \$HOME/www:/usr/share/nginx/html nginx
"


@@ -1,19 +1,26 @@
version: "2"
services:
workshopctl:
prepare-vms:
build: .
image: workshopctl
container_name: prepare-vms
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
- /etc/localtime:/etc/localtime:ro
- /tmp/.X11-unix:/tmp/.X11-unix
- $SSH_AUTH_DIRNAME:$SSH_AUTH_DIRNAME
- $PWD/:/root/prepare-vms/
environment:
SCRIPT_DIR: /root/prepare-vms
DISPLAY: ${DISPLAY}
SSH_AUTH_SOCK: ${SSH_AUTH_SOCK}
SSH_AGENT_PID: ${SSH_AGENT_PID}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_DEFAULT_OUTPUT: json
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
AWS_VPC_ID: ${AWS_VPC_ID}
USER: ${USER}
entrypoint: /root/prepare-vms/workshopctl
entrypoint: /root/prepare-vms/scripts/trainer-cli


@@ -1,76 +0,0 @@
# Abort if any error happens, and show the command that caused the error.
_ERR() {
error "Command $BASH_COMMAND failed (exit status: $?)"
}
set -eE
trap _ERR ERR
die() {
if [ -n "$1" ]; then
error "$1"
fi
exit 1
}
error() {
>/dev/stderr echo "[$(red ERROR)] $1"
}
warning() {
>/dev/stderr echo "[$(yellow WARNING)] $1"
}
info() {
>/dev/stderr echo "[$(green INFO)] $1"
}
# Print a full-width separator.
# If given an argument, will print it in the middle of that separator.
# If the argument is longer than the screen width, it will be printed between two separator lines.
sep() {
if [ -z "$COLUMNS" ]; then
COLUMNS=80
fi
SEP=$(yes = | tr -d "\n" | head -c $(($COLUMNS - 1)))
if [ -z "$1" ]; then
>/dev/stderr echo $SEP
else
MSGLEN=$(echo "$1" | wc -c)
if [ $(($MSGLEN + 4)) -gt $COLUMNS ]; then
>/dev/stderr echo "$SEP"
>/dev/stderr echo "$1"
>/dev/stderr echo "$SEP"
else
LEFTLEN=$((($COLUMNS - $MSGLEN - 2) / 2))
RIGHTLEN=$(($COLUMNS - $MSGLEN - 2 - $LEFTLEN))
LEFTSEP=$(echo $SEP | head -c $LEFTLEN)
RIGHTSEP=$(echo $SEP | head -c $RIGHTLEN)
>/dev/stderr echo "$LEFTSEP $1 $RIGHTSEP"
fi
fi
}
need_tag() {
if [ -z "$1" ]; then
die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
fi
}
need_settings() {
if [ -z "$1" ]; then
die "Please specify a settings file."
elif [ ! -f "$1" ]; then
die "Settings file $1 doesn't exist."
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
die "IPS_FILE not set."
fi
if [ ! -s "$IPS_FILE" ]; then
die "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
fi
}


@@ -1,15 +0,0 @@
bold() {
echo "$(tput bold)$1$(tput sgr0)"
}
red() {
echo "$(tput setaf 1)$1$(tput sgr0)"
}
green() {
echo "$(tput setaf 2)$1$(tput sgr0)"
}
yellow() {
echo "$(tput setaf 3)$1$(tput sgr0)"
}


@@ -1,562 +0,0 @@
export AWS_DEFAULT_OUTPUT=text
HELP=""
_cmd() {
HELP="$(printf "%s\n%-12s %s\n" "$HELP" "$1" "$2")"
}
_cmd help "Show available commands"
_cmd_help() {
printf "$(basename $0) - the orchestration workshop swiss army knife\n"
printf "Commands:"
printf "%s" "$HELP" | sort
}
_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}
_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
_cmd build "Build the Docker image to run this program in a container"
_cmd_build() {
docker-compose build
}
_cmd wrap "Run this program in a container"
_cmd_wrap() {
docker-compose run --rm workshopctl "$@"
}
_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd_cards() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
# If you're not using AWS, populate the ips.txt file manually
if [ ! -f tags/$TAG/ips.txt ]; then
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
fi
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
info "Cards created. You can view them with:"
info "xdg-open ips.html ips.pdf (on Linux)"
info "open ips.html ips.pdf (on MacOS)"
}
_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
link_tag $TAG
count=$(wc -l < ips.txt)
# wait until all hosts are reachable before trying to deploy
info "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
>/dev/stderr echo -n "."
sleep 2
done
>/dev/stderr echo ""
sep "Deploying tag $TAG"
pssh -I tee /tmp/settings.yaml <$SETTINGS
pssh "
sudo apt-get update &&
sudo apt-get install -y python-setuptools &&
sudo easy_install pyyaml"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/postprep.py <lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <ips.txt
# Install docker-prompt script
pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
pssh sudo chmod +x /usr/local/bin/docker-prompt
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
pssh "
sudo -u docker [ -f /home/docker/.ssh/id_rsa ] ||
ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh |
sudo -u docker tar -C /home/docker -xf-"
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
pssh "
grep docker@ /home/docker/.ssh/authorized_keys ||
cat /home/docker/.ssh/id_rsa.pub |
sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
# On node1, create and deploy TLS certs using Docker Machine
# (Currently disabled.)
true || pssh "
if grep -q node1 /tmp/node; then
grep ' node' /etc/hosts |
xargs -n2 sudo -H -u docker \
docker-machine create -d generic --generic-ssh-user docker --generic-ip-address
fi"
sep "Deployed tag $TAG"
info "You may want to run one of the following commands:"
info "$0 kube $TAG"
info "$0 pull_images $TAG"
info "$0 cards $TAG $SETTINGS"
}
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
# Install packages
pssh --timeout 200 "
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
sudo apt-key add - &&
echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Initialize kube master
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token
sudo kubeadm init --token \$(cat /tmp/token)
fi"
# Put kubeconfig in ubuntu's and docker's accounts
pssh "
if grep -q node1 /tmp/node; then
sudo mkdir -p \$HOME/.kube /home/docker/.kube &&
sudo cp /etc/kubernetes/admin.conf \$HOME/.kube/config &&
sudo cp /etc/kubernetes/admin.conf /home/docker/.kube/config &&
sudo chown -R \$(id -u) \$HOME/.kube &&
sudo chown -R docker /home/docker/.kube
fi"
# Install weave as the pod network
pssh "
if grep -q node1 /tmp/node; then
kubever=\$(kubectl version | base64 | tr -d '\n')
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
fi"
# Join the other nodes to the cluster
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
fi"
# Install stern
pssh "
if [ ! -x /usr/local/bin/stern ]; then
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
stern --completion bash | sudo tee /etc/bash_completion.d/stern
fi"
# Install helm
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
sep "Done"
}
_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
# There are way too many backslashes in the command below.
# Feel free to make that better ♥
pssh "
set -e
[ -f /tmp/node ]
if grep -q node1 /tmp/node; then
which kubectl
for NODE in \$(awk /\ node/\ {print\ \\\$2} /etc/hosts); do
echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
done
fi"
}
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
need_tag $TAG
info "Looking up by tag:"
aws_get_instance_ids_by_tag $TAG
# Just in case we managed to create instances but weren't able to tag them
info "Looking up by token:"
aws_get_instance_ids_by_client_token $TAG
}
_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
TAG=$1
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
}
_cmd list "List available batches in the current region"
_cmd_list() {
info "Listing batches in region $AWS_DEFAULT_REGION:"
aws_display_tags
}
_cmd status "List instance status for a given batch"
_cmd_status() {
info "Using region $AWS_DEFAULT_REGION."
TAG=$1
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
info "You may be interested in running one of the following commands:"
info "$0 ips $TAG"
info "$0 deploy $TAG <settings/somefile.yaml>"
}
_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
}
_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
TAG=$1
need_tag $TAG
pull_tag $TAG
}
_cmd retag "Apply a new tag to a batch of VMs"
_cmd_retag() {
OLDTAG=$1
NEWTAG=$2
need_tag $OLDTAG
if [[ -z "$NEWTAG" ]]; then
die "You must specify a new tag to apply."
fi
aws_tag_instances $OLDTAG $NEWTAG
}
_cmd start "Start a batch of VMs"
_cmd_start() {
# Number of instances to create
COUNT=$1
# Optional settings file (to carry on with deployment)
SETTINGS=$2
if [ -z "$COUNT" ]; then
die "Indicate number of instances to start."
fi
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(_cmd_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
fi
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TOKEN"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
TAG=$TOKEN
aws_tag_instances $TOKEN $TAG
wait_until_tag_is_running $TAG $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
mkdir -p tags/$TAG
IPS=$(aws_get_instance_ips_by_tag $TAG)
echo "$IPS" >tags/$TAG/ips.txt
link_tag $TAG
if [ -n "$SETTINGS" ]; then
_cmd_deploy $TAG $SETTINGS
else
info "To deploy or kill these instances, run one of the following:"
info "$0 deploy $TAG <settings/somefile.yaml>"
info "$0 stop $TAG"
fi
}
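# Example invocation (hypothetical settings file name; the entry point is
# the trainer script, per the hint printed by need_ips_file):
#
# ./trainer start 50 settings/somefile.yaml # create 50 VMs and deploy them
# ./trainer start 50 # or create now, deploy later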
_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}
_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
TAG=$1
need_tag $TAG
aws_kill_instances_by_tag $TAG
}
_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd_test() {
TAG=$1
need_tag $TAG
test_tag $TAG
}
###
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag() {
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
die "Nonexistent or empty IPs file $IPS_FILE."
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
info "Finished pulling images for $TAG."
info "You may now want to run:"
info "$0 cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
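# The hand-rolled loop above gives a live progress count; if that is not
# needed, the AWS CLI has a built-in waiter that polls with back-off:
#
# aws ec2 wait instance-running --filters "Name=tag:Name,Values=$TAG"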
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true 2>&1 >/dev/null
}
test_tag() {
TAG=$1
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
n=$((1 + $RANDOM % $(wc -l <$ips_file)))
ip=$(head -n $n $ips_file | tail -n 1)
test_vm $ip
info "Tests complete."
}
test_vm() {
ip=$1
info "Testing instance with IP address $ip."
user=ubuntu
errors=""
for cmd in "hostname" \
"whoami" \
"hostname -i" \
"cat /tmp/node" \
"cat /tmp/ipv4" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \
"docker-compose version" \
"docker-machine version" \
"docker images" \
"docker ps" \
"curl --silent localhost:55555" \
"sudo ls -la /mnt/ | grep docker" \
"env" \
"ls -la /home/docker/.ssh"; do
sep "$cmd"
echo "$cmd" \
| ssh -A -q \
-o "UserKnownHostsFile /dev/null" \
-o "StrictHostKeyChecking=no" \
$user@$ip sudo -u docker -i \
|| {
status=$?
error "$cmd exit status: $status"
errors="[$status] $cmd\n$errors"
}
done
sep
if [ -n "$errors" ]; then
error "The following commands had non-zero exit codes:"
printf "$errors"
fi
info "Test VM was $ip."
}
make_key_name() {
SHORT_FINGERPRINT=$(ssh-add -l | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
echo "${SHORT_FINGERPRINT}-${USER}"
}
sync_keys() {
# make sure ssh-add -l contains "RSA"
ssh-add -l | grep -q RSA \
|| die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent, add your keys?"
AWS_KEY_NAME=$(make_key_name)
info "Syncing keys... "
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
--public-key-material "$(ssh-add -L \
| grep -i RSA \
| head -n1 \
| cut -d " " -f 1-2)" &>/dev/null
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
else
info "Imported new key $AWS_KEY_NAME."
fi
else
info "Using existing key $AWS_KEY_NAME."
fi
}
get_token() {
if [ -z "$USER" ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
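# Tokens double as tags. Example (hypothetical user and time):
#
# $ date +%Y-%m-%d-%H-%M-$USER
# 2017-06-12-17-41-jerome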
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}


@@ -1,21 +0,0 @@
#!/bin/sh
case "$DOCKER_HOST" in
*:3376)
echo swarm
;;
*:2376)
echo $DOCKER_MACHINE_NAME
;;
*:2375)
echo $DOCKER_MACHINE_NAME
;;
*:55555)
echo $DOCKER_MACHINE_NAME
;;
"")
echo local
;;
*)
echo unknown
;;
esac


@@ -1,174 +0,0 @@
# borrowed from https://gist.github.com/kirikaza/6627072
# The original script has been wrapped in a function that invokes a subshell.
# That way, it can be safely invoked as a function from other scripts.
find_ubuntu_ami() {
(
usage() {
cat >&2 <<__
usage: find-ubuntu-ami.sh [ <filter>... ] [ <sorting> ] [ <options> ]
where:
<filter> is a pair of key and substring to search
-r <region>
-n <name>
-v <version>
-a <arch>
-t <type>
-d <date>
-i <image>
-k <kernel>
<sorting> is one of:
-R by region
-N by name
-V by version
-A by arch
-T by type
-D by date
-I by image
-K by kernel
<options> can be:
-q just show AMI
protip for Docker orchestration workshop admin:
./find-ubuntu-ami.sh -t hvm:ebs -r \$AWS_REGION -v 15.10 -N
__
exit 1
}
args=$(getopt hr:n:v:a:t:d:i:k:RNVATDIKq $*)
if [ $? != 0 ]; then
echo >&2
usage
fi
region=
name=
version=
arch=
type=
date=
image=
kernel=
sort=date
quiet=
set -- $args
for a; do
case "$a" in
-h) usage ;;
-r)
region=$2
shift
;;
-n)
name=$2
shift
;;
-v)
version=$2
shift
;;
-a)
arch=$2
shift
;;
-t)
type=$2
shift
;;
-d)
date=$2
shift
;;
-i)
image=$2
shift
;;
-k)
kernel=$2
shift
;;
-R) sort=region ;;
-N) sort=name ;;
-V) sort=version ;;
-A) sort=arch ;;
-T) sort=type ;;
-D) sort=date ;;
-I) sort=image ;;
-K) sort=kernel ;;
-q) quiet=y ;;
--)
shift
break
;;
*) continue ;;
esac
shift
done
[ $# = 0 ] || usage
fix_json() {
tr -d \\n | sed 's/,]}/]}/'
}
jq_query() {
cat <<__
.aaData | map (
{
region: .[0],
name: .[1],
version: .[2],
arch: .[3],
type: .[4],
date: .[5],
image: .[6],
kernel: .[7]
} | select (
(.region | contains("$region")) and
(.name | contains("$name")) and
(.version | contains("$version")) and
(.arch | contains("$arch")) and
(.type | contains("$type")) and
(.date | contains("$date")) and
(.image | contains("$image</a>")) and
(.kernel | contains("$kernel"))
)
) | sort_by(.$sort) | .[] |
"\(.region)|\(.name)|\(.version)|\(.arch)|\(.type)|\(.date)|\(.image)|\(.kernel)"
__
}
trim_quotes() {
sed 's/^"//;s/"$//'
}
escape_spaces() {
sed 's/ /\\\ /g'
}
url=http://cloud-images.ubuntu.com/locator/ec2/releasesTable
{
[ "$quiet" ] || echo REGION NAME VERSION ARCH TYPE DATE IMAGE KERNEL
curl -s $url | fix_json | jq "$(jq_query)" | trim_quotes | escape_spaces | tr \| ' '
} \
| while read region name version arch type date image kernel; do
image=${image%<*}
image=${image#*>}
if [ "$quiet" ]; then
echo $image
else
echo "$region|$name|$version|$arch|$type|$date|$image|$kernel"
fi
done | column -t -s \|
)
}


@@ -1,51 +0,0 @@
#!/usr/bin/env python
import os
import sys
import yaml
import jinja2
def prettify(l):
l = [ip.strip() for ip in l]
ret = [ "node{}: <code>{}</code>".format(i+1, s) for (i, s) in zip(range(len(l)), l) ]
return ret
# Read settings from user-provided settings file
SETTINGS = yaml.load(open(sys.argv[1]))
clustersize = SETTINGS["clustersize"]
ips = list(open("ips.txt"))
print("---------------------------------------------")
print(" Number of IPs: {}".format(len(ips)))
print(" VMs per cluster: {}".format(clustersize))
print("---------------------------------------------")
assert len(ips)%clustersize == 0
clusters = []
while ips:
cluster = ips[:clustersize]
ips = ips[clustersize:]
clusters.append(cluster)
template_file_name = SETTINGS["cards_template"]
template = jinja2.Template(open(template_file_name).read())
with open("ips.html", "w") as f:
f.write(template.render(clusters=clusters, **SETTINGS))
print("Generated ips.html")
try:
import pdfkit
with open("ips.html") as f:
pdfkit.from_file(f, "ips.pdf", options={
"page-size": SETTINGS["paper_size"],
"margin-top": SETTINGS["paper_margin"],
"margin-bottom": SETTINGS["paper_margin"],
"margin-left": SETTINGS["paper_margin"],
"margin-right": SETTINGS["paper_margin"],
})
print("Generated ips.pdf")
except ImportError:
print("WARNING: could not import pdfkit; did not generate ips.pdf")


@@ -1,23 +0,0 @@
# This file can be sourced in order to directly run commands on
# a batch of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.
pssh() {
HOSTFILE="ips.txt"
[ -f $HOSTFILE ] || {
>/dev/stderr echo "No hostfile found at $HOSTFILE"
return
}
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
$PSSH -h $HOSTFILE -l ubuntu \
--par 100 \
-O LogLevel=ERROR \
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
"$@"
}


@@ -1,4 +0,0 @@
#!/bin/sh
pidof Xvfb || Xvfb -terminate &
export DISPLAY=:0
exec wkhtmltopdf.real "$@"


@@ -0,0 +1,70 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 16.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="466px" height="423.891px" viewBox="0 0 466 423.891" enable-background="new 0 0 466 423.891" xml:space="preserve">
<rect x="-43.5" y="37.005" display="none" fill-rule="evenodd" clip-rule="evenodd" fill="#E7E7E7" width="792" height="612"/>
<g>
<path id="outline_7_" fill-rule="evenodd" clip-rule="evenodd" d="M242.133,168.481h47.146v48.194h23.837
c11.008,0,22.33-1.962,32.755-5.494c5.123-1.736,10.872-4.154,15.926-7.193c-6.656-8.689-10.053-19.661-11.054-30.476
c-1.358-14.71,1.609-33.855,11.564-45.368l4.956-5.732l5.905,4.747c14.867,11.946,27.372,28.638,29.577,47.665
c17.901-5.266,38.921-4.02,54.701,5.088l6.475,3.734l-3.408,6.652c-13.345,26.046-41.246,34.113-68.524,32.687
c-40.817,101.663-129.68,149.794-237.428,149.794c-55.666,0-106.738-20.81-135.821-70.197l-0.477-0.807l-4.238-8.621
C4.195,271.415,0.93,247.6,3.145,223.803l0.664-7.127h40.315v-48.194h47.143v-47.145h94.292V74.191h56.574V168.481z"/>
<g display="none">
<path display="inline" fill="#394D54" d="M61.093,319.89c6.023,0,11.763-0.157,17.219-0.464c0.476-0.026,0.932-0.063,1.402-0.092
c0.005-0.002,0.008-0.002,0.012-0.002c13.872-0.855,25.876-2.708,35.902-5.57c0.002-0.002,0.004-0.002,0.006-0.002
c1.823-0.521,3.588-1.07,5.282-1.656c1.894-0.657,2.896-2.725,2.241-4.618c-0.656-1.895-2.722-2.899-4.618-2.24
c-12.734,4.412-29.535,6.842-50.125,7.298c-0.002,0-0.004,0-0.005,0c-10.477,0.232-21.93-0.044-34.352-0.843c0,0,0,0-0.001,0
c-0.635-0.038-1.259-0.075-1.9-0.118c-1.995-0.128-3.731,1.374-3.869,3.375c-0.136,1.999,1.376,3.73,3.375,3.866
c2.537,0.173,5.03,0.321,7.49,0.453c0.392,0.021,0.77,0.034,1.158,0.054l0,0C47.566,319.697,54.504,319.89,61.093,319.89z"/>
</g>
<g id="Containers_8_">
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M86.209,179.744h3.227v34.052h-3.227V179.744z M80.02,179.744
h3.354v34.052H80.02V179.744z M73.828,179.744h3.354v34.052h-3.354V179.744z M67.636,179.744h3.354v34.052h-3.354V179.744z
M61.446,179.744H64.8v34.052h-3.354V179.744z M55.384,179.744h3.224v34.052h-3.224V179.744z M51.981,176.338h40.858v40.86H51.981
V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M133.354,132.598h3.229v34.051h-3.229V132.598z M127.165,132.598
h3.354v34.051h-3.354V132.598z M120.973,132.598h3.354v34.051h-3.354V132.598z M114.781,132.598h3.354v34.051h-3.354V132.598z
M108.593,132.598h3.352v34.051h-3.352V132.598z M102.531,132.598h3.222v34.051h-3.222V132.598z M99.124,129.193h40.863v40.859
H99.124V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M133.354,179.744h3.229v34.052h-3.229V179.744z M127.165,179.744
h3.354v34.052h-3.354V179.744z M120.973,179.744h3.354v34.052h-3.354V179.744z M114.781,179.744h3.354v34.052h-3.354V179.744z
M108.593,179.744h3.352v34.052h-3.352V179.744z M102.531,179.744h3.222v34.052h-3.222V179.744z M99.124,176.338h40.863v40.86
H99.124V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M180.501,179.744h3.225v34.052h-3.225V179.744z M174.31,179.744
h3.355v34.052h-3.355V179.744z M168.12,179.744h3.354v34.052h-3.354V179.744z M161.928,179.744h3.354v34.052h-3.354V179.744z
M155.736,179.744h3.354v34.052h-3.354V179.744z M149.676,179.744h3.222v34.052h-3.222V179.744z M146.271,176.338h40.861v40.86
h-40.861V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M180.501,132.598h3.225v34.051h-3.225V132.598z M174.31,132.598
h3.355v34.051h-3.355V132.598z M168.12,132.598h3.354v34.051h-3.354V132.598z M161.928,132.598h3.354v34.051h-3.354V132.598z
M155.736,132.598h3.354v34.051h-3.354V132.598z M149.676,132.598h3.222v34.051h-3.222V132.598z M146.271,129.193h40.861v40.859
h-40.861V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,179.744h3.226v34.052h-3.226V179.744z M221.457,179.744
h3.354v34.052h-3.354V179.744z M215.265,179.744h3.354v34.052h-3.354V179.744z M209.073,179.744h3.354v34.052h-3.354V179.744z
M202.884,179.744h3.354v34.052h-3.354V179.744z M196.821,179.744h3.224v34.052h-3.224V179.744z M193.416,176.338h40.861v40.86
h-40.861V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,132.598h3.226v34.051h-3.226V132.598z M221.457,132.598
h3.354v34.051h-3.354V132.598z M215.265,132.598h3.354v34.051h-3.354V132.598z M209.073,132.598h3.354v34.051h-3.354V132.598z
M202.884,132.598h3.354v34.051h-3.354V132.598z M196.821,132.598h3.224v34.051h-3.224V132.598z M193.416,129.193h40.861v40.859
h-40.861V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,85.451h3.226v34.053h-3.226V85.451z M221.457,85.451
h3.354v34.053h-3.354V85.451z M215.265,85.451h3.354v34.053h-3.354V85.451z M209.073,85.451h3.354v34.053h-3.354V85.451z
M202.884,85.451h3.354v34.053h-3.354V85.451z M196.821,85.451h3.224v34.053h-3.224V85.451z M193.416,82.048h40.861v40.86h-40.861
V82.048z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M274.792,179.744h3.224v34.052h-3.224V179.744z M268.602,179.744
h3.352v34.052h-3.352V179.744z M262.408,179.744h3.354v34.052h-3.354V179.744z M256.218,179.744h3.354v34.052h-3.354V179.744z
M250.026,179.744h3.354v34.052h-3.354V179.744z M243.964,179.744h3.227v34.052h-3.227V179.744z M240.561,176.338h40.86v40.86
h-40.86V176.338z"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M137.428,283.445c6.225,0,11.271,5.049,11.271,11.272
c0,6.225-5.046,11.271-11.271,11.271c-6.226,0-11.272-5.046-11.272-11.271C126.156,288.494,131.202,283.445,137.428,283.445"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M137.428,286.644c1.031,0,2.015,0.194,2.923,0.546
c-0.984,0.569-1.65,1.635-1.65,2.854c0,1.82,1.476,3.293,3.296,3.293c1.247,0,2.329-0.693,2.89-1.715
c0.395,0.953,0.615,1.999,0.615,3.097c0,4.458-3.615,8.073-8.073,8.073c-4.458,0-8.074-3.615-8.074-8.073
C129.354,290.258,132.971,286.644,137.428,286.644"/>
<path fill="#FFFFFF" d="M167.394,364.677c-27.916-13.247-43.239-31.256-51.765-50.915c-10.37,2.961-22.835,4.852-37.317,5.664
c-5.457,0.307-11.196,0.464-17.219,0.464c-6.942,0-14.26-0.205-21.94-0.613c25.6,25.585,57.094,45.283,115.408,45.645
C158.866,364.921,163.14,364.837,167.394,364.677z"/>
</g>
</svg>

prepare-vms/media/swarm.png Normal file

Binary file not shown.
47
prepare-vms/lib/aws.sh → prepare-vms/scripts/aws.sh Normal file → Executable file

@@ -1,28 +1,32 @@
aws_display_tags() {
#!/bin/bash
source scripts/cli.sh
aws_display_tags(){
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
| awk '{ printf " %7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| awk '{ printf " %-12s %-25s %-25s\n", $1, $2, $3}' \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
| sort -k 3
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ' )
aws ec2 describe-instance-status \
--instance-ids $IDS \
@@ -34,20 +38,22 @@ aws_display_instances_by_tag() {
TAG=$1
need_tag $TAG
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
)
if [[ -z $result ]]; then
echo "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "ID State Tags IP Type" \
| awk '{ printf "%9s %12s %15s %20s %14s \n", $1, $2, $3, $4, $5}' # column -t -c 70}
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
@@ -57,6 +63,7 @@ aws_get_instance_ids_by_filter() {
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
@@ -75,8 +82,8 @@ aws_get_instance_ips_by_tag() {
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
@@ -87,12 +94,10 @@ aws_kill_instances_by_tag() {
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
echo "Deleting instances with tag $TAG"
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {

30
prepare-vms/scripts/cli.sh Executable file

@@ -0,0 +1,30 @@
die () {
if [ -n "$1" ]; then
>&2 echo -n $(tput setaf 1)
>&2 echo -e "$1"
>&2 echo -n $(tput sgr0)
fi
exit 1
}
need_tag(){
TAG=$1
if [ -z "$TAG" ]; then
echo "Please specify a tag or token. Here's the list: "
aws_display_tags
die
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
echo "IPS_FILE not set."
die
fi
if [ ! -s "$IPS_FILE" ]; then
echo "IPS_FILE $IPS_FILE not found. Please run: trainer ips <TAG>"
die
fi
}


@@ -0,0 +1,15 @@
bold() {
msg=$1
echo "$(tput bold)$1$(tput sgr0)"
}
green() {
msg=$1
echo "$(tput setaf 2)$1$(tput sgr0)"
}
yellow(){
msg=$1
echo "$(tput setaf 3)$1$(tput sgr0)"
}
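# Usage sketch (this file's path is not shown in the diff; assuming it is
# sourced by the main script):
#
# bold "Deploying instances..."
# yellow "$(bold 'Warning: the default security group is wide open.')"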


@@ -0,0 +1,142 @@
#!/bin/bash
# borrowed from https://gist.github.com/kirikaza/6627072
usage() {
cat >&2 <<__
usage: find-ubuntu-ami.sh [ <filter>... ] [ <sorting> ] [ <options> ]
where:
<filter> is a pair of key and substring to search
-r <region>
-n <name>
-v <version>
-a <arch>
-t <type>
-d <date>
-i <image>
-k <kernel>
<sorting> is one of:
-R by region
-N by name
-V by version
-A by arch
-T by type
-D by date
-I by image
-K by kernel
<options> can be:
-q just show AMI
protip for Docker orchestration workshop admin:
./find-ubuntu-ami.sh -t hvm:ebs -r \$AWS_REGION -v 15.10 -N
__
exit 1
}
args=`getopt hr:n:v:a:t:d:i:k:RNVATDIKq $*`
if [ $? != 0 ] ; then
echo >&2
usage
fi
region=
name=
version=
arch=
type=
date=
image=
kernel=
sort=date
quiet=
set -- $args
for a ; do
case "$a" in
-h) usage ;;
-r) region=$2 ; shift ;;
-n) name=$2 ; shift ;;
-v) version=$2 ; shift ;;
-a) arch=$2 ; shift ;;
-t) type=$2 ; shift ;;
-d) date=$2 ; shift ;;
-i) image=$2 ; shift ;;
-k) kernel=$2 ; shift ;;
-R) sort=region ;;
-N) sort=name ;;
-V) sort=version ;;
-A) sort=arch ;;
-T) sort=type ;;
-D) sort=date ;;
-I) sort=image ;;
-K) sort=kernel ;;
-q) quiet=y ;;
--) shift ; break ;;
*) continue ;;
esac
shift
done
[ $# = 0 ] || usage
fix_json() {
tr -d \\n | sed 's/,]}/]}/'
}
jq_query() { cat <<__
.aaData | map (
{
region: .[0],
name: .[1],
version: .[2],
arch: .[3],
type: .[4],
date: .[5],
image: .[6],
kernel: .[7]
} | select (
(.region | contains("$region")) and
(.name | contains("$name")) and
(.version | contains("$version")) and
(.arch | contains("$arch")) and
(.type | contains("$type")) and
(.date | contains("$date")) and
(.image | contains("$image</a>")) and
(.kernel | contains("$kernel"))
)
) | sort_by(.$sort) | .[] |
"\(.region)|\(.name)|\(.version)|\(.arch)|\(.type)|\(.date)|\(.image)|\(.kernel)"
__
}
trim_quotes() {
sed 's/^"//;s/"$//'
}
escape_spaces() {
sed 's/ /\\\ /g'
}
url=http://cloud-images.ubuntu.com/locator/ec2/releasesTable
{
[ "$quiet" ] || echo REGION NAME VERSION ARCH TYPE DATE IMAGE KERNEL
curl -s $url | fix_json | jq "`jq_query`" | trim_quotes | escape_spaces | tr \| ' '
} |
while read region name version arch type date image kernel ; do
image=${image%<*}
image=${image#*>}
if [ "$quiet" ]; then
echo $image
else
echo "$region|$name|$version|$arch|$type|$date|$image|$kernel"
fi
done | column -t -s \|


@@ -0,0 +1,120 @@
#!/usr/bin/env python
import os
import sys
import yaml
try:
import pdfkit
except ImportError:
print("WARNING: could not import pdfkit; PDF generation will fali.")
def prettify(l):
l = [ip.strip() for ip in l]
ret = [ "node{}: <code>{}</code>".format(i+1, s) for (i, s) in zip(range(len(l)), l) ]
return ret
# Read settings from user-provided settings file
with open(sys.argv[1]) as f:
data = f.read()
SETTINGS = yaml.load(data)
SETTINGS['footer'] = SETTINGS['footer'].format(url=SETTINGS['url'])
globals().update(SETTINGS)
###############################################################################
ips = list(open("ips.txt"))
print("Current settings (as defined in settings.yaml):")
print(" Number of IPs: {}".format(len(ips)))
print(" VMs per cluster: {}".format(clustersize))
print("Background image: {}".format(background_image))
print("---------------------------------------------")
assert len(ips)%clustersize == 0
if clustersize == 1:
blurb = blurb.format(
cluster_or_machine="machine",
this_or_each="this",
machine_is_or_machines_are="machine is",
workshop_name=workshop_short_name,
)
else:
blurb = blurb.format(
cluster_or_machine="cluster",
this_or_each="each",
machine_is_or_machines_are="machines are",
workshop_name=workshop_short_name,
)
clusters = []
while ips:
cluster = ips[:clustersize]
ips = ips[clustersize:]
clusters.append(cluster)
html = open("ips.html", "w")
html.write("<html><head><style>")
head = """
div {{
float:left;
border: 1px dotted black;
width: 27%;
padding: 6% 2.5% 2.5% 2.5%;
font-size: x-small;
background-image: url("{background_image}");
background-size: 13%;
background-position-x: 50%;
background-position-y: 5%;
background-repeat: no-repeat;
}}
p {{
margin: 0.5em 0 0.5em 0;
}}
.pagebreak {{
page-break-before: always;
clear: both;
display: block;
height: 8px;
}}
"""
head = head.format(background_image=SETTINGS['background_image'])
html.write(head)
html.write("</style></head><body>")
for i, cluster in enumerate(clusters):
if i>0 and i%pagesize==0:
html.write('<span class="pagebreak"></span>\n')
html.write("<div>")
html.write(blurb)
for s in prettify(cluster):
html.write("<li>%s</li>\n"%s)
html.write("</ul></p>")
html.write("<p>login: <b><code>{}</code></b> <br>password: <b><code>{}</code></b></p>\n".format(instance_login, instance_password))
html.write(footer)
html.write("</div>")
html.close()
"""
html.write("<div>")
html.write("<p>{}</p>".format(blurb))
for s in prettify(cluster):
html.write("<li>{}</li>".format(s))
html.write("</ul></p>")
html.write("<center>")
html.write("<p>login: <b><code>{}</code></b> &nbsp&nbsp password: <b><code>{}</code></b></p>\n".format(instance_login, instance_password))
html.write("</center>")
html.write(footer)
html.write("</div>")
html.close()
"""
with open('ips.html') as f:
pdfkit.from_file(f, 'ips.pdf')
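# Typical invocation (script name assumed; ips.txt must sit in the current
# directory, as the open("ips.txt") above requires):
#
# python ips-txt-to-html.py settings/somefile.yaml # writes ips.html + ips.pdf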


@@ -1,3 +1,10 @@
pssh -I tee /tmp/settings.yaml < $SETTINGS
pssh sudo apt-get update
pssh sudo apt-get install -y python-setuptools
pssh sudo easy_install pyyaml
pssh -I tee /tmp/postprep.py <<EOF
#!/usr/bin/env python
import os
import platform
@@ -11,9 +18,9 @@ import yaml
config = yaml.load(open("/tmp/settings.yaml"))
COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
SWARM_VERSION = config["swarm_version"]
CLUSTER_SIZE = config["clustersize"]
ENGINE_VERSION = config["engine_version"]
DOCKER_USER_PASSWORD = config["docker_user_password"]
#################################
@@ -32,21 +39,18 @@ def system(cmd):
t1 = time.time()
f.write(bold("--- RUNNING [step {}] ---> {}...".format(STEP, cmd)))
retcode = os.system(cmd)
if retcode:
retcode = bold(retcode)
t2 = time.time()
td = str(t2-t1)[:5]
f.write(bold("[{}] in {}s\n".format(retcode, td)))
f.write("[{}] in {}s\n".format(retcode, td))
STEP += 1
with open("/home/ubuntu/.bash_history", "a") as f:
f.write("{}\n".format(cmd))
if retcode != 0:
msg = "The following command failed with exit code {}:\n".format(retcode)
msg+= cmd
raise(Exception(msg))
# On EC2, the ephemeral disk might be mounted on /mnt.
# If /mnt is a mountpoint, place Docker workspace on it.
system("if mountpoint -q /mnt; then sudo mkdir -p /mnt/docker && sudo ln -sfn /mnt/docker /var/lib/docker; fi")
system("if mountpoint -q /mnt; then sudo mkdir /mnt/docker && sudo ln -s /mnt/docker /var/lib/docker; fi")
# Put our public IP in /tmp/ipv4
# ipv4_retrieval_endpoint = "http://169.254.169.254/latest/meta-data/public-ipv4"
@@ -55,13 +59,39 @@ system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
ipv4 = open("/tmp/ipv4").read()
# Add a "docker" user with password coming from the settings
system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:{} | sudo chpasswd".format(DOCKER_USER_PASSWORD))
# Add a "docker" user with password "training"
system("sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:training | sudo chpasswd")
# Helper for Docker prompt.
system("""sudo tee /usr/local/bin/docker-prompt <<SQRL
#!/bin/sh
case "\\\$DOCKER_HOST" in
*:3376)
echo swarm
;;
*:2376)
echo \\\$DOCKER_MACHINE_NAME
;;
*:2375)
echo \\\$DOCKER_MACHINE_NAME
;;
*:55555)
echo \\\$DOCKER_MACHINE_NAME
;;
"")
echo local
;;
*)
echo unknown
;;
esac
SQRL""")
system("sudo chmod +x /usr/local/bin/docker-prompt")
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
export PS1='\e[1m\e[31m[{}] \e[32m(\\$(docker-prompt)) \e[34m\u@\h\e[35m \w\e[0m\n$ '
export PS1='\e[1m\e[31m[\h] \e[32m(\\\$(docker-prompt)) \e[34m\u@{}\e[35m \w\e[0m\n$ '
SQRL""".format(ipv4))
# Custom .vimrc
@@ -85,6 +115,9 @@ system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq python-pip")
# increase the size of the conntrack table so we don't blow it up when going crazy with http load testing
system("echo 1000000 | sudo tee /proc/sys/net/nf_conntrack_max")
#######################
### DOCKER INSTALLS ###
#######################
@@ -101,20 +134,22 @@ system("sudo apt-get -qy install docker-ce")
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
system("docker-compose version")
### Install docker-machine
system("sudo curl -sSL -o /usr/local/bin/docker-machine https://github.com/docker/machine/releases/download/v{}/docker-machine-{}-{}".format(MACHINE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)
system("while ! sudo -u docker docker version ; do sleep 2; done")
### Install Swarm
#system("docker pull swarm:{}".format(SWARM_VERSION))
#system("docker tag -f swarm:{} swarm".format(SWARM_VERSION))
### BEGIN CLUSTERING ###
addresses = list(l.strip() for l in sys.stdin)
@@ -136,9 +171,7 @@ while addresses:
print(cluster)
mynode = cluster.index(ipv4) + 1
system("echo node{} | sudo -u docker tee /tmp/node".format(mynode))
system("echo node{} | sudo tee /etc/hostname".format(mynode))
system("sudo hostname node{}".format(mynode))
system("echo 'node{}' | sudo -u docker tee /tmp/node".format(mynode))
system("sudo -u docker mkdir -p /home/docker/.ssh")
system("sudo -u docker touch /home/docker/.ssh/authorized_keys")
@@ -149,3 +182,25 @@ while addresses:
FINISH = time.time()
duration = "Initial deployment took {}s".format(str(FINISH - START)[:5])
system("echo {}".format(duration))
EOF
IPS_FILE=ips.txt
if [ ! -s $IPS_FILE ]; then
echo "ips.txt not found."
exit 1
fi
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < $IPS_FILE
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
pssh "sudo -u docker [ -f /home/docker/.ssh/id_rsa ] || ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh | sudo -u docker tar -C /home/docker -xf-"
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
pssh "grep docker@ /home/docker/.ssh/authorized_keys \
|| cat /home/docker/.ssh/id_rsa.pub \
| sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
# On node1, create and deploy TLS certs using Docker Machine
#pssh "if grep -q node1 /tmp/node; then grep ' node' /etc/hosts | xargs -n2 sudo -H -u docker docker-machine create -d generic --generic-ssh-user docker --generic-ip-address; fi"

23
prepare-vms/scripts/rc Executable file

@@ -0,0 +1,23 @@
# This file can be sourced in order to directly run commands on
# a batch of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.
pssh () {
HOSTFILE="ips.txt"
[ -f $HOSTFILE ] || {
echo "No hostfile found at $HOSTFILE"
return
}
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
$PSSH -h $HOSTFILE -l ubuntu \
--par 100 \
-O LogLevel=ERROR \
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
"$@"
}
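# Usage sketch: source this next to a tag's ips.txt, then fan commands out
# to the whole batch, e.g.:
#
# source scripts/rc
# pssh 'docker version'
# pssh sudo apt-get -qy install htop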

Some files were not shown because too many files have changed in this diff.