mirror of
https://github.com/jpetazzo/container.training.git
synced 2026-02-14 17:49:59 +00:00
Major rehaul of trainer script (it is now workshopctl)
This commit is contained in:
@@ -10,24 +10,24 @@

- fork/clone the repo
- set the required environment variables for AWS
- create your own settings file from `settings/example.yaml`
-- run `./trainer` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
-- run `./trainer cards` to generate a PDF for printing handouts with each user's host IPs and login info
+- run `./workshopctl` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
+- run `./workshopctl cards` to generate a PDF for printing handouts with each user's host IPs and login info
## Clone/Fork the Repo, and Build the Tools Image

-The Docker Compose file here is used to build an image with all the dependencies to run the `./trainer` commands and optional tools. Each run of the script will check whether you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](trainer#L5).
+The Docker Compose file here is used to build an image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check whether you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).

    $ git clone https://github.com/jpetazzo/orchestration-workshop.git
    $ cd orchestration-workshop/prepare-vms
    $ docker-compose build
-## Preparing to Run `./trainer`
+## Preparing to Run `./workshopctl`

### Required AWS Permissions/Info

- The initial assumption is that you're using a root account. If you'd like to use an IAM user, it will need `AmazonEC2FullAccess` and `IAMReadOnlyAccess`.
-- Using a non-default VPC or Security Group isn't supported out of the box yet, but until then you can [customize the `trainer-cli` script](scripts/trainer-cli#L396-L401).
-- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./trainer opensg`, which opens up all ports.
+- Using a non-default VPC or Security Group isn't supported out of the box yet, so you will have to customize `lib/commands.sh` if you want to change that.
+- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg`, which opens up all ports.

### Required Environment Variables
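The list of variables itself is elided in this hunk; assuming the standard AWS CLI credential variables (the `lib/` scripts reference `AWS_DEFAULT_REGION` throughout), a typical setup looks like this, with placeholder values:

```shell
# Placeholder values -- substitute your own credentials.
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX      # IAM access key ID
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx  # matching secret access key
export AWS_DEFAULT_REGION=eu-west-2                # region used by the lib/aws.sh queries
```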
@@ -37,59 +37,56 @@ The Docker Compose file here is used to build an image with all the dependencies

### Update/copy `settings/example.yaml`

-Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `trainer deploy`, `trainer cards`, etc.
+Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.

-    ./trainer cards 2016-09-28-00-33-bret settings/orchestration.yaml
+    ./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml
-## `./trainer` Usage
+## `./workshopctl` Usage

```
-./trainer <command> [n-instances|tag] [settings/file.yaml]
-
-Core commands:
-start n              Start n instances
-list [TAG]           If a tag is provided, list its VMs. Otherwise, list tags.
-deploy TAG           Deploy all instances with a given tag
-pull-images TAG      Pre-pull docker images. Run only after deploying.
-stop TAG             Stop and delete instances tagged TAG
-
-Extras:
-ips TAG              List all IPs of instances with a given tag (updates ips.txt)
-ids TAG/TOKEN        List all instance IDs with a given tag
-shell                Get a shell in the trainer container
-status TAG           Print information about this tag and its VMs
-tags                 List all tags (per-region)
-retag TAG/TOKEN TAG  Retag instances with a new tag
-
-Beta:
-ami                  Look up Amazon Machine Images
-cards FILE           Generate cards
-opensg               Modify AWS security groups
+workshopctl - the orchestration workshop swiss army knife
+Commands:
+ami          Show the AMI that will be used for deployment
+amis         List Ubuntu AMIs in the current region
+cards        Generate ready-to-print cards for a batch of VMs
+deploy       Install Docker on a bunch of running VMs
+ec2quotas    Check our EC2 quotas (max instances)
+help         Show available commands
+ids          List the instance IDs belonging to a given tag or token
+ips          List the IP addresses of the VMs for a given tag or token
+kube         Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
+list         List available batches in the current region
+opensg       Open the default security group to ALL ingress traffic
+pull_images  Pre-pull a bunch of Docker images
+retag        Apply a new tag to a batch of VMs
+start        Start a batch of VMs
+status       List instance status for a given batch
+stop         Stop (terminate, shutdown, kill, remove, destroy...) instances
+test         Run tests (pre-flight checks) on a batch of VMs
```
-### Summary of What `./trainer` Does For You
+### Summary of What `./workshopctl` Does For You

- Used to manage AWS instances in bulk for you, without needing to use the AWS CLI or GUI.
- Can manage multiple "tags" or groups of instances, which are tracked in `prepare-vms/tags/`.
- Can also create PDF/HTML files for printing student handouts with instance IPs and login info.
-- The `./trainer` script can be executed directly.
+- The `./workshopctl` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard-coded.
### Example Steps to Launch a Batch of Instances for a Workshop

- Export the environment variables needed by the AWS CLI (see **Required Environment Variables** above)
-- Run `./trainer start N` to create `N` EC2 instances
+- Run `./workshopctl start N` to create `N` EC2 instances
  - Your local SSH key will be synced to the instances under the `ubuntu` user
  - AWS instances will be created and tagged based on the date, and their IPs will be stored in `prepare-vms/tags/`
-- Run `./trainer deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
+- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
  - If it errors or times out, you should be able to rerun it
  - This requires a good connection to sustain up to 100 parallel SSH connections (pro tip: run all of these utilities from a dedicated management instance in the same AWS region)
-- Run `./trainer pull-images TAG` to pre-pull a bunch of Docker images to the instances
-- Run `./trainer cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
+- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
+- Run `./workshopctl cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
-- Run `./trainer stop TAG` to terminate the instances.
+- Run `./workshopctl stop TAG` to terminate the instances.
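The steps above can be sketched as a dry-run shell session; `run` only echoes each command, and the tag and settings file name are illustrative (a real tag is printed by `start`):

```shell
# Dry-run sketch of the workshop lifecycle; "run" prints instead of executing.
run() { echo "+ $*"; }

TAG=2016-09-28-00-33-demo   # illustrative; start generates a timestamped tag

run ./workshopctl start 50
run ./workshopctl deploy "$TAG" settings/somefile.yaml
run ./workshopctl pull-images "$TAG"
run ./workshopctl cards "$TAG" settings/somefile.yaml
run ./workshopctl stop "$TAG"
```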
## Other Tools

@@ -133,31 +130,31 @@ If you create new VMs, the symlinked file will be overwritten.

Instances can be deployed manually using the `deploy` command:

-    $ ./trainer deploy TAG settings/somefile.yaml
+    $ ./workshopctl deploy TAG settings/somefile.yaml

The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.

#### Pre-pull images

-    $ ./trainer pull-images TAG
+    $ ./workshopctl pull-images TAG

#### Generate cards

-    $ ./trainer cards TAG settings/somefile.yaml
+    $ ./workshopctl cards TAG settings/somefile.yaml

#### List tags

-    $ ./trainer list
+    $ ./workshopctl list

#### List VMs

-    $ ./trainer list TAG
+    $ ./workshopctl list TAG

This will print a human-friendly list containing some information about each instance.

#### Stop and destroy VMs

-    $ ./trainer stop TAG
+    $ ./workshopctl stop TAG
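The per-tag bookkeeping behind these commands is a directory per tag plus an `ips.txt` symlink at the top level (which is why creating new VMs overwrites the symlinked file). A standalone sketch mimicking what `link_tag` in `lib/commands.sh` does:

```shell
# Mimic link_tag: each tag keeps its own ips.txt; ./ips.txt is a symlink
# pointing at the most recently used tag's file.
workdir=$(mktemp -d)
cd "$workdir"

mkdir -p tags/2016-09-28-00-33-demo
printf '10.0.0.1\n10.0.0.2\n' > tags/2016-09-28-00-33-demo/ips.txt
ln -sf tags/2016-09-28-00-33-demo/ips.txt ips.txt

readlink ips.txt
```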
## ToDo
16
prepare-vms/scripts/aws.sh → prepare-vms/lib/aws.sh
Executable file → Normal file
@@ -1,17 +1,13 @@
 #!/bin/bash

-source scripts/cli.sh

 aws_display_tags(){
     # Print all "Name" tags in our region with their instance count
     echo "[#] [Status] [Token] [Tag]" \
-        | awk '{ printf " %7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
+        | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
     aws ec2 describe-instances \
         --query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
         | tr -d "\r" \
         | awk '{ printf " %-12s %-25s %-25s\n", $1, $2, $3}' \
         | uniq -c \
-        | sort -k 3
+        | sort -k 3 \
+        | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
 }

 aws_get_tokens() {

@@ -48,10 +44,8 @@ aws_display_instances_by_tag() {
     ]"
     )
     if [[ -z $result ]]; then
-        echo "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
+        die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
     else
-        echo "ID State Tags IP Type" \
-            | awk '{ printf "%9s %12s %15s %20s %14s \n", $1, $2, $3, $4, $5}' # column -t -c 70
         echo "$result"
     fi
 }

@@ -94,7 +88,7 @@ aws_kill_instances_by_tag() {
         die "Invalid tag."
     fi

-    echo "Deleting instances with tag $TAG"
+    info "Deleting instances with tag $TAG."

     aws ec2 terminate-instances --instance-ids $IDS \
         | grep ^TERMINATINGINSTANCES
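The counting/formatting pipeline in `aws_display_tags` can be tried on canned input; the sample lines below stand in for the `state token tag` triples that `aws ec2 describe-instances` returns:

```shell
# Fake "state token tag" triples, fed through the same pipeline as above.
out=$(printf '%s\n' \
    'running 2016-09-28-00-33-bret 2016-09-28-00-33-bret' \
    'running 2016-09-28-00-33-bret 2016-09-28-00-33-bret' \
    'stopped 2016-09-27-10-00-jp 2016-09-27-10-00-jp' \
    | awk '{ printf " %-12s %-25s %-25s\n", $1, $2, $3 }' \
    | uniq -c \
    | sort -k 3 \
    | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4 }')
echo "$out"
```

Each distinct state/token/tag line is collapsed by `uniq -c` into a leading instance count, then re-padded into columns.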
76
prepare-vms/lib/cli.sh
Normal file
@@ -0,0 +1,76 @@
# Abort if any error happens, and show the command that caused the error.
_ERR() {
    error "Command $BASH_COMMAND failed (exit status: $?)"
}
set -e
trap _ERR ERR

die() {
    if [ -n "$1" ]; then
        error "$1"
    fi
    exit 1
}

error() {
    echo "[$(red ERROR)] $1"
}

warning() {
    echo "[$(yellow WARNING)] $1"
}

info() {
    echo "[$(green INFO)] $1"
}

# Print a full-width separator.
# If given an argument, it will be printed in the middle of that separator.
# If the argument is longer than the screen width, it will be printed between two separator lines.
sep() {
    if [ -z "$COLUMNS" ]; then
        COLUMNS=80
    fi
    SEP=$(yes = | tr -d "\n" | head -c $[$COLUMNS - 1])
    if [ -z "$1" ]; then
        echo $SEP
    else
        MSGLEN=$(echo "$1" | wc -c)
        if [ $[ $MSGLEN + 4 ] -gt $COLUMNS ]; then
            echo "$SEP"
            echo "$1"
            echo "$SEP"
        else
            LEFTLEN=$[ ($COLUMNS - $MSGLEN - 2) / 2 ]
            RIGHTLEN=$[ $COLUMNS - $MSGLEN - 2 - $LEFTLEN ]
            echo "$(echo $SEP | head -c $LEFTLEN) $1 $(echo $SEP | head -c $RIGHTLEN)"
        fi
    fi
}

need_tag() {
    if [ -z "$1" ]; then
        die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
    fi
}

need_settings() {
    if [ -z "$1" ]; then
        die "Please specify a settings file."
    elif [ ! -f "$1" ]; then
        die "Settings file $1 doesn't exist."
    fi
}

need_ips_file() {
    IPS_FILE=$1
    if [ -z "$IPS_FILE" ]; then
        echo "IPS_FILE not set."
        die
    fi

    if [ ! -s "$IPS_FILE" ]; then
        echo "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
        die
    fi
}
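The centering arithmetic in `sep` can be checked in isolation; this standalone variant pins `COLUMNS` so the output is deterministic, and builds the separator with `printf`/`seq` instead of `yes` (to avoid a SIGPIPE under `set -e`):

```shell
# Standalone sketch of sep() with COLUMNS pinned for deterministic output.
COLUMNS=20
SEP=$(printf '=%.0s' $(seq 1 $((COLUMNS - 1))))    # 19 "=" characters

sep() {
    if [ -z "$1" ]; then
        echo "$SEP"
    else
        MSGLEN=$(echo "$1" | wc -c)                # length incl. trailing newline
        LEFTLEN=$(( (COLUMNS - MSGLEN - 2) / 2 ))
        RIGHTLEN=$(( COLUMNS - MSGLEN - 2 - LEFTLEN ))
        echo "$(echo "$SEP" | head -c $LEFTLEN) $1 $(echo "$SEP" | head -c $RIGHTLEN)"
    fi
}

sep
sep "hello"
```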
15
prepare-vms/lib/colors.sh
Normal file
@@ -0,0 +1,15 @@
bold() {
    echo "$(tput bold)$1$(tput sgr0)"
}

red() {
    echo "$(tput setaf 1)$1$(tput sgr0)"
}

green() {
    echo "$(tput setaf 2)$1$(tput sgr0)"
}

yellow() {
    echo "$(tput setaf 3)$1$(tput sgr0)"
}
537
prepare-vms/lib/commands.sh
Normal file
@@ -0,0 +1,537 @@
export AWS_DEFAULT_OUTPUT=text

HELP=""
_cmd () {
    HELP="$(printf "%s\n%-12s %s\n" "$HELP" "$1" "$2")"
}

_cmd help "Show available commands"
_cmd_help() {
    printf "$(basename $0) - the orchestration workshop swiss army knife\n"
    printf "Commands:"
    printf "%s" "$HELP" | sort
}
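Each `_cmd NAME DESCRIPTION` call only registers a help line; the entry-point script is not shown in this diff, so the dispatcher below (mapping a command word to the matching `_cmd_<name>` function) is an assumption about how it is wired up:

```shell
# Self-contained sketch of the _cmd registration/dispatch pattern.
HELP=""
_cmd() {
    # Register one "name description" help line per command.
    HELP="$(printf "%s\n%-12s %s\n" "$HELP" "$1" "$2")"
}

_cmd hello "Say hello"
_cmd_hello() { echo "hello, $1"; }

# Hypothetical dispatcher (not part of this diff): route a command
# word to the matching _cmd_<name> function.
dispatch() {
    cmd=$1
    shift
    "_cmd_$cmd" "$@"
}

dispatch hello world
```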
_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
    find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}

_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
    find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}

_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd_cards() {
    TAG=$1
    SETTINGS=$2
    need_tag $TAG
    need_settings $SETTINGS

    aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt

    # Remove symlinks to old cards
    rm -f ips.html ips.pdf

    # This will generate two files in the base dir: ips.pdf and ips.html
    python scripts/ips-txt-to-html.py $SETTINGS

    for f in ips.html ips.pdf; do
        # Remove old versions of cards if they exist
        rm -f tags/$TAG/$f

        # Move the generated file and replace it with a symlink
        mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
    done

    info "Cards created. You can view them with:"
    info "xdg-open ips.html ips.pdf (on Linux)"
    info "open ips.html ips.pdf (on macOS)"
}

_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
    TAG=$1
    SETTINGS=$2
    need_tag $TAG
    need_settings $SETTINGS
    link_tag $TAG
    count=$(wc -l < ips.txt)

    # Wait until all hosts are reachable before trying to deploy
    info "Trying to reach $TAG instances..."
    while ! tag_is_reachable $TAG; do
        echo -n "."
        sleep 2
    done
    echo ""

    sep "Deploying tag $TAG"
    pssh -I tee /tmp/settings.yaml < $SETTINGS
    pssh "
        sudo apt-get update &&
        sudo apt-get install -y python-setuptools &&
        sudo easy_install pyyaml"

    # Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
    pssh -I tee /tmp/postprep.py < lib/postprep.py
    pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < ips.txt

    # Install the docker-prompt script
    pssh -I sudo tee /usr/local/bin/docker-prompt < lib/docker-prompt
    pssh sudo chmod +x /usr/local/bin/docker-prompt

    # If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
    pssh "
        sudo -u docker [ -f /home/docker/.ssh/id_rsa ] ||
        ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh |
        sudo -u docker tar -C /home/docker -xf-"

    # If 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
    pssh "
        grep docker@ /home/docker/.ssh/authorized_keys ||
        cat /home/docker/.ssh/id_rsa.pub |
        sudo -u docker tee -a /home/docker/.ssh/authorized_keys"

    # On node1, create and deploy TLS certs using Docker Machine
    # (currently disabled by the leading "true ||")
    true || pssh "
        if grep -q node1 /tmp/node; then
            grep ' node' /etc/hosts |
            xargs -n2 sudo -H -u docker \
                docker-machine create -d generic --generic-ssh-user docker --generic-ip-address
        fi"

    sep "Deployed tag $TAG"
    info "You may want to run one of the following commands:"
    info "$0 kube $TAG"
    info "$0 pull-images $TAG"
    info "$0 cards $TAG $SETTINGS"
}
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {

    # Install packages
    pssh "
        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
        sudo apt-key add - &&
        echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
        sudo tee /etc/apt/sources.list.d/kubernetes.list"
    pssh "
        sudo apt-get update -q &&
        sudo apt-get install -qy kubelet kubeadm kubectl"

    # Work around https://github.com/kubernetes/kubernetes/issues/53356
    pssh "
        if [ ! -f /etc/kubernetes/kubelet.conf ]; then
            sudo systemctl stop kubelet
            sudo rm -rf /var/lib/kubelet/pki
        fi"

    # Initialize the kube master
    pssh "
        if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
            sudo kubeadm init
        fi"

    # Put the kubeconfig in ubuntu's and docker's accounts
    pssh "
        if grep -q node1 /tmp/node; then
            sudo mkdir -p \$HOME/.kube /home/docker/.kube &&
            sudo cp /etc/kubernetes/admin.conf \$HOME/.kube/config &&
            sudo cp /etc/kubernetes/admin.conf /home/docker/.kube/config &&
            sudo chown -R \$(id -u) \$HOME/.kube &&
            sudo chown -R docker /home/docker/.kube
        fi"

    # Get the bootstrap token
    pssh "
        if grep -q node1 /tmp/node; then
            TOKEN_NAME=\$(kubectl -n kube-system get secret -o name | grep bootstrap-token)
            TOKEN_ID=\$(kubectl -n kube-system get \$TOKEN_NAME -o go-template --template '{{ index .data \"token-id\" }}' | base64 -d)
            TOKEN_SECRET=\$(kubectl -n kube-system get \$TOKEN_NAME -o go-template --template '{{ index .data \"token-secret\" }}' | base64 -d)
            echo \$TOKEN_ID.\$TOKEN_SECRET >/tmp/token
        fi"

    # Install Weave as the pod network
    pssh "
        if grep -q node1 /tmp/node; then
            kubever=\$(kubectl version | base64 | tr -d '\n')
            kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
        fi"

    # Join the other nodes to the cluster
    pssh "
        if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
            TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
            sudo kubeadm join --token \$TOKEN node1:6443
        fi"

    sep "Done"
}
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd_ids() {
    TAG=$1
    need_tag $TAG

    info "Looking up by tag:"
    aws_get_instance_ids_by_tag $TAG

    # Just in case we managed to create instances but weren't able to tag them
    info "Looking up by token:"
    aws_get_instance_ids_by_client_token $TAG
}

_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
    TAG=$1
    need_tag $TAG
    mkdir -p tags/$TAG
    aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
    link_tag $TAG
}

_cmd list "List available batches in the current region"
_cmd_list() {
    info "Listing batches in region $AWS_DEFAULT_REGION:"
    aws_display_tags
}

_cmd status "List instance status for a given batch"
_cmd_status() {
    info "Using region $AWS_DEFAULT_REGION."
    TAG=$1
    need_tag $TAG
    describe_tag $TAG
    tag_is_reachable $TAG
    echo "You may be interested in running one of the following commands:"
    echo "$0 ips $TAG"
    echo "$0 deploy $TAG <settings/somefile.yaml>"
}

_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
    aws ec2 authorize-security-group-ingress \
        --group-name default \
        --protocol icmp \
        --port -1 \
        --cidr 0.0.0.0/0

    aws ec2 authorize-security-group-ingress \
        --group-name default \
        --protocol udp \
        --port 0-65535 \
        --cidr 0.0.0.0/0

    aws ec2 authorize-security-group-ingress \
        --group-name default \
        --protocol tcp \
        --port 0-65535 \
        --cidr 0.0.0.0/0
}
_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
    TAG=$1
    need_tag $TAG
    pull_tag $TAG
}

_cmd retag "Apply a new tag to a batch of VMs"
_cmd_retag() {
    OLDTAG=$1
    NEWTAG=$2
    need_tag $OLDTAG
    if [[ -z "$NEWTAG" ]]; then
        die "You must specify a new tag to apply."
    fi
    aws_tag_instances $OLDTAG $NEWTAG
}

_cmd start "Start a batch of VMs"
_cmd_start() {
    # Number of instances to create
    COUNT=$1
    # Optional settings file (to carry on with deployment)
    SETTINGS=$2

    if [ -z "$COUNT" ]; then
        die "Indicate the number of instances to start."
    fi

    # Print our AWS username, to ease the pain of credential juggling
    greet

    # Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
    key_name=$(sync_keys)

    AMI=$(_cmd_ami)    # Retrieve the AWS image ID
    TOKEN=$(get_token) # Generate a timestamp token for this batch of VMs
    AWS_KEY_NAME=$(make_key_name)

    sep "Starting instances"
    info "  Count:     $COUNT"
    info "  Region:    $AWS_DEFAULT_REGION"
    info "  Token/tag: $TOKEN"
    info "  AMI:       $AMI"
    info "  Key name:  $AWS_KEY_NAME"
    result=$(aws ec2 run-instances \
        --key-name $AWS_KEY_NAME \
        --count $COUNT \
        --instance-type t2.medium \
        --client-token $TOKEN \
        --image-id $AMI)
    reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
    info "Reservation ID: $reservation_id"
    sep

    # If instance creation succeeded, we should have some IDs
    IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
    if [ -z "$IDS" ]; then
        die "Instance creation failed."
    fi

    # Tag these new instances with a tag that is the same as the token
    TAG=$TOKEN
    aws_tag_instances $TOKEN $TAG

    wait_until_tag_is_running $TAG $COUNT

    sep
    info "Successfully created $COUNT instances with tag $TAG"
    sep

    mkdir -p tags/$TAG
    IPS=$(aws_get_instance_ips_by_tag $TAG)
    echo "$IPS" > tags/$TAG/ips.txt
    link_tag $TAG
    if [ -n "$SETTINGS" ]; then
        _cmd_deploy $TAG $SETTINGS
    else
        echo "To deploy or kill these instances, run one of the following:"
        echo "$0 deploy $TAG <settings/somefile.yaml>"
        echo "$0 stop $TAG"
    fi
}
_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
    greet

    max_instances=$(aws ec2 describe-account-attributes \
        --attribute-names max-instances \
        --query 'AccountAttributes[*][AttributeValues]')
    info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."

    # Print the list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list.
    # If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
    # Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
    info "Available regions:"
    aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}

_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
    TAG=$1
    need_tag $TAG
    aws_kill_instances_by_tag $TAG
}

_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd_test() {
    TAG=$1
    need_tag $TAG
    test_tag $TAG
}
###

greet() {
    IAMUSER=$(aws iam get-user --query 'User.UserName')
    info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}

deploy_hq() {
    TAG=$1
    need_tag $TAG
    REMOTE_USER=ubuntu
    REMOTE_HOST=$(aws_get_instance_ips_by_tag $TAG)
    echo "Trying to reach $TAG instances..."
    while ! tag_is_reachable $TAG; do
        echo -n "."
        sleep 2
    done
    env | grep -i aws > envvars.sh
    scp \
        -o "UserKnownHostsFile /dev/null" \
        -o "StrictHostKeyChecking=no" \
        scripts/remote-execution.sh \
        envvars.sh \
        $REMOTE_USER@$REMOTE_HOST:/tmp/

    ssh -A $REMOTE_USER@$REMOTE_HOST "bash /tmp/remote-execution.sh >>/tmp/pre.out 2>>/tmp/pre.err"
    ssh -A $REMOTE_USER@$REMOTE_HOST
}

link_tag() {
    TAG=$1
    need_tag $TAG
    IPS_FILE=tags/$TAG/ips.txt
    need_ips_file $IPS_FILE
    ln -sf $IPS_FILE ips.txt
}

pull_tag() {
    TAG=$1
    need_tag $TAG
    link_tag $TAG
    if [ ! -s $IPS_FILE ]; then
        echo "Nonexistent or empty IPs file $IPS_FILE"
    fi

    # Pre-pull a bunch of images
    pssh --timeout 900 'for I in \
        debian:latest \
        ubuntu:latest \
        fedora:latest \
        centos:latest \
        postgres \
        redis \
        training/namer \
        nathanleclaire/redisonrails; do
        sudo -u docker docker pull $I
    done'

    info "Finished pulling images for $TAG."
    info "You may now want to run:"
    info "$0 cards $TAG <settings/somefile.yaml>"
}

wait_until_tag_is_running() {
    max_retry=50
    TAG=$1
    COUNT=$2
    i=0
    done_count=0
    while [[ $done_count -lt $COUNT ]]; do
        let "i += 1"
        info "$(printf "%d/%d instances online" $done_count $COUNT)"
        done_count=$(aws ec2 describe-instances \
            --filters "Name=instance-state-name,Values=running" \
                      "Name=tag:Name,Values=$TAG" \
            --query "Reservations[*].Instances[*].State.Name" \
            | tr "\t" "\n" \
            | wc -l)

        if [[ $i -gt $max_retry ]]; then
            die "Timed out while waiting for instance creation (after $max_retry retries)"
        fi
        sleep 1
    done
}
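`wait_until_tag_is_running` is a bounded polling loop. The same shape in isolation, with the AWS query replaced by a stub that reports ready on the third poll:

```shell
# Bounded polling loop, same shape as wait_until_tag_is_running;
# check_ready stands in for the aws describe-instances query.
attempts=0
check_ready() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]    # pretend all instances are "running" on the 3rd poll
}

max_retry=10
i=0
while ! check_ready; do
    i=$((i + 1))
    if [ "$i" -gt "$max_retry" ]; then
        echo "timed out after $max_retry retries" >&2
        exit 1
    fi
    sleep 1
done
echo "ready after $attempts polls"
```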
tag_is_reachable() {
    TAG=$1
    need_tag $TAG
    link_tag $TAG
    pssh -t 5 true 2>&1 >/dev/null
}

test_tag() {
    TAG=$1
    need_tag $TAG
    ips_file=tags/$TAG/ips.txt
    info "Picking a random IP address in $ips_file to run tests."
    n=$[ 1 + $RANDOM % $(wc -l < $ips_file) ]
    ip=$(head -n $n $ips_file | tail -n 1)
    test_vm $ip
    info "Tests complete."
}

test_vm() {
    ip=$1
    info "Testing instance with IP address $ip."
    user=ubuntu
    errors=""

    for cmd in "hostname" \
               "whoami" \
               "hostname -i" \
               "cat /tmp/node" \
               "cat /tmp/ipv4" \
               "cat /etc/hosts" \
               "hostnamectl status" \
               "docker version | grep Version -B1" \
               "docker-compose version" \
               "docker-machine version" \
               "docker images" \
               "docker ps" \
               "curl --silent localhost:55555" \
               "sudo ls -la /mnt/ | grep docker" \
               "env" \
               "ls -la /home/docker/.ssh"; do
        sep "$cmd"
        echo "$cmd" |
            ssh -A -q \
                -o "UserKnownHostsFile /dev/null" \
                -o "StrictHostKeyChecking=no" \
                $user@$ip sudo -u docker -i \
            || {
                status=$?
                error "$cmd exit status: $status"
                errors="[$status] $cmd\n$errors"
            }
    done
    sep
    if [ -n "$errors" ]; then
        error "The following commands had non-zero exit codes:"
        printf "$errors"
    fi
    info "Test VM was $ip."
}

make_key_name() {
    SHORT_FINGERPRINT=$(ssh-add -l | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
    echo "${SHORT_FINGERPRINT}-${USER}"
}
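The fingerprint munging in `make_key_name` can be checked against a canned `ssh-add -l` line (the fingerprint below is made up, in the colon-separated form the `tr -d :` step expects):

```shell
# Canned "ssh-add -l" output line with a made-up fingerprint.
line="2048 ab:cd:ef:01:23:45:67:89:ab:cd:ef:01:23:45:67:89 /home/me/.ssh/id_rsa (RSA)"

# Same pipeline as make_key_name: take field 2, drop the colons, keep 8 chars.
SHORT_FINGERPRINT=$(echo "$line" | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
echo "${SHORT_FINGERPRINT}-someuser"
```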
sync_keys() {
    # Make sure that "ssh-add -l" lists at least one RSA key
    ssh-add -l | grep -q RSA ||
        die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent, add your keys?"

    AWS_KEY_NAME=$(make_key_name)
    info "Syncing keys..."
    if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
        aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
            --public-key-material "$(ssh-add -L \
                | grep -i RSA \
                | head -n1 \
                | cut -d " " -f 1-2)" &>/dev/null

        if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
            die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
        else
            info "Imported new key $AWS_KEY_NAME."
        fi
    else
        info "Using existing key $AWS_KEY_NAME."
    fi
}

get_token() {
    if [ -z "$USER" ]; then
        export USER=anonymous
    fi
    date +%Y-%m-%d-%H-%M-$USER
}
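`get_token` output doubles as the batch tag, which is why `start` can retag instances by token; the recipe is just a timestamp plus the user name:

```shell
# Same token recipe as get_token, with USER defaulted as in the original.
USER=${USER:-anonymous}
TOKEN=$(date +%Y-%m-%d-%H-%M-$USER)
echo "$TOKEN"
```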
describe_tag() {
    # Display instance details and reachability/status information
    TAG=$1
    need_tag $TAG
    aws_display_instances_by_tag $TAG
    aws_display_instance_statuses_by_tag $TAG
}
21
prepare-vms/lib/docker-prompt
Executable file
@@ -0,0 +1,21 @@
#!/bin/sh
case "$DOCKER_HOST" in
    *:3376)
        echo swarm
        ;;
    *:2376)
        echo $DOCKER_MACHINE_NAME
        ;;
    *:2375)
        echo $DOCKER_MACHINE_NAME
        ;;
    *:55555)
        echo $DOCKER_MACHINE_NAME
        ;;
    "")
        echo local
        ;;
    *)
        echo unknown
        ;;
esac
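The `case` ladder above can be exercised by wrapping the same logic in a function (ports 2375/2376 are the usual plain/TLS Docker daemon ports, 3376 classic Swarm):

```shell
# Wrap the docker-prompt logic in a function for easy testing.
docker_prompt() {
    case "$DOCKER_HOST" in
        *:3376) echo swarm ;;
        *:2376 | *:2375 | *:55555) echo $DOCKER_MACHINE_NAME ;;
        "") echo local ;;
        *) echo unknown ;;
    esac
}

DOCKER_HOST= docker_prompt                                             # -> local
DOCKER_HOST=tcp://node1:3376 docker_prompt                             # -> swarm
DOCKER_HOST=tcp://node1:2376 DOCKER_MACHINE_NAME=node1 docker_prompt   # -> node1
```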
10
prepare-vms/scripts/find-ubuntu-ami.sh → prepare-vms/lib/find-ubuntu-ami.sh
Executable file → Normal file
@@ -1,5 +1,9 @@
 #!/bin/bash
 # borrowed from https://gist.github.com/kirikaza/6627072
+# The original script has been wrapped in a function that invokes a subshell.
+# That way, it can be safely invoked as a function from other scripts.

+find_ubuntu_ami() {
+(

 usage() {
     cat >&2 <<__

@@ -138,5 +142,5 @@ url=http://cloud-images.ubuntu.com/locator/ec2/releasesTable
     fi
 done | column -t -s \|

+)
+}
@@ -1,11 +1,3 @@
pssh -I tee /tmp/settings.yaml < $SETTINGS

pssh "
sudo apt-get update &&
sudo apt-get install -y python-setuptools &&
sudo easy_install pyyaml"

pssh -I tee /tmp/postprep.py <<EOF
#!/usr/bin/env python
import os
import platform
@@ -66,32 +58,6 @@ ipv4 = open("/tmp/ipv4").read()
system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:training | sudo chpasswd")

# Helper for Docker prompt.
system("""sudo tee /usr/local/bin/docker-prompt <<SQRL
#!/bin/sh
case "\\\$DOCKER_HOST" in
*:3376)
echo swarm
;;
*:2376)
echo \\\$DOCKER_MACHINE_NAME
;;
*:2375)
echo \\\$DOCKER_MACHINE_NAME
;;
*:55555)
echo \\\$DOCKER_MACHINE_NAME
;;
"")
echo local
;;
*)
echo unknown
;;
esac
SQRL""")
system("sudo chmod +x /usr/local/bin/docker-prompt")

# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
export PS1='\e[1m\e[31m[\h] \e[32m(\\\$(docker-prompt)) \e[34m\u@{}\e[35m \w\e[0m\n$ '
@@ -180,98 +146,3 @@ while addresses:
FINISH = time.time()
duration = "Initial deployment took {}s".format(str(FINISH - START)[:5])
system("echo {}".format(duration))

EOF

IPS_FILE=ips.txt
if [ ! -s $IPS_FILE ]; then
    echo "ips.txt not found."
    exit 1
fi

pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < $IPS_FILE

# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
pssh "
    sudo -u docker [ -f /home/docker/.ssh/id_rsa ] ||
    ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh |
    sudo -u docker tar -C /home/docker -xf-"

# If 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
pssh "
    grep docker@ /home/docker/.ssh/authorized_keys ||
    cat /home/docker/.ssh/id_rsa.pub |
    sudo -u docker tee -a /home/docker/.ssh/authorized_keys"

# On node1, create and deploy TLS certs using Docker Machine
true || pssh "
    if grep -q node1 /tmp/node; then
        grep ' node' /etc/hosts |
        xargs -n2 sudo -H -u docker \
            docker-machine create -d generic --generic-ssh-user docker --generic-ip-address
    fi"

### Kubernetes cluster setup below ###

_setup_kubernetes_ () {

    # Install packages
    pssh "
        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
        sudo apt-key add - &&
        echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
        sudo tee /etc/apt/sources.list.d/kubernetes.list"
    pssh "
        sudo apt-get update -q &&
        sudo apt-get install -qy kubelet kubeadm kubectl"

    # Work around https://github.com/kubernetes/kubernetes/issues/53356
    pssh "
        if [ ! -f /etc/kubernetes/kubelet.conf ]; then
            sudo systemctl stop kubelet
            sudo rm -rf /var/lib/kubelet/pki
        fi"

    # Initialize kube master
    pssh "
        if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
            sudo kubeadm init
        fi"

    # Put kubeconfig in ubuntu's and docker's accounts
    pssh "
        if grep -q node1 /tmp/node; then
            sudo mkdir -p \$HOME/.kube /home/docker/.kube &&
            sudo cp /etc/kubernetes/admin.conf \$HOME/.kube/config &&
            sudo cp /etc/kubernetes/admin.conf /home/docker/.kube/config &&
            sudo chown -R \$(id -u) \$HOME/.kube &&
            sudo chown -R docker /home/docker/.kube
        fi"

    # Get bootstrap token
    pssh "
        if grep -q node1 /tmp/node; then
            TOKEN_NAME=\$(kubectl -n kube-system get secret -o name | grep bootstrap-token)
            TOKEN_ID=\$(kubectl -n kube-system get \$TOKEN_NAME -o go-template --template '{{ index .data \"token-id\" }}' | base64 -d)
            TOKEN_SECRET=\$(kubectl -n kube-system get \$TOKEN_NAME -o go-template --template '{{ index .data \"token-secret\" }}' | base64 -d)
            echo \$TOKEN_ID.\$TOKEN_SECRET >/tmp/token
        fi"

    # Install weave as the pod network
    pssh "
        if grep -q node1 /tmp/node; then
            kubever=\$(kubectl version | base64 | tr -d '\n')
            kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
        fi"

    # Join the other nodes to the cluster
    pssh "
        if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
            TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
            sudo kubeadm join --token \$TOKEN node1:6443
        fi"

}

# Uncomment the line below to enable Kubernetes provisioning!
#_setup_kubernetes_
0
prepare-vms/scripts/rc → prepare-vms/lib/pssh.sh
Executable file → Normal file
@@ -1,30 +0,0 @@
die () {
    if [ -n "$1" ]; then
        >&2 echo -n $(tput setaf 1)
        >&2 echo -e "$1"
        >&2 echo -n $(tput sgr0)
    fi
    exit 1
}

need_tag() {
    TAG=$1
    if [ -z "$TAG" ]; then
        echo "Please specify a tag or token. Here's the list:"
        aws_display_tags
        die
    fi
}

need_ips_file() {
    IPS_FILE=$1
    if [ -z "$IPS_FILE" ]; then
        echo "IPS_FILE not set."
        die
    fi

    if [ ! -s "$IPS_FILE" ]; then
        echo "IPS_FILE $IPS_FILE not found. Please run: trainer ips <TAG>"
        die
    fi
}
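`die` prints its message in red to stderr and always exits with status 1, which is why `need_tag` can call it with no argument after printing its own explanation. A reduced sketch of the same guard pattern, without the `tput` coloring or the AWS tag listing (the names mirror the helpers above, but the bodies are simplified stand-ins):

```shell
# Simplified stand-ins for die/need_tag; the guarded calls run in
# subshells so the exit 1 doesn't take down this shell.
die() {
    [ -n "$1" ] && >&2 echo "$1"
    exit 1
}

need_tag() {
    if [ -z "$1" ]; then
        die "Please specify a tag or token."
    fi
}

( need_tag "" ) 2>/dev/null        # missing tag: die fires, subshell exits 1
missing=$?
( need_tag 2017-01-01-jp )         # tag present: guard passes
present=$?
echo "missing=$missing present=$present"   # prints "missing=1 present=0"
```

Because `die` exits rather than returns, every command in the CLI can lead with a one-line `need_tag $TAG` and assume a valid tag afterwards.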
@@ -1,15 +0,0 @@
bold() {
    msg=$1
    echo "$(tput bold)$1$(tput sgr0)"
}

green() {
    msg=$1
    echo "$(tput setaf 2)$1$(tput sgr0)"
}

yellow() {
    msg=$1
    echo "$(tput setaf 3)$1$(tput sgr0)"
}
@@ -1,488 +0,0 @@
#!/bin/bash
# Don't execute this script directly. Use ../trainer instead.

set -e # if we encounter an error, abort

export AWS_DEFAULT_OUTPUT=text

greet() {
    hello=$(aws iam get-user --query 'User.UserName')
    echo "Greetings, $hello/${USER}!"
}

deploy_hq() {
    TAG=$1
    need_tag $TAG
    REMOTE_USER=ubuntu
    REMOTE_HOST=$(aws_get_instance_ips_by_tag $TAG)
    echo "Trying to reach $TAG instances..."
    while ! tag_is_reachable $TAG; do
        echo -n "."
        sleep 2
    done
    env | grep -i aws > envvars.sh
    scp \
        -o "UserKnownHostsFile /dev/null" \
        -o "StrictHostKeyChecking=no" \
        scripts/remote-execution.sh \
        envvars.sh \
        $REMOTE_USER@$REMOTE_HOST:/tmp/

    ssh -A $REMOTE_USER@$REMOTE_HOST "bash /tmp/remote-execution.sh >>/tmp/pre.out 2>>/tmp/pre.err"
    ssh -A $REMOTE_USER@$REMOTE_HOST
}

deploy_tag() {
    TAG=$1
    SETTINGS=$2
    need_tag $TAG
    link_tag $TAG

    count=$(wc -l < ips.txt)

    # Wait until all hosts are reachable before trying to deploy
    echo "Trying to reach $TAG instances..."
    while ! tag_is_reachable $TAG; do
        echo -n "."
        sleep 2
    done

    echo "[[ Deploying tag $TAG ]]"
    export SETTINGS
    source scripts/postprep.rc
    echo "Finished deploying $TAG."
    echo "You may want to run one of the following commands:"
    echo "./trainer pull-images $TAG"
    echo "./trainer cards $TAG <settings/somefile.yaml>"
}

link_tag() {
    TAG=$1
    need_tag $TAG
    IPS_FILE=tags/$TAG/ips.txt
    need_ips_file $IPS_FILE
    ln -sf $IPS_FILE ips.txt
}
pull_tag() {
    TAG=$1
    need_tag $TAG
    link_tag $TAG
    if [ ! -s $IPS_FILE ]; then
        echo "Nonexistent or empty IPs file $IPS_FILE"
    fi

    # Pre-pull a bunch of images
    pssh --timeout 900 'for I in \
        debian:latest \
        ubuntu:latest \
        fedora:latest \
        centos:latest \
        postgres \
        redis \
        training/namer \
        nathanleclaire/redisonrails; do
        sudo -u docker docker pull $I
    done'

    echo "Finished pulling images for $TAG"

    echo "You may now want to run:"
    echo "./trainer cards $TAG <settings/somefile.yaml>"
}

wait_until_tag_is_running() {
    max_retry=50
    TAG=$1
    COUNT=$2
    i=0
    done_count=0
    while [[ $done_count -lt $COUNT ]]; do
        let "i += 1"
        echo "Waiting: $done_count/$COUNT instances online"
        done_count=$(aws ec2 describe-instances \
            --filters "Name=instance-state-name,Values=running" \
            "Name=tag:Name,Values=$TAG" \
            --query "Reservations[*].Instances[*].State.Name" \
            | tr "\t" "\n" \
            | wc -l)

        if [[ $i -gt $max_retry ]]; then
            die "Timed out while waiting for instance creation (after $max_retry retries)"
        fi
        sleep 1
    done
}

tag_is_reachable() {
    TAG=$1
    need_tag $TAG
    link_tag $TAG
    pssh -t 5 true >/dev/null 2>&1
}

test_tag() {
    ips_file=tags/$TAG/ips.txt
    echo "Using random IP in $ips_file to run tests on $TAG"
    ip=$(shuf -n 1 $ips_file)
    test_vm $ip
    echo "Tests complete. You may want to run one of the following commands:"
    echo "./trainer cards $TAG <settings/somefile.yaml>"
}
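`wait_until_tag_is_running` is a poll-with-retry-cap loop: re-query the running-instance count every second, and bail out via `die` after `max_retry` attempts. The same skeleton, with the `aws ec2 describe-instances` query replaced by a hypothetical stub so it can run offline (the stub simply reports all instances running on the third poll):

```shell
max_retry=50
COUNT=5
i=0
done_count=0
while [ "$done_count" -lt "$COUNT" ]; do
    i=$((i + 1))
    # Stub for the aws ec2 describe-instances | wc -l pipeline:
    # pretend all 5 instances come online on the third poll.
    if [ "$i" -ge 3 ]; then done_count=5; else done_count=0; fi
    if [ "$i" -gt "$max_retry" ]; then
        echo "Timed out after $max_retry retries" >&2
        exit 1
    fi
    # (the real loop sleeps 1 second here between polls)
done
echo "ready after $i polls"   # prints "ready after 3 polls"
```

The retry cap matters: without it, a batch that never reaches the requested count (for example, because the account hit its instance limit) would hang the deploy forever.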
test_vm() {
    ip=$1
    echo "[[ Testing instance with IP $(tput bold)$ip $(tput sgr0) ]]"
    user=ubuntu

    for cmd in "hostname" \
        "whoami" \
        "hostname -i" \
        "cat /tmp/node" \
        "cat /tmp/ipv4" \
        "cat /etc/hosts" \
        "hostnamectl status" \
        "docker version | grep Version -B1" \
        "docker-compose version" \
        "docker-machine version" \
        "docker images" \
        "docker ps" \
        "curl --silent localhost:55555" \
        "sudo ls -la /mnt/ | grep docker" \
        "env" \
        "ls -la /home/docker/.ssh"; do
        echo "=== $cmd ==="
        echo "$cmd" |
            ssh -A -q \
                -o "UserKnownHostsFile /dev/null" \
                -o "StrictHostKeyChecking=no" \
                $user@$ip sudo -u docker -i
        echo
    done
}

make_key_name() {
    SHORT_FINGERPRINT=$(ssh-add -l | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
    echo "${SHORT_FINGERPRINT}-${USER}"
}

sync_keys() {
    # Make sure ssh-add -l contains "RSA"
    ssh-add -l | grep -q RSA ||
        die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent, add your keys?"

    AWS_KEY_NAME=$(make_key_name)
    echo -n "Syncing keys... "
    if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &> /dev/null; then
        aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
            --public-key-material "$(ssh-add -L \
                | grep -i RSA \
                | head -n1 \
                | cut -d " " -f 1-2)" &> /dev/null

        if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &> /dev/null; then
            die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
        else
            echo "Imported new key $AWS_KEY_NAME."
        fi
    else
        echo "Using existing key $AWS_KEY_NAME."
    fi
}

suggest_amis() {
    scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}

get_token() {
    if [ -z "$USER" ]; then
        export USER=anonymous
    fi
    date +%Y-%m-%d-%H-%M-$USER
}

get_ami() {
    suggest_amis | head -1
}
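`get_token` just timestamps the batch: the token doubles as the default instance tag, so tags sort chronologically and record who started the batch. The recipe and a sanity check on its shape:

```shell
# Same recipe as get_token: minute-resolution timestamp plus username.
USER=${USER:-anonymous}
token=$(date +%Y-%m-%d-%H-%M-$USER)
echo "$token"   # e.g. 2017-10-22-18-35-jpetazzo

# Tokens are plain strings, so their shape can be checked with a glob:
case "$token" in
    [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-*) ok=1 ;;
    *) ok=0 ;;
esac
```

Lexicographic order equals chronological order for this format, which is why `list` and the `tags/` directory need no extra bookkeeping to show batches in creation order.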
make_cards() {
    # Generate cards for a given tag
    TAG=$1
    SETTINGS_FILE=$2
    [[ -z "$SETTINGS_FILE" ]] && {
        echo "Please specify the settings file you want to use."
        echo "e.g.: settings/orchestration.yaml"
        exit 1
    }
    aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt

    # Remove symlinks to old cards
    rm -f ips.html ips.pdf

    # This will generate two files in the base dir: ips.pdf and ips.html
    python scripts/ips-txt-to-html.py $SETTINGS_FILE

    for f in ips.html ips.pdf; do
        # Remove old versions of cards if they exist
        rm -f tags/$TAG/$f

        # Move the generated file and replace it with a symlink
        mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
    done

    echo "Cards created. You may want to run:"
    echo "chromium ips.html"
    echo "chromium ips.pdf"
}

describe_tag() {
    # Display instance details and reachability/status information
    TAG=$1
    need_tag $TAG
    echo "============= Tag: $TAG ============="
    aws_display_instances_by_tag $TAG
    aws_display_instance_statuses_by_tag $TAG
}
run_cli() {
    case "$1" in
    ami)
        # A wrapper for scripts/find-ubuntu-ami.sh
        shift
        scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION $*
        echo
        echo "Protip:"
        echo "./trainer ami -a amd64 -v 16.04 -t hvm:ebs -N | grep -v ^REGION | cut -d\" \" -f15"
        echo
        echo "Suggestions:"
        suggest_amis
        ;;
    cards)
        TAG=$2
        need_tag $TAG
        make_cards $TAG $3
        ;;
    deploy)
        TAG=$2
        need_tag $TAG
        if [[ $TAG == *"-hq"* ]]; then
            echo "Deploying HQ"
            deploy_hq $TAG
        else
            SETTINGS=$3
            if [[ -z "$SETTINGS" ]]; then
                echo "Please specify a settings file."
                exit 1
            fi
            if ! [[ -f "$SETTINGS" ]]; then
                echo "Settings file $SETTINGS not found."
                exit 1
            fi
            echo "Deploying with settings $SETTINGS."
            deploy_tag $TAG $SETTINGS
        fi
        ;;
    ids)
        TAG=$2
        need_tag $TAG
        IDS=$(aws_get_instance_ids_by_tag $TAG)
        echo "$IDS"

        # Just in case we managed to create instances but weren't able to tag them
        echo "Lookup by client token $TAG:"
        IDS=$(aws_get_instance_ids_by_client_token $TAG)
        echo "$IDS"
        ;;
    ips)
        TAG=$2
        need_tag $TAG
        mkdir -p tags/$TAG
        aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
        link_tag $TAG
        ;;
    list)
        # List existing instances in a given batch.
        # To list batches, see the "tags" command.
        echo "Using region $AWS_DEFAULT_REGION."
        TAG=$2
        need_tag $TAG
        describe_tag $TAG
        tag_is_reachable $TAG
        echo "You may be interested in running one of the following commands:"
        echo "./trainer ips $TAG"
        echo "./trainer deploy $TAG <settings/somefile.yaml>"
        ;;
    opensg)
        aws ec2 authorize-security-group-ingress \
            --group-name default \
            --protocol icmp \
            --port -1 \
            --cidr 0.0.0.0/0

        aws ec2 authorize-security-group-ingress \
            --group-name default \
            --protocol udp \
            --port 0-65535 \
            --cidr 0.0.0.0/0

        aws ec2 authorize-security-group-ingress \
            --group-name default \
            --protocol tcp \
            --port 0-65535 \
            --cidr 0.0.0.0/0
        ;;
    pull-images)
        TAG=$2
        need_tag $TAG
        pull_tag $TAG
        ;;
    retag)
        if [[ -z "$2" ]] || [[ -z "$3" ]]; then
            die "Please specify old tag/token, and new tag."
        fi
        aws_tag_instances $2 $3
        ;;
    shell)
        # Get a shell in the container
        export PS1="trainer@$AWS_DEFAULT_REGION# "
        exec $SHELL
        ;;
    start)
        # Create $2 instances
        COUNT=$2
        # (and optionally carry on with deployment using the specified settings file)
        SETTINGS=$3

        if [ -z "$COUNT" ]; then
            die "Indicate number of instances to start."
        fi

        greet                 # Print our AWS username, to ease the pain of credential-juggling
        key_name=$(sync_keys) # Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
        AMI=$(get_ami)        # Retrieve the AWS image ID
        TOKEN=$(get_token)    # Generate a timestamp token for this batch of VMs
        if [ ! -z "$3" ]; then
            # If an extra arg is present, append it to the tag
            TOKEN=$TOKEN-$3
        fi

        echo "-----------------------------------"
        echo "Starting $COUNT instances:"
        echo "     Region: $AWS_DEFAULT_REGION"
        echo "  Token/tag: $TOKEN"
        echo "        AMI: $AMI"

        AWS_KEY_NAME=$(make_key_name)
        result=$(aws ec2 run-instances \
            --key-name $AWS_KEY_NAME \
            --count $2 \
            --instance-type t2.medium \
            --client-token $TOKEN \
            --image-id $AMI)
        reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
        echo "      Key name: $AWS_KEY_NAME"
        echo "Reservation ID: $reservation_id"
        echo "-----------------------------------"

        # If instance creation succeeded, we should have some IDs
        IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
        if [ -z "$IDS" ]; then
            die "Instance creation failed."
        fi

        # Tag these new instances with a tag that is the same as the token
        TAG=$TOKEN
        aws_tag_instances $TOKEN $TAG

        wait_until_tag_is_running $TAG $COUNT

        echo "[-------------------------------------------------------------------------------------]"
        echo " Successfully created $2 instances with tag: $TAG"
        echo "[-------------------------------------------------------------------------------------]"

        mkdir -p tags/$TAG
        IPS=$(aws_get_instance_ips_by_tag $TAG)
        echo "$IPS" > tags/$TAG/ips.txt
        link_tag $TAG
        if [ -n "$SETTINGS" ]; then
            deploy_tag $TOKEN $SETTINGS
        else
            echo "To deploy or kill these instances, run one of the following:"
            echo "./trainer deploy $TAG <settings/somefile.yaml>"
            echo "./trainer list $TAG"
        fi
        ;;
    status)
        greet && echo

        max_instances=$(aws ec2 describe-account-attributes \
            --attribute-names max-instances \
            --query 'AccountAttributes[*][AttributeValues]')
        echo "Max instances: $max_instances" && echo

        # Print the list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION).
        # If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
        # Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
        echo "Region:"
        aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
        ;;
    stop)
        TAG=$2
        need_tag $TAG
        aws_kill_instances_by_tag $TAG
        ;;
    tag)
        # Add a tag to a batch of VMs
        TAG=$2
        NEW_TAG_KEY=$3
        NEW_TAG_VALUE=$4
        need_tag $TAG
        need_tag $NEW_TAG_KEY
        need_tag $NEW_TAG_VALUE
        ;;
    test)
        TAG=$2
        need_tag $TAG
        test_tag $TAG
        ;;
    *)
        echo "
./trainer <command> [n-instances|tag] [settings/file.yaml]

Core commands:
start n              Start n instances
list [TAG]           If a tag is provided, list its VMs. Otherwise, list tags.
deploy TAG           Deploy all instances with a given tag
pull-images TAG      Pre-pull docker images. Run only after deploying.
stop TAG             Stop and delete instances tagged TAG

Extras:
ips TAG              List all IPs of instances with a given tag (updates ips.txt)
ids TAG/TOKEN        List all instance IDs with a given tag
shell                Get a shell in the trainer container
status TAG           Print information about this tag and its VMs
tags                 List all tags (per-region)
retag TAG/TOKEN TAG  Retag instances with a new tag

Beta:
ami                  Look up Amazon Machine Images
cards FILE           Generate cards
opensg               Modify AWS security groups
"
        ;;
    esac
}

(
    cd $SCRIPT_DIR
    source scripts/cli.sh
    source scripts/aws.sh
    source scripts/rc
    source scripts/colors.sh
    mkdir -p tags
    # TODO: unset empty envvars
    run_cli "$@"
)
@@ -1,80 +0,0 @@
#!/bin/bash

TRAINER_IMAGE="preparevms_prepare-vms"

DEPENDENCIES="
aws
ssh
curl
jq
pssh
wkhtmltopdf
man
"

ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"

check_envvars() {
    STATUS=0
    for envvar in $ENVVARS; do
        if [ -z "${!envvar}" ]; then
            echo "Please set environment variable $envvar."
            STATUS=1
            unset $envvar
        fi
    done
    return $STATUS
}

check_dependencies() {
    STATUS=0
    for dependency in $DEPENDENCIES; do
        if ! command -v $dependency >/dev/null; then
            echo "Could not find dependency $dependency."
            STATUS=1
        fi
    done
    return $STATUS
}

check_ssh_auth_sock() {
    if [ -z "$SSH_AUTH_SOCK" ]; then
        echo -n "SSH_AUTH_SOCK envvar not set, so its parent directory can't be "
        echo "mounted as a volume in a container."
        echo "Try running the command below and trying again:"
        echo "eval \$(ssh-agent) && ssh-add"
        exit 1
    fi
}

check_image() {
    docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}

# Get the script's real directory, whether we're being called directly or via a symlink
if [ -L "$0" ]; then
    export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
    export SCRIPT_DIR=$(dirname "$0")
fi
cd "$SCRIPT_DIR"

check_envvars || exit 1

if check_dependencies; then
    scripts/trainer-cli "$@"
elif check_image; then
    check_ssh_auth_sock
    export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
    docker-compose run prepare-vms "$@"
else
    echo "Some dependencies are missing, and docker image $TRAINER_IMAGE doesn't exist locally."
    echo "Please do one of the following:"
    echo "- run \`docker-compose build\`"
    echo "- install missing dependencies"
fi
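`check_envvars` relies on bash indirect expansion: `${!envvar}` expands to the value of the variable whose *name* is stored in `envvar`, which is what lets one loop validate a whole list of variable names. A minimal bash demonstration (the region value is just an example):

```shell
#!/bin/bash
# ${!name} is bash indirection: expand the variable whose name is in $name.
AWS_DEFAULT_REGION=eu-west-1
unset AWS_ACCESS_KEY_ID

missing=""
for envvar in AWS_ACCESS_KEY_ID AWS_DEFAULT_REGION; do
    if [ -z "${!envvar}" ]; then
        missing="$missing $envvar"
    fi
done
echo "missing:$missing"   # prints "missing: AWS_ACCESS_KEY_ID"
```

Note that this is a bashism; under plain `sh` the same check would need `eval` or `env | grep`.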
82
prepare-vms/workshopctl
Executable file
@@ -0,0 +1,82 @@
#!/bin/bash

# Get the script's real directory, whether we're being called directly or via a symlink
if [ -L "$0" ]; then
    export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
    export SCRIPT_DIR=$(dirname "$0")
fi

# Load all scriptlets
cd "$SCRIPT_DIR"
for lib in lib/*.sh; do
    . $lib
done

TRAINER_IMAGE="preparevms_prepare-vms"

DEPENDENCIES="
aws
ssh
curl
jq
pssh
wkhtmltopdf
man
"

ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"

check_envvars() {
    status=0
    for envvar in $ENVVARS; do
        if [ -z "${!envvar}" ]; then
            error "Environment variable $envvar is not set."
            if [ "$envvar" = "SSH_AUTH_SOCK" ]; then
                error "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
            fi
            status=1
        fi
    done
    return $status
}

check_dependencies() {
    status=0
    for dependency in $DEPENDENCIES; do
        if ! command -v $dependency >/dev/null; then
            warning "Dependency $dependency could not be found."
            status=1
        fi
    done
    return $status
}

check_image() {
    docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}

check_envvars ||
    die "Please set all required environment variables."

check_dependencies ||
    warning "At least one dependency is missing. Install it or try the image wrapper."

# Now check which command was invoked and execute it
if [ "$1" ]; then
    cmd="$1"
    shift
else
    cmd=help
fi
fun=_cmd_$cmd
type -t $fun | grep -q function || die "Invalid command: $cmd"
$fun "$@"

# export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
# docker-compose run prepare-vms "$@"
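The dispatch at the bottom is the whole plugin mechanism of the new `workshopctl`: any scriptlet in `lib/` that defines a function named `_cmd_<something>` automatically becomes a subcommand, and `type -t` verifies the name resolves to a function before calling it. A sketch of the pattern with a hypothetical `_cmd_hello` (not a real workshopctl command):

```shell
#!/bin/bash
die() { >&2 echo "$1"; exit 1; }

# A scriptlet only needs to define _cmd_<name> to add a subcommand.
_cmd_hello() {
    echo "hello, ${1-world}"
}

cmd=hello
fun=_cmd_$cmd
# type -t prints "function" when the name resolves to a shell function.
type -t $fun | grep -q function || die "Invalid command: $cmd"
out=$($fun trainees)
echo "$out"   # prints "hello, trainees"
```

An unknown command falls through `grep -q function` and hits `die "Invalid command: ..."`, so the dispatcher never blindly executes an arbitrary string.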